I am creating a .pptx file using Aspose.Slides. I am trying to embed a font using Aspose, but it's not working because of some issues, so I am searching for an alternative way to achieve the same functionality.
I want to embed my custom font in a .pptx file. Can you please provide suggestions for embedding fonts inside MS PowerPoint using Apache POI or another library?
Please don't give answers that rely on fonts installed on the local system.
As stated above, you can do it easily without Apache.
Official site
You can view/add/delete fonts by accessing C:\Windows\Fonts
@Bhagyashree,
Aspose.Slides allows you to embed fonts inside a presentation. Please try the following sample code on your end to serve the purpose.
String dataDir = Utils.getDataDir(AddEmbeddedFonts.class);
Presentation pres = new Presentation(dataDir + ""); // file name left blank in the original snippet

IFontData[] allFonts = pres.getFontsManager().getFonts();
IFontData[] embeddedFonts = pres.getFontsManager().getEmbeddedFonts();

// Embed every font used by the presentation that is not embedded yet.
// except(...) is a small helper (not part of the Aspose API) returning the
// entries of allFonts that do not appear in embeddedFonts.
for (IFontData font : except(allFonts, embeddedFonts))
{
    pres.getFontsManager().addEmbeddedFont(font, EmbedFontCharacters.All);
}

pres.save("saved.pptx", SaveFormat.Pptx);
Please feel free to share if any issue occurs on your end.
I am working as Support developer/ Evangelist at Aspose.
With POI 4.1.0 (which will be released by approx. Feb. 2019), I've added font embedding capabilities to Apache POI too. The provided methods are only half the story, as you can't simply add .ttf/.otf files.
For the conversion of TrueType (.ttf) or OpenType (.otf) fonts to Office-compatible EOT/MTX fonts, I'm using sfntly. The sfntly classes aren't provided as Maven artifacts yet, and I don't want to import the whole chunk into POI, nor release my repackaged version of Google's code under my name, therefore you need to clone and adapt my example project.
For adding a MTX font stream to a slideshow (HSLF or XSLF, i.e. SlideShow is their common interface) you would call:
org.apache.poi.sl.usermodel.SlideShow.addFont(InputStream fontData)
For font subsetting, you would need the used codepoints, which can be extracted by:
org.apache.poi.sl.extractor.SlideShowExtractor.getCodepoints(String typeface, Boolean italic, Boolean bold)
For getting information about a MTX font data stream, there's a new helper class:
org.apache.poi.common.usermodel.fonts.FontHeader
I don't know how to change it with Apache POI, but you can easily change it with PowerPoint ...
On my Angular 11 app I have a ckEditor component. I want it to allow uploading images from the user's filesystem (and not just through links), which requires an upload adapter. I customized my ckEditor build through https://ckeditor.com/ckeditor-5/online-builder/ and included the Image, ImageInsert, ImageUpload and Base64UploadAdapter plugins (among many others).
I tried including the SimpleUploadAdapter plugin, but it doesn't seem to be recognized on my system, giving me an error-filerepository-no-upload-adapter error, just as if I hadn't included the plugin.
Therefore I included the Base64UploadAdapter, and this does encode images into my ckeditor data to be uploaded to the server (which is fine).
The problem is that the base64 encodings are not valid. ckeditor.js inserts spaces instead of +, for example, and then the image becomes distorted or doesn't display at all, giving an ERR_INVALID_URL. Which makes me think the data is not URI-encoded.
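For what it's worth, if the only corruption really is spaces substituted for +, a stop-gap repair could look like this (a sketch; repairBase64DataUri is a hypothetical helper name):

```javascript
// Stop-gap repair, assuming the ONLY corruption is ' ' substituted for '+'.
// The Base64 alphabet (A-Z, a-z, 0-9, '+', '/', '=') never contains spaces,
// so replacing every space with '+' cannot clobber valid characters.
const repairBase64DataUri = (uri) => uri.replace(/ /g, '+');

console.log(repairBase64DataUri('data:image/png;base64,iVBOR w0K'));
// 'data:image/png;base64,iVBOR+w0K'
```

This treats the symptom, not the cause, so I'd still like to know where the spaces come from.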
In the ckeditor.js file I do see references to encodeURIComponent(), such as (piece of code from within large blob):
sourceMappingURL=data:application/json;base64,".concat(btoa(unescape(encodeURIComponent(JSON.stringify(r)
The strange thing is that in CkEditor's demo at https://ckeditor.com/docs/ckeditor5/latest/features/image-upload/base64-upload-adapter.html#demo , dropping an image into THEIR editor and inspecting it in the developer tools, it is encoded properly. (I copied and pasted the resulting encoding into the browser and the image appeared all right.)
I searched Google and StackOverflow, no one seems to have this particular issue.
I cleared the chrome cache and reinstalled the node_modules, didn't help.
I don't see particular configurations for the Base64UploadAdapter plugin (except limiting the user from uploading certain types of files, and since by default this plugin accepts various file types, limiting the user is not the answer here).
One thing I want to add: this is probably not related to the encoding issue, but just in case it is. The online builder came with a build folder (which I refer to in my code), but also with additional files that I did not touch or reference. I wonder if there might be code there that I need to build or refer to:
a package.json file, including dependencies. I did not install them because they cause duplicate installations in my node_modules. Is there a way to install only the unique ones, i.e. only dependencies that aren't currently configured?
a webpack.config.js, which I did not refer to at all (maybe it is picked up by the system, I don't know).
This is my my-component.html code:
<div>
<ckeditor [config]="config" [editor]="editor" [(ngModel)]="data"></ckeditor>
</div>
and in my-component.ts:
import * as Editor from 'lib/assets/ckeditor64/build/ckeditor';

export class MyComponent {
  editor = Editor;
  config = {
    toolbar: {
      items: ['imageUpload', 'imageInsert']
    },
    image: {
      toolbar: [
        'imageStyle:full',
        'imageStyle:side',
        '|',
        'imageTextAlternative'
      ]
    }
  };
  data: string = '';
}
In short, I want to know why it might not be encoding my images correctly into Base64 and if there is possibly a workaround within the Angular framework that doesn't involve integrating my own upload adapter (since I am working with CkEditor's plugin itself anyway).
If there is other code I should include, please tell me.
Thank you very much in advance.
My current process is as follows:
Current process
I add the i18n attributes to the template.
Then I execute ng xi18n. This creates the messages.xlf file.
As soon as the process is finished, you copy messages.xlf and rename the copy to messages.fr.xlf. In the renamed file you now manually search for all <source> tags and add the corresponding <target> tags with the translation.
If there are many different languages, this process is very time-consuming.
Problem
The problems here are the missing versioning of the translations and especially the manual adding of the <target> tags to the corresponding <source> tags.
Desired workflow
It would be desirable to have a workflow where versioning is possible and above all, the desired translation files are created automatically.
Would webpack be the right approach to solve this problem?
I just abandoned the built-in translation and preferred the ngx-translate module.
With it you can:
- auto-extract translation strings from the source code
- build one app that contains all locales
- change locale during run time
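To illustrate the last point, the run-time locale switching idea can be sketched in plain JavaScript. This is NOT ngx-translate's real implementation (the actual library loads per-locale JSON files such as assets/i18n/en.json and exposes its own TranslateService); it just shows the pattern:

```javascript
// Toy sketch of run-time locale switching: one app bundle, all locales loaded,
// and the active locale is swapped without rebuilding or reloading the app.
const translations = {
  en: { GREETING: 'Hello' },
  fr: { GREETING: 'Bonjour' },
};

class TranslateService {
  constructor(locale = 'en') { this.locale = locale; }
  use(locale) { this.locale = locale; }                              // switch locale at run time
  instant(key) { return translations[this.locale]?.[key] ?? key; }   // fall back to the key itself
}

const t = new TranslateService();
t.use('fr');
console.log(t.instant('GREETING')); // 'Bonjour'
```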
If you're in an IDE that supports regex searches, then you can use regexes for search and replace, such as
search: (\s*)(<source>([^<]*)</source>)
replace: $1$2$1<target>$3</target>
This keeps each <source> and adds a <target> with the same text right after it.
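The same search-and-replace can also be scripted, for example with Node.js. This is a sketch assuming the simple case where <source> elements contain plain text (no nested markup); addTargets is a made-up helper name:

```javascript
// Copy each <source> into a sibling <target> in an .xlf file.
// Group 1: leading whitespace, group 2: the whole <source> element,
// group 3: the element's text content.
const addTargets = (xlf) =>
  xlf.replace(
    /(\s*)(<source>([^<]*)<\/source>)/g,
    (match, ws, sourceTag, text) => `${ws}${sourceTag}${ws}<target>${text}</target>`
  );

console.log(addTargets('  <source>Hello</source>'));
// '  <source>Hello</source>  <target>Hello</target>'
```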
And about updating the translations: you are supposed to have a versioning tool that will highlight the changes in the file. All you have to do is keep it updated with every commit you make that involves this file.
I agree that this isn't practical, as is the fact that you can't translate TypeScript code. But these are workarounds that you can use in the meantime to ease your life.
I was looking for a CDN to link to for FontAwesome.
Their website instead provides a .js link (rather than using, for instance, this Open Source CDN I found).
What does that script do? Does it check the link (or maybe try several) to the CDN?
If you are using the free version of font-awesome, use cdnjs.
For pro users, Setup Webfont with CDN will provide insights on how to setup CDN with pro.fontawesome.com
#### Old Answer:
use.fontawesome.com is Font Awesome's own CDN.
FontAwesome has its own paid option and analytics on usage provided by the CDN, among many other features. That's why they prefer their own CDN for the end users.
From deobfuscating and quickly skimming through the JS file, it looks like it is a "one and done" type of solution, meaning:
It loads the necessary CSS, sets the font type for the icons, and also does some sort of reporting on who is using their stuff.
It also looks like it might bind their icons to the use of fa within a class.
There doesn't really seem to be an obvious advantage to using the .js file over the CDN.
If you inspect the script file you get from the embed code, it starts off with the following:
window.FontAwesomeCdnConfig = {
    autoA11y: {
        enabled: true
    },
    asyncLoading: {
        enabled: true
    },
    reporting: {
        enabled: true,
        domains: "localhost, *.dev"
    },
    useUrl: "use.fontawesome.com",
    faCdnUrl: "https://cdn.fontawesome.com:443",
    code: "5083f6dc23"
};
After which it simply loads the files from the CDN. This obviously means that there's (even if minuscule) extra overhead. So what's really going on here?
There are two good candidates for why FA is picking this approach:
Harvesting e-mails: they have some paid products and wouldn't it be just great if they could e-mail people who are already interested in similar products about them?
Statistics: each generated script has a seemingly unique code which can be used to keep track of who uses how much of their bandwidth.
It seems that using the .js file allows additional features such as asynchronous loading and automatic accessibility. It would not surprise me if they also do more tracking. Asynchronous loading means that the apparent overhead is actually less.
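The asynchronous loading mentioned above boils down to injecting a <link> element from script, so the CSS downloads without blocking rendering. A minimal sketch of the idea (NOT FontAwesome's actual code; a tiny stand-in document object lets it run outside a browser):

```javascript
// Stand-in for the browser's document object, for illustration only:
const doc = {
  head: { children: [], appendChild(el) { this.children.push(el); } },
  createElement: (tag) => ({ tagName: tag }),
};

// Inject a stylesheet <link> so the CSS loads without blocking the parser.
function loadCss(d, href) {
  const link = d.createElement('link');
  link.rel = 'stylesheet';
  link.href = href;
  d.head.appendChild(link);   // in a browser: document.head.appendChild(link)
  return link;
}

const link = loadCss(doc, 'https://example.com/font-awesome.min.css');
console.log(link.rel, doc.head.children.length); // 'stylesheet' 1
```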
I came here because I was appalled at the number of scripts FontAwesome loaded into my web page (as well as its insisting on being placed above the fold).
I have since used this:
http://opensource.keycdn.com/fontawesome/4.6.3/font-awesome.min.css
which, of course, will need keeping up to date, and whereas one can appreciate FontAwesome's need for a revenue stream, I would have preferred them to be more up front about the number of calls involved.
There is an option to use the fontawesome CDN as a CSS file, perhaps that addresses some of the issues.
I have just upgraded to Grails 2.4 and am using the Asset-Pipeline 1.8.7 plugin. I am wondering how to access images from JavaScript. I am using the Google Maps JavaScript V3 API and need to set some marker icons in JavaScript. Is there a way to create some JavaScript vars on a GSP using the tag and then access the file in my app.js code? If that is not possible, how do I reference the compiled images in assets?
You could define a globally available object that holds the root path to your assets directory and use it to build up URLs to your assets.
Add this snippet to your layout's head section:
<g:javascript>
window.grailsSupport = {
assetsRoot : '${ raw(asset.assetPath(src: '')) }'
};
</g:javascript>
Then use it elsewhere like this:
<g:javascript>
var pathToMyImg = window.grailsSupport.assetsRoot + 'images/google_maps_marker.png';
</g:javascript>
Update 2015-08-06
While checking the release notes of the asset-pipeline plugin I noticed that non-digest versions of assets are no longer stored in the WAR-file. This would mean that my proposed solution breaks when the application is deployed as a WAR:
May 31, 2015
2.2.3 Release - No longer storing non digest versions in war file, cutting overhead in half. Also removed Commons i/o dependency. Faster byte stream.
This means that you have to explicitly define all your images beforehand and are no longer able to construct the path dynamically in your scripts:
<g:javascript>
window.grailsSupport = {
myImage1 : '${assetPath(src: 'myImage1.jpg')}',
myImage2 : '${assetPath(src: 'myImage2.jpg')}'
};
</g:javascript>
Update 2016-05-25
It is now possible to configure whether non-digest versions of the assets are included in the built WAR file by setting grails.assets.skipNonDigests (default is false):
It is normally not necessary to turn off 'skipNonDigests'. Tomcat will automatically still serve files by non digest name and will copy them out using storagePath via the manifest.properties alias map. This simply cuts storage in half. However, if you are attempting to do things like upload to a cdn outside of the cdn-asset-pipeline plugin and via the contents of 'target/assets'. This may still be useful.
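If you do want to cut WAR size, the flag goes into your Grails configuration (Config.groovy in Grails 2, application.groovy in Grails 3 — an assumption about your project layout):

```groovy
// Skip non-digest copies of assets in the built WAR (halves asset storage):
grails.assets.skipNonDigests = true
```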
Note that you still might use the proposed solution to define all required images beforehand to work around caching issues in the browser (as the digest-version of the asset has its content-hash in the filename).
Yes, you can, by putting ${assetPath(src: 'img.png')} in your GSP.
I don't know what the ideal solution is in your case, but a solution might be:
Use relative paths like '../assets/use-control.png' in your js code.
Add the img in your dom and reference it from your js code.
Add a data-imgpath="${asset.assetPath(src: 'use-control.png')}" attribute on an appropriate element in your dom and use this link.
As an alternative you can use HTML5's data-* attribute.
I've explained a bit further here:
load images from javascript
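Reading such a data-* attribute from JavaScript could look like this (a sketch; the element id, attribute name, and hashed file name are made up for illustration):

```javascript
// Read a server-rendered asset path from a data-* attribute.
// Assumes the GSP rendered something like:
//   <div id="map" data-imgpath="/assets/use-control-abc123.png"></div>
const readImgPath = (el) => el.dataset.imgpath;

// In a browser: readImgPath(document.querySelector('#map'))
// Stand-in element so the sketch runs outside a browser:
const fakeEl = { dataset: { imgpath: '/assets/use-control-abc123.png' } };
console.log(readImgPath(fakeEl)); // '/assets/use-control-abc123.png'
```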
I need a little help with Zoomify.
I want to display big images. Everything is working, but I don't want to include all the OpenLayers JavaScript files. Which JavaScript files do I need?
All the best,
Robert
OpenLayers has special tools to help us with our deployments. You can use them to build a personal OpenLayers file that has only and exactly what you need.
The link is here: http://docs.openlayers.org/library/deploying.html (read section 'Custom Build Profiles')
After downloading OpenLayers, you will find different config files in the build directory. You can make your own. For example, to use only Zoomify, this one should work:
[first]
[last]
[include]
OpenLayers/Map.js
OpenLayers/Layer/Zoomify.js
OpenLayers/Layer/Markers.js
OpenLayers/Layer/Boxes.js
OpenLayers/Marker/Box.js
OpenLayers/Control/Navigation.js
OpenLayers/Control/Zoom.js
OpenLayers/Protocol/HTTP.js
OpenLayers/Strategy/Fixed.js
OpenLayers/Strategy/BBOX.js
OpenLayers/StyleMap.js
OpenLayers/Rule.js
OpenLayers/Filter/Comparison.js
OpenLayers/Filter/Logical.js
[exclude]
Then build the custom OpenLayers.js with Python:
python build.py mycustomconfig.cfg