Mat icon custom svg icons - is sanitizing safe? - javascript

I wanted to add the possibility of using custom SVGs with mat-icon in an Angular 2+ application.
To do that, one must add the custom SVG to the Mat icon registry and mark the resource URL as trusted (bypassing sanitization), as in the example below:
constructor(private matIconRegistry: MatIconRegistry, private domSanitizer: DomSanitizer) {
  this.matIconRegistry.addSvgIcon('hello', this.domSanitizer.bypassSecurityTrustResourceUrl('./assets/images/hello.svg'));
}
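For reference, a complete version of that registration including the imports might look like the sketch below; the component, template, and asset path are just this example's, and import paths can differ across Angular Material versions.
import { Component } from '@angular/core';
import { MatIconRegistry } from '@angular/material/icon';
import { DomSanitizer } from '@angular/platform-browser';

// MatIconModule must also be imported in the NgModule for <mat-icon> to render.
@Component({
  selector: 'app-root',
  template: '<mat-icon svgIcon="hello"></mat-icon>',
})
export class AppComponent {
  constructor(matIconRegistry: MatIconRegistry, domSanitizer: DomSanitizer) {
    // Register the asset under the name used by svgIcon above.
    matIconRegistry.addSvgIcon(
      'hello',
      domSanitizer.bypassSecurityTrustResourceUrl('./assets/images/hello.svg')
    );
  }
}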
I'm wondering whether this approach is fine for commercial apps. Does bypassing the sanitization this way pose a cross-site scripting threat or any other security issue?
IMO there is no issue here, because no one can replace the assets unless they have access to the server, right?

Related

Implementing oEmbed discovery with a static link

The oEmbed spec requires a site to link to its oEmbed endpoint and encode the current URL in that link. This is quite annoying for static/CDN-served websites, which now have to encode and return the request URL in the HTML response.
I'm wondering whether it is known that major oEmbed consumers (e.g. Slack, Facebook, or oEmbed client libraries) will add this URL themselves when requesting, such that it may be reasonable in practice to break the spec and do this statically. Any examples of a static implementation would be insightful.
Dynamic:
Link: <http://flickr.com/services/oembed?url=http%3A%2F%2Fflickr.com%2Fphotos%2Fbees%2F2362225867%2F&format=json>; rel="alternate"; type="application/json+oembed"; title="Bacon Lollys oEmbed Profile"
Static:
Link: <http://flickr.com/services/oembed?format=json>; rel="alternate"; type="application/json+oembed"; title="Bacon Lollys oEmbed Profile"
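For illustration, this is roughly what a consumer would have to do for the static variant to work: discover the endpoint from the page and append the url parameter itself when the href doesn't already carry one. This is a hypothetical sketch, not how any particular consumer is documented to behave:
// Hypothetical oEmbed consumer: fetch a page, find the discovery link,
// and add the page URL itself when the endpoint omits it.
async function resolveOembed(pageUrl: string): Promise<unknown> {
  const html = await (await fetch(pageUrl)).text();
  // Naive discovery: look for the json+oembed <link> tag.
  const match = html.match(/<link[^>]+type="application\/json\+oembed"[^>]+href="([^"]+)"/i);
  if (!match) throw new Error('No oEmbed discovery link found');
  const endpoint = new URL(match[1].replace(/&amp;/g, '&'), pageUrl);
  // The static variant relies on the consumer adding this parameter itself.
  if (!endpoint.searchParams.has('url')) {
    endpoint.searchParams.set('url', pageUrl);
  }
  return (await fetch(endpoint.toString())).json();
}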
I implemented discovery using the HTML tag alternative (as opposed to the Link header).
Our frontend services are deployed by having an NGINX container serve up static build files. I wanted to add oEmbed discovery to all responses, so I used ndk_http_module and ngx_http_set_misc_module to create an escaped URI variable and inject it as a link tag at the end of the head element. After playing around with it for a few days on platforms like Slack and Teams, including the url query parameter alongside format hasn't caused any conflicts so far. The configuration looks like this:
server {
    listen 80;
    set_escape_uri $escaped_uri $http_host$request_uri;
    sub_filter '</head>' '<link rel=\"alternate\" type=\"application/json+oembed\" href=\"${OEMBED_URL}?format=json&url=https%3A%2F%2F$escaped_uri\" title=\"Bacon Lollys oEmbed Profile\"></head>';
    ...
}

How to convert editable PDF to non-editable PDF in PHP/NodeJS?

Problem
I would like to know whether there is any PHP/Node.js API available to convert an editable PDF to a non-editable PDF online. We have a client application with a scenario where the user downloads a PDF and should not be able to modify it through any software (e.g. Foxit Reader, Adobe).
Basically, we are using PDF-LIB right now and it seems it has no API for setting access privileges on a non-editable PDF. I have searched a lot but have not found any API for that. I am not using pdf-flatten because we want everything to remain selectable. I appreciate your help.
List of libraries tried that failed to achieve the result:
bpampuch/pdfmake - issue: can't load an existing PDF
PDF-LIB - issue: doesn't support permissions
nrhirani/node-qpdf - issue: file restrictions not working properly
I think flattening the PDF might help you make it un-editable, depending on what your target is:
Just the form fields: then you might use the form-flattening support from the PDF-LIB GitHub repo (see the sketch after this list).
The entire PDF: then see if the pdf-flatten package for Node.js helps.
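If form flattening is what's needed, a minimal sketch with pdf-lib (assuming a version that includes form.flatten()) could look like this; the file paths are placeholders:
// Flatten form fields so they can no longer be edited, while keeping text selectable.
import { PDFDocument } from 'pdf-lib';
import { readFile, writeFile } from 'fs/promises';

async function flattenForm(inputPath: string, outputPath: string): Promise<void> {
  const pdfDoc = await PDFDocument.load(await readFile(inputPath));
  pdfDoc.getForm().flatten();              // burn current field values into the page content
  await writeFile(outputPath, await pdfDoc.save());
}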
After a lot of research and trying multiple libraries in PHP/Node, I did not find any library mature enough to proceed with, so I decided to build a separate API in a different technology (C#/Java).
Solution
We post the PDF URL to the API, the API downloads that file and applies the various permissions according to the dataset.
Library
The library we chose is Aspose.
// Each of these flags can be true or false.
// Printing is allowed.
config.IsPrint = true;
// Modifying the document is not allowed.
config.IsModify = false;
// Annotation is allowed.
config.IsAnnot = true;
// Form filling is allowed.
config.IsFillForm = true;
// Content extraction is allowed.
config.IsExtract = true;
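For illustration, the client side of such a flow might look roughly like this; the endpoint URL and payload shape are hypothetical and simply mirror the permission flags above:
// Hypothetical client call: send the PDF URL and the desired permission set to the
// C#/Java service and receive the permission-restricted PDF back as bytes.
async function protectPdf(pdfUrl: string): Promise<ArrayBuffer> {
  const response = await fetch('https://pdf-service.example.com/api/protect', { // made-up endpoint
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      url: pdfUrl,
      permissions: { print: true, modify: false, annotate: true, fillForm: true, extract: true },
    }),
  });
  return response.arrayBuffer();
}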

AngularJS - SEO - S3 Static Pages

My application uses AngularJS for the front end and .NET for the back end.
In my application I have a list view. On clicking each list item, it fetches a pre-rendered HTML page from S3.
I am using Angular UI-Router state configuration.
app.js
...
.state('staticpage', {
    url: "/staticpage",
    templateUrl: function () {
        return 'http://xxxxxxx.cloudfront.net/staticpage/staticpage1.html';
    },
    controller: 'StaticPageCtrl',
    title: 'Static Page'
})
StaticPage1.html
<div>
    Hello static world 1!
</div>
How do I handle SEO here?
Do I really need to create HTML snapshots using PhantomJS or similar?
Yes, PhantomJS would do the trick, or you can use prerender.io; with that service you can use their open-source renderer and run your own server.
Another way is to use the _escaped_fragment_ meta tag.
I hope this helps; if you have any questions, add comments and I will update my answer.
Do you know that Google renders HTML pages and executes the JavaScript code on the page, and does not need any pre-rendering anymore?
https://webmasters.googleblog.com/2014/05/understanding-web-pages-better.html
And take a look at these:
http://searchengineland.com/tested-googlebot-crawls-javascript-heres-learned-220157
http://wijmo.com/blog/how-to-improve-seo-in-angularjs-applications/
My project's front end is also built on top of Angular, and I decided to solve the SEO issue like this:
I created an endpoint for all search engines (SE) to which all requests carrying the _escaped_fragment_ parameter go;
I parse the HTTP request for the _escaped_fragment_ GET parameter;
I make a cURL request with the parsed category and article parameters and get the article content;
Then I render the simplest (and SEO-friendly) template for the SE with the article content, or throw a 404 Not Found exception if the article does not exist;
In total: I do not need to pre-render any HTML pages or use prerender.io, my users get a nice user interface, and search engines index my pages very well.
P.S. Do not forget to generate sitemap.xml and include in it all the URLs (with _escaped_fragment_) which you want to be indexed.
P.P.S. Unfortunately my project's back end is built on top of PHP, so I cannot show you a suitable example here, but a rough Node sketch of the same idea is shown below. If you want more explanation, do not hesitate to ask.
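As an illustration only, here is roughly the same flow in Node/TypeScript with Express; the fetchArticle and renderSimpleTemplate helpers are hypothetical stand-ins, not the answerer's PHP implementation:
// Serve a plain, crawler-friendly page when the request carries _escaped_fragment_.
import express from 'express';

// Hypothetical helpers standing in for the real data access and template.
async function fetchArticle(route: string): Promise<{ title: string; body: string } | null> {
  return { title: 'Example', body: 'Article body for ' + route }; // placeholder lookup
}
function renderSimpleTemplate(a: { title: string; body: string }): string {
  return `<html><head><title>${a.title}</title></head><body>${a.body}</body></html>`;
}

const app = express();

app.get('*', async (req, res, next) => {
  const fragment = req.query._escaped_fragment_ as string | undefined;
  if (fragment === undefined) return next(); // normal users get the Angular app

  const article = await fetchArticle(fragment); // content for the crawled route
  if (!article) return res.status(404).send('Not Found');

  res.send(renderSimpleTemplate(article));      // minimal SEO-friendly HTML
});

app.listen(3000);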
Firstly, you cannot assume anything.
Google does say that their bots can understand JavaScript applications very well, but that is not true for all scenarios.
Start by using the "Fetch as Google" feature in Webmaster Tools on your link and see if the page is rendered properly. If yes, then you need not read further.
If you see just your skeleton HTML, it is because Googlebot assumes the page load is complete before it actually completes. To fix this you need an environment where you can recognize that a request is from a bot and return it a pre-rendered page.
To create such an environment, you need to make some changes to your code.
Follow the instructions in Setting up SEO with AngularJS and PhantomJS,
or alternatively just write code in any server-side language, such as PHP, to generate pre-rendered HTML pages of your application
(PhantomJS is not mandatory).
Create a redirect rule in your server config which detects the bot and redirects it to the pre-rendered plain HTML files (the only thing you need to make sure of is that the content of the page you return matches the actual page content, otherwise bots might not consider the content authentic); a middleware sketch of the same idea is shown below.
Note that you also need to consider how you will add entries to sitemap.xml dynamically when you add pages to your application in the future.
In case you are not looking for such overhead and you are lacking time, you can certainly use a managed service like Prerender.
Eventually bots will mature, they will understand your application, and you will say goodbye to your SEO proxy infrastructure. This is just for the time being.
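For illustration only (the answer suggests a server-config rule; this is the same idea expressed as Node/Express middleware, with a made-up snapshots directory):
// Detect common crawler user agents and serve a pre-rendered snapshot if one exists.
import express from 'express';
import path from 'path';

const app = express();
const BOT_UA = /googlebot|bingbot|yandex|baiduspider|facebookexternalhit|twitterbot|slackbot/i;

app.use((req, res, next) => {
  if (!BOT_UA.test(req.headers['user-agent'] ?? '')) return next();
  // Map the requested route to a pre-rendered snapshot file (hypothetical directory).
  const snapshot = path.join(__dirname, 'snapshots',
    req.path === '/' ? 'index.html' : `${req.path}.html`);
  res.sendFile(snapshot, err => {
    if (err) next(); // no snapshot for this route: fall back to the normal app
  });
});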
At this point in time, the question really becomes somewhat subjective, at least with Google; it really depends on your specific site, like how quickly your pages render, how much content renders after the DOM loads, etc. Certainly (as birju-shaw mentions) if Google can't read your page at all, you know you need to do something else.
Google has officially deprecated the _escaped_fragment_ approach as of October 14, 2015, but that doesn't mean you might not want to still pre-render.
YMMV on trusting Google (and other crawlers) for reasons stated here, so the only definitive way to find out which is best in your scenario would be to test it out. There could be other reasons you may want to pre-render, but since you mentioned SEO specifically, I'll leave it at that.
If you have a server-side templating system (PHP, Python, etc.), you can implement a solution like prerender.io.
If you only have AngularJS files hosted on a static server (e.g. Amazon S3), have a look at the answer in the following post: AngularJS SEO for static webpages (S3 CDN).
Yes, you need to pre-render the page for the bots; prerender.io can be used, and your page must have the meta tag:
<meta name="fragment" content="!">

Angular translate default language with Static Files Loader

Hello, I am developing an app in Angular and I use the angular-translate plugin with the static files loader. Everything works fine, but I have a question: is there a way to set a default language? I know you can set a fallbackLanguage, but it doesn't work for me, or maybe it doesn't work with the static files loader? What I mean is that in my app the language is loaded from the user's settings in our system (our database holds the user's culture), so a user can have some unusual locale set for which the app has no translations; in that case I would like to show English by default, and the same should happen when a translation is missing for the user's language. Here is the way I set up the whole $translateProvider:
$translateProvider.useStaticFilesLoader({
    prefix: '/languages/',
    suffix: '.json'
});
$translateProvider.fallbackLanguage('en_US');
$translateProvider.preferredLanguage('pl_PL');
And the way I think of it is: if there is no pl_PL.json file, en_US.json should be loaded, but instead the key isn't translated in the template, it is just printed. Maybe I need to do something more?
$translateProvider.preferredLanguage('en_US');
You don't know that the user wants pl_PL during app initialisation; you have to wait until you've checked your database and then call $translate.uses('pl_PL'); to override the preferred language.
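A minimal sketch of that flow (the settings endpoint and response shape are hypothetical; older angular-translate versions expose $translate.uses instead of $translate.use):
app.run(['$http', '$translate', function ($http, $translate) {
    // Start with the preferred 'en_US'; switch once the user's culture is known.
    $http.get('/api/user/settings').then(function (response) {
        // use() in current angular-translate; older versions named this uses()
        $translate.use(response.data.culture || 'en_US');
    });
}]);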

How to composite and compress Javascripts (.net ajax) to get best performance

I use the jQuery library and some customised JavaScript files in my web app. I combined and compressed them into one JavaScript file and placed it at the end of the body as a static script, which normally gives the best performance, agreed?
<script src="../js/Default.js" type="text/javascript"></script>
Moreover, there are some .NET AJAX controls, e.g. <ajax:calendarExtender>, applied on some pages. Some articles, e.g. http://am-blog.no-ip.org/BlogEngine/post/2010/04/12/Increase-AJAX-Performance-by-combining-Scripts-using-Scriptmanager-Composite-Scripts.aspx, suggest using <asp:CompositeScript> to combine the default ASP.NET scripts required by those controls (e.g. <ajax:calendarExtender>).
My web app has only one ScriptManager, located in the MasterPage at the beginning of the form. What is the best way to combine and compress those .NET scripts so they are handled much like my static script? Since those AJAX controls are not applied on every page, should I still combine all the scripts in the MasterPage? (I guess a ScriptManagerProxy reference in each child page won't help?)
Regards,
I think that you do not need to compress these files any further, other than with gzip.
Of course, you need to be sure that the JS files served through WebResource.axd are not the debug versions.
If you try to change the location of these files, e.g. place them at the end, you may get JavaScript errors, because these functions probably need to be available to the ASP.NET controls.
I only suggest adding some caching on the client side. You can do that in Application_BeginRequest, something like:
protected void Application_BeginRequest(Object sender, EventArgs e)
{
    // get the path of the requested file
    string cTheFile = HttpContext.Current.Request.Path;

    // check whether this is a file we want to add extra caching for
    if (cTheFile.EndsWith("WebResource.axd", StringComparison.InvariantCultureIgnoreCase))
    {
        // add some caching (or an ETag if you prefer)
        HttpResponse response = HttpContext.Current.Response;
        response.Cache.SetExpires(DateTime.UtcNow.AddHours(4));
        response.Cache.SetMaxAge(new TimeSpan(4, 0, 0));
        response.Cache.SetCacheability(HttpCacheability.Public);
    }
}
Obfuscation will shrink the size of a script (sometimes dramatically). IMO that is all that obfuscation is good for.
