I am working on a project built with sails.js. I have taken over this project and I'm not very familiar with sails.js yet so I'm still getting my bearings.
From what I can see, the Sails package includes Flashify, and the previous developer is using it in several places to display notifications. The client wants a few of those notifications to be a little fancier than the default popup, and I'm wondering if it's possible to restyle the Flashify popup and add HTML markup to it.
Here's one instance where a message is initiated:
if (!req.isAuthenticated()) {
  req.flash("error", "Please log in");
  return res.redirect('back');
}
I have tried adding HTML tags to the message string, escaping them, and taking them out of the string quotes, but none of that worked. Is it possible at all?
A couple of things:
Sails uses connect-flash by default. Your project might actually be using Flashify, but if you don't actually see require('flashify') in the code, it probably isn't.
More importantly, both connect-flash and flashify are back-end packages that just save messages into the session so that they survive between requests. They have nothing to do with the actual display of the message, which is handled in your template (most likely an .ejs file in the views folder). If there's a popup, then that's being created and launched in that template, so that's where you'd want to style it. Take a look at the Sails views documentation for more about view rendering, and also this question about rendering flash messages in a view.
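For example, assuming the controller (or a policy) passes the flash messages to the view as a messages local (the exact name depends on how the previous developer wired it up), the .ejs template can wrap them in whatever markup and CSS you like. A rough sketch, with the names assumed:

<% if (typeof messages !== 'undefined' && messages.error && messages.error.length) { %>
  <div class="alert alert-danger fancy-flash">
    <strong>Heads up:</strong> <%= messages.error[0] %>
  </div>
<% } %>

Since the flash only carries the text, any popup styling or extra HTML lives entirely in the view (or in the client-side script that turns it into a popup).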
How can I provide the user of a web-app with a download-link to programmatically created data in AngularDart?
I thought this would be an easy task, since the download of data could be handled via data: URLs. But it turns out that AngularDart doesn't let me use data: URLs, since they are considered insecure. In a pure JavaScript environment I would use FileSaver.js, but that is also not possible with AngularDart (at least I didn't find a way to use it there).
What I really want to do: I create data in the app with code. At the end I have a JSON structure that needs to be downloaded to the user's computer. The user should be presented with a file-select dialog where they can enter a filename, and the data should then be saved there. And this should be initiated by a click on a button.
So far I haven't found a working way to make this happen in AngularDart. I tried BrowserClient, a-tags with a download attribute, and forms with a data URL, but nothing works.
If anybody could give me a hint on how to make this work, I would be very happy. A hint on how to use JavaScript libraries (like FileSaver.js) in AngularDart would also be welcome.
I don't use Flutter, and I need this to work in the browser, so File from dart:io is not a solution for me (it's one of the first things you find when searching for a solution). Saving the file to the server and then downloading it to the client is also not an option.
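For reference, this is roughly the pattern I would use in plain browser JavaScript (essentially what FileSaver.js does under the hood), and it's this behaviour I'm trying to reproduce from AngularDart; the names are just illustrative:

function downloadJson(data, filename) {
  // Serialize the data and wrap it in a Blob with a JSON MIME type
  var blob = new Blob([JSON.stringify(data, null, 2)], { type: 'application/json' });
  var url = URL.createObjectURL(blob);

  // A temporary anchor with a download attribute triggers the browser's save dialog
  var a = document.createElement('a');
  a.href = url;
  a.download = filename;
  document.body.appendChild(a);
  a.click();
  document.body.removeChild(a);
  URL.revokeObjectURL(url);
}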
What I want
I want to be able to copy/paste the entire content of a chat to memory
so I can extract included YouTube urls from it.
What I know
As you may know, the group chat(s) run on a separate URL and are loaded page by page. Normally you go to the previous page either by simply scrolling upwards, or by clicking a show previous link (this works differently on different devices, I think).
Things I tried
Sadly I can't find the urls to either anymore, but ...
Add a script to Chrome console
The point was to add a script that went looking for the show previous link and clicked it.
Add a start=0 parameter to the url
This assumes you can find out the actual url, either manually or through something like Fiddler.
The idea was that you add something like ?start=0 to the url. This would cause the paging to start from the very first record and load all.
Neither solution worked.
Possibly this is because Facebook has made these options obsolete. My impression is that Facebook initially provided more dev options than it does now.
My question
What can I do to fully load chat content?
Not really sure what this has to do with C#, but I'll give a C# solution anyway. My approach would be to use something like HtmlAgilityPack to get the InnerHtml of the page once it's loaded. This will obviously require some kind of authentication, so I suggest using something like a WebClient and sending auth credentials along with your requests, or just writing a method to log in and then using the same WebClient to access the chats via their URLs. Use DownloadString() to get the contents of the page, then use HtmlAgilityPack's methods to get the InnerHtml of whatever the chat box is called/identified as.
Right now this is the nearest thing I can find:
https://www.facebook.com/help/community/question/?id=10200611181580779
There is a way to see your complete chat history on Facebook easily. By this method you can also see photos or videos you've shared on Facebook, your Wall posts, etc. -- 'A copy of what you've shared on Facebook'. Follow these steps:
Go to 'Account Settings'
Click on 'Download a copy of your Facebook data' at the bottom of the General section
Then click 'Start My Archive' -- it may take a little while to gather your photos, wall posts, messages, and other information.
(Usually 20 to 60 minutes)
Once the archive is generated, download it.
Extract it and open 'index.html' from the downloaded folder
Now you can see 'Messages' at the bottom of the page; click it.
Done!
I got a response in my mail much faster than 20 minutes.
You will get a mail with a link to a zip file containing your archive.
In the html folder you find: messages.htm
From there I can write a script that looks for YouTube URLs in that file.
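For completeness, here is a rough Node.js sketch of the kind of script I have in mind (the path is the one from the extracted archive described above, and the regex only covers the common youtube.com/watch and youtu.be URL forms):

var fs = require('fs');

// Read the exported chat history from the extracted archive
var html = fs.readFileSync('html/messages.htm', 'utf8');

// Collect YouTube links (watch?v=... and youtu.be/... forms), then de-duplicate them
var matches = html.match(/https?:\/\/(?:www\.)?(?:youtube\.com\/watch\?v=|youtu\.be\/)[\w-]+/g) || [];
console.log(Array.from(new Set(matches)).join('\n'));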
My application uses AngularJS for frontend and .NET for the backend.
In my application I have a list view. On clicking each list item, it fetches a pre-rendered HTML page from S3.
I am using Angular UI-Router states.
app.js
...
state('staticpage', {
  url: "/staticpage",
  templateUrl: function () {
    return 'http://xxxxxxx.cloudfront.net/staticpage/staticpage1.html';
  },
  controller: 'StaticPageCtrl',
  title: 'Static Page'
})
StaticPage1.html
<div>
  Hello static world 1!
</div>
How do I do SEO here?
Do I really need to create HTML snapshots using PhantomJS or something similar?
Yes, PhantomJS would do the trick, or you can use prerender.io; with that service you can just use their open-source renderer and run your own server.
Another way is to use the _escaped_fragment_ meta tag.
I hope this helps; if you have any questions, add comments and I will update my answer.
Do you know that Google renders HTML pages and executes the JavaScript code in the page, and does not need any pre-rendering anymore?
https://webmasters.googleblog.com/2014/05/understanding-web-pages-better.html
And take a look at these:
http://searchengineland.com/tested-googlebot-crawls-javascript-heres-learned-220157
http://wijmo.com/blog/how-to-improve-seo-in-angularjs-applications/
My project's front-end is also built on top of Angular, and I decided to solve the SEO issue like this:
I created an endpoint for all search engines (SE) to which all requests with the _escaped_fragment_ parameter go;
I parse the HTTP request for the _escaped_fragment_ GET parameter;
I make a cURL request with the parsed category and article parameters and get the article content;
Then I render the simplest (and SEO-friendly) template for the SE with the article content, or throw a 404 Not Found exception if the article does not exist.
In total: I do not need to prerender any HTML pages or use prerender.io, my users get a nice user interface, and search engines index my pages very well.
P.S. Do not forget to generate sitemap.xml and include in it all the URLs (with _escaped_fragment_) which you want to be indexed.
P.P.S. Unfortunately my project's back-end is built on top of PHP, so I can not show you a directly suitable example. But if you want more explanation, do not hesitate to ask.
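To make the idea concrete, here is a rough Node/Express sketch of such an endpoint (my real back-end is PHP, and the API URL and field names below are made up for illustration):

var express = require('express');
var request = require('request'); // any HTTP client will do; used here to fetch the article content
var app = express();

app.get('/', function (req, res, next) {
  var fragment = req.query._escaped_fragment_;
  if (fragment === undefined) {
    return next(); // regular visitors get the normal Angular application
  }

  // fragment looks like "/category/article-slug"; fetch the matching content
  request('http://api.example.com/articles' + fragment, function (err, response, body) {
    if (err || response.statusCode === 404) {
      return res.status(404).send('Not Found');
    }
    var article = JSON.parse(body);

    // Render the simplest possible, crawler-friendly page
    res.send(
      '<!DOCTYPE html><html><head><title>' + article.title + '</title></head>' +
      '<body><h1>' + article.title + '</h1>' + article.content + '</body></html>'
    );
  });
});

app.listen(3000);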
Firstly, you can not assume anything.
Google does say that their bots can understand JavaScript applications very well, but that is not true for all scenarios.
Start by using the 'Fetch as Google' feature in Webmaster Tools on your link and see if the page is rendered properly. If yes, then you need not read further.
In case you see just your skeleton HTML, this is because the Google bot considers the page load complete before it actually is. To fix this you need an environment where you can recognize that a request is from a bot and return it a prerendered page.
To create such an environment, you need to make some changes in your code.
Follow the instructions in Setting up SEO with Angularjs and Phantomjs,
or alternatively just write code in any server-side language, like PHP, to generate prerendered HTML pages of your application.
(PhantomJS is not mandatory.)
Create a redirect rule in your server config which detects the bot and redirects it to the prerendered plain HTML files. (The only thing you need to make sure of is that the content of the page you return matches the actual page content, otherwise bots might not consider the content authentic.)
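To illustrate that step, here is a hedged sketch of the detection written as Express middleware (a rewrite rule in nginx or Apache achieves the same thing; the snapshot paths and the User-Agent list are assumptions):

var fs = require('fs');
var path = require('path');

// Very rough crawler detection by User-Agent
var BOT_UA = /googlebot|bingbot|yandex|baiduspider|facebookexternalhit|twitterbot|linkedinbot/i;

function serveSnapshotsToBots(req, res, next) {
  if (!BOT_UA.test(req.headers['user-agent'] || '')) {
    return next(); // normal browsers get the regular Angular app
  }

  // Map the requested route to a prerendered snapshot, e.g. "/staticpage" -> "snapshots/staticpage.html"
  var name = req.path === '/' ? 'index' : req.path.slice(1).replace(/\//g, '_');
  var snapshot = path.join(__dirname, 'snapshots', name + '.html');

  fs.readFile(snapshot, 'utf8', function (err, html) {
    if (err) return next(); // no snapshot available: fall back to the normal app
    res.send(html);
  });
}

module.exports = serveSnapshotsToBots;

The important part, as noted above, is that each snapshot's content matches what the client-rendered page would show.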
It is to be noted that you also need to consider how you will add entries to sitemap.xml dynamically when you add pages to your application in the future.
In case you are not looking for such overhead and you are short on time, you can surely go with a managed service like Prerender.
Eventually bots will mature, they will understand your application, and you will say goodbye to your SEO proxy infrastructure. This is just for the time being.
At this point in time, the question really becomes somewhat subjective, at least with Google -- it really depends on your specific site, like how quickly your pages render, how much content renders after the DOM loads, etc. Certainly (as #birju-shaw mentions) if Google can't read your page at all, you know you need to do something else.
Google has officially deprecated the _escaped_fragment_ approach as of October 14, 2015, but that doesn't mean you might not want to still pre-render.
YMMV on trusting Google (and other crawlers) for reasons stated here, so the only definitive way to find out which is best in your scenario would be to test it out. There could be other reasons you may want to pre-render, but since you mentioned SEO specifically, I'll leave it at that.
If you have a server-side templating system (php, python, etc.) you can implement a solution like prerender.io
If you only have AngularJS files hosted on a static server (e.g. Amazon S3) => have a look at the answer in the following post: AngularJS SEO for static webpages (S3 CDN)
Yes, you need to prerender the page for the bots. prerender.io can be used, and your page must have the meta tag:
<meta name="fragment" content="!">
I'm trying to use the ms-seo package for Meteor, but I don't understand how it works.
It's supposed to add meta tags to your page for crawlers and social media (Google, Facebook, Twitter, etc...).
To see it working, according to the docs all I should have to do is
meteor add manuelschoebel:ms-seo
and then add some defaults
Meteor.startup(function () {
  if (Meteor.isClient) {
    return SEO.config({
      title: 'Manuel Schoebel - MVP Development',
      meta: {
        'description': 'Manuel Schoebel develops Minimal Viable Producs (MVP) for Startups',
      },
      og: {
        'image': 'http://manuel-schoebel.com/images/authors/manuel-schoebel.jpg',
      }
    });
  }
});
which I did but that code only executes on the client (browser). How is that helpful to search engines?
So I tested it:
curl http://localhost:3000
The result has no meta tags.
If, in the browser, I go to http://localhost:3000 and inspect the elements in the debugger, I see the tags, but if I check the page source I don't.
I don't understand how client-side-added tags have anything to do with SEO. I thought Google, Facebook, and Twitter, when scanning your page for meta tags, basically just do a single request, effectively the same as curl http://localhost:3000.
So how does this package actually do anything useful? I feel stupid; with 27k users it must work, but I don't understand how. Does it require the spiderable package to generate static pages?
You are correct. You need to use something like the spiderable package or prerender.io to get this to work. This package will add tags, but like any Meteor page, it's rendered on the client.
Try this with curl to see the result when using spiderable:
curl http://localhost:3000/?_escaped_fragment_=
Google will now render the JS itself, so for Google to index your page correctly you don't need to use spiderable/prerender.io, but for other search engines I believe you still do.
An alternate answer:
Don't use spiderable, as it uses PhantomJS which is rather resource intensive when bots crawl your site.
Many Meteor devs are using Prerender these days, check it out.
If you still have some problems with social share buttons or the package, try reading this: https://webdevelopment7636.wordpress.com/2017/02/15/social-share-with-meteor/ . It was the only way I got mine to work. You don't have to worry about PhantomJS or spiderable to make it work.
It is a complete tutorial using meteorhacks:ssr and meteorhacks:picker. You have to create a crawler filter on the server side, plus a route that is called by it when it is activated. The route dynamically sends the template and the data to an HTML file in the "private" folder, and renders that HTML to the crawler. The template in the private folder is the one that gets the meta tags and the title tag.
This is the file that will be in the private folder.
I can't put the other links with the code here, but if you need any more help, go to the first link and see if the tutorial helps.
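For reference, here is a rough sketch of the server-side pieces that tutorial describes, using meteorhacks:picker and meteorhacks:ssr (the template file name, collection, and field names are my own placeholders, not taken from the article):

// server/crawler.js
// Only requests that look like they come from a crawler go through this filtered router
var crawlers = Picker.filter(function (req, res) {
  return /facebookexternalhit|twitterbot|googlebot|linkedinbot/i.test(req.headers['user-agent'] || '');
});

// Compile the HTML template that lives in the app's private/ folder
SSR.compileTemplate('sharePage', Assets.getText('share-page.html'));

crawlers.route('/posts/:id', function (params, req, res) {
  var post = Posts.findOne(params.id); // placeholder collection
  var html = SSR.render('sharePage', {
    title: post && post.title,
    description: post && post.summary,
    image: post && post.imageUrl
  });

  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end(html);
});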
I'm building a user dashboard in Django for a Python-based web service. This web service creates emails, and the HTML strings of these emails are saved in a file (and could theoretically also be saved in a db table). As part of the dashboard functionality I want to be able to preview an email, essentially rendering the HTML string of the email within the Django HTML view. Is it possible to do this? Will I need to work with a JavaScript library to achieve this? Which one? Any help would be very appreciated!
EDIT
To clarify, the HTML string, when put into a text editor, is about 360+ lines. It has its own styling and its own <head>, <body>, etc. tags. I want to display it like a webpage within a webpage, if that makes sense, so that it looks like a proper preview. I just have no idea how to do this; my experience hasn't really been with JS or front-end dev.
Make the email HTML available like any other page and display it inside an iframe.
Be warned: email clients don't use the same rendering engines as browsers. It's hell. (We use this - https://litmus.com)
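For example, a hedged sketch (the endpoint URL and element id are made up): have a Django view return the raw email HTML at its own URL, then either point the iframe's src straight at it, or fetch it and set the iframe's srcdoc, which keeps the email's own <head> and styling isolated from the dashboard:

// Assumed: the dashboard page contains <iframe id="emailPreviewFrame"></iframe>
// and a Django view returns the raw email HTML at the URL below.
fetch('/dashboard/emails/42/raw/')
  .then(function (response) { return response.text(); })
  .then(function (html) {
    // srcdoc renders the string as a complete document, so the email's own
    // styles and layout don't leak into (or inherit from) the dashboard page
    document.getElementById('emailPreviewFrame').srcdoc = html;
  });

Simply pointing the iframe's src at that URL works too and avoids the JavaScript entirely.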
Nothing more than:
document.getElementById('IDofDisplayContainer').innerHTML = 'your mail HTML string';