static html page in Django - javascript

I'm building a fairly simple experiment to run as a website.
It's all JavaScript, so all I need the server for is to store the experiment results. I've set up a Django server on WebFaction, but my problem is making Django redirect clients to the first static HTML page (from where the JS takes over).
I gather my solution should be something like this (using Django 1.7.1):
urls.py
from django.views.generic import RedirectView

urlpatterns = patterns('',
    url(r'^$', RedirectView.as_view(url='fudge.html')),
)
It works when DEBUG = True in settings.py, but when it's set to False I get:
Not Found
The requested URL /fudge.html was not found on this server.
I've set STATIC_ROOT = '/AmirStatic/' in settings.py and ran python manage.py collectstatic following instructions I've found on the net, but this didn't do the trick.
Either I don't understand the path settings in Django, or they are not configured properly, or I don't understand how static web pages are served. I would have thought it would be much simpler than this. I've spent a good 3 days losing hair over it.
I would be very thankful to anyone with good advice on this,
thanks a lot in advance.
Oh, and I'm a newbie to Python, Django, and web development in general, if that hasn't come across :)

When you are developing, Django can serve your static files, but in production Django doesn't serve your static files (and should not; that should be handled by your web server or reverse proxy, such as nginx or Apache).
Please take a look at the Deploying static files section of the Django documentation.
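In the meantime, a minimal sketch of the redirect itself (assuming STATIC_URL = '/static/' and that fudge.html ends up under STATIC_ROOT after collectstatic; adjust the paths to your setup):
from django.conf.urls import patterns, url
from django.views.generic import RedirectView

urlpatterns = patterns('',
    # Redirect the site root to the static entry page; in production the
    # file itself must be served by Apache/nginx, not Django
    url(r'^$', RedirectView.as_view(url='/static/fudge.html', permanent=False)),
)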

As @Mihai Zamfir suggested above, try actually wiring the URL to a view. Then within the view you define the logic for rendering the template contained in fudge.html.
Example:
url(r'^fudge/$', views.fudge_view, name='fudge')
This is assuming you also have in your urls.py:
from your_app_name import views
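A minimal sketch of what that view might look like (assuming fudge.html lives in one of your app's templates directories):
from django.shortcuts import render

def fudge_view(request):
    # Render the entry page through Django's template system
    return render(request, 'fudge.html')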

Related

AngularJS - SEO - S3 Static Pages

My application uses AngularJS for the frontend and .NET for the backend.
In my application I have a list view. On clicking each list item, it fetches a pre-rendered HTML page from S3.
I am using Angular UI-Router states.
app.js
...
state('staticpage', {
    url: "/staticpage",
    templateUrl: function () {
        return 'http://xxxxxxx.cloudfront.net/staticpage/staticpage1.html';
    },
    controller: 'StaticPageCtrl',
    title: 'Static Page'
})
StaticPage1.html
<div>
    Hello static world 1!
</div>
How do I handle SEO here?
Do I really need to create HTML snapshots using PhantomJS or something similar?
Yes, PhantomJS would do the trick, or you can use prerender.io; with that service you can just use their open-source renderer and run your own server.
Another way is to use the _escaped_fragment_ meta tag.
I hope this helps; if you have any questions, add comments and I will update my answer.
Did you know that Google renders HTML pages and executes the JavaScript code in the page, and does not need any pre-rendering anymore?
https://webmasters.googleblog.com/2014/05/understanding-web-pages-better.html
And take a look at these:
http://searchengineland.com/tested-googlebot-crawls-javascript-heres-learned-220157
http://wijmo.com/blog/how-to-improve-seo-in-angularjs-applications/
My project's front end is also built on top of Angular, and I decided to solve the SEO issue like this (a rough sketch of the idea follows the list):
1. I've created an endpoint for all search engines (SE) where all the requests with the _escaped_fragment_ parameter go;
2. I parse the HTTP request for the _escaped_fragment_ GET parameter;
3. I make a cURL request with the parsed category and article parameters and get the article content;
4. Then I render the simplest (and SEO-friendly) template for the SE with the article content, or throw a 404 Not Found exception if the article does not exist.
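A rough Django-style sketch of those steps (my real back end is PHP, so this is only an illustration; get_article is a hypothetical helper standing in for the cURL fetch):
from django.http import Http404
from django.shortcuts import render

def get_article(path):
    # Hypothetical: fetch the article content for the given path,
    # e.g. via an internal HTTP call; return None if it does not exist
    ...

def search_engine_view(request):
    # Crawlers arrive as /?_escaped_fragment_=/category/article-slug
    fragment = request.GET.get('_escaped_fragment_')
    if fragment is None:
        raise Http404
    article = get_article(fragment)
    if article is None:
        raise Http404
    # The simplest possible SEO-friendly template: just the article content
    return render(request, 'seo/article.html', {'article': article})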
In total: I do not need to pre-render any HTML pages or use prerender.io, I have a nice user interface for my users, and the search engines index my pages very well.
P.S. Do not forget to generate sitemap.xml and include in it all the URLs (with _escaped_fragment_) which you want to be indexed.
P.P.S. Unfortunately my project's back end is built on top of PHP, so I cannot show you my actual code. But if you want more explanation, do not hesitate to ask.
Firstly, you cannot assume anything.
Google does say that its bots can understand JavaScript applications very well, but that is not true for all scenarios.
Start by using the Fetch as Google feature in Webmaster Tools on your link and see if the page is rendered properly. If yes, then you need not read further.
If you see just your skeleton HTML, it is because the Google bot may assume the page load is complete before it actually is. To fix this you need an environment where you can recognize that a request is from a bot and return it a pre-rendered page.
To create such an environment, you need to make some changes in code.
Follow the instructions in Setting up SEO with Angularjs and Phantomjs,
or alternatively just write code in any server-side language, like PHP, to generate pre-rendered HTML pages of your application
(PhantomJS is not mandatory).
Create a redirect rule in your server config which detects the bot and redirects it to the pre-rendered plain HTML files. (The only thing you need to make sure of is that the content of the page you return matches the actual page content, or bots might not consider the content authentic.)
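As a hedged sketch of such a rule at the application level (the bot list, snapshot directory, and Django-style middleware are all assumptions; real setups usually do this in the nginx/Apache config instead):
import os
from django.http import HttpResponse

BOT_AGENTS = ('googlebot', 'bingbot', 'yandexbot', 'baiduspider')
SNAPSHOT_DIR = '/var/www/prerendered'  # assumed snapshot location

class PrerenderMiddleware(object):
    def process_request(self, request):
        ua = request.META.get('HTTP_USER_AGENT', '').lower()
        if any(bot in ua for bot in BOT_AGENTS):
            # Map the requested path to a pre-rendered snapshot on disk
            name = request.path.strip('/') or 'index'
            snapshot = os.path.join(SNAPSHOT_DIR, name + '.html')
            if os.path.exists(snapshot):
                with open(snapshot, 'rb') as f:
                    return HttpResponse(f.read())
        return None  # not a bot, or no snapshot: serve the Angular app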
Note that you also need to consider how you will add entries to sitemap.xml dynamically when you add pages to your application in the future.
If you are not looking for such overhead and you are lacking time, you can simply use a managed service like Prerender.
Eventually bots will mature and they will understand your application, and you will say goodbye to your SEO proxy infrastructure. This is just for the time being.
At this point in time, the question really becomes somewhat subjective, at least with Google: it really depends on your specific site, like how quickly your pages render, how much content renders after the DOM loads, etc. Certainly (as @birju-shaw mentions) if Google can't read your page at all, you know you need to do something else.
Google has officially deprecated the _escaped_fragment_ approach as of October 14, 2015, but that doesn't mean you might not want to still pre-render.
YMMV on trusting Google (and other crawlers) for reasons stated here, so the only definitive way to find out which is best in your scenario would be to test it out. There could be other reasons you may want to pre-render, but since you mentioned SEO specifically, I'll leave it at that.
If you have a server-side templating system (PHP, Python, etc.), you can implement a solution like prerender.io.
If you only have AngularJS-only files hosted on a static server (e.g. Amazon S3), have a look at the answer in the following post: AngularJS SEO for static webpages (S3 CDN).
Yes, you need to pre-render the page for the bots. prerender.io can be used, and your page must have the meta tag:
<meta name="fragment" content="!">

Understanding ms-seo for meteor

I'm trying to use ms-seo package for meteor but I'm not understanding how it works.
It's supposed to add meta tags to your page for crawlers and social media (Google, Facebook, Twitter, etc.).
To see it working, according to the docs, all I should have to do is
meteor add manuelschoebel:ms-seo
and then add some defaults
Meteor.startup(function () {
  if (Meteor.isClient) {
    return SEO.config({
      title: 'Manuel Schoebel - MVP Development',
      meta: {
        'description': 'Manuel Schoebel develops Minimal Viable Products (MVP) for Startups',
      },
      og: {
        'image': 'http://manuel-schoebel.com/images/authors/manuel-schoebel.jpg',
      }
    });
  }
});
which I did but that code only executes on the client (browser). How is that helpful to search engines?
So I test it
curl http://localhost:3000
The results have no meta tags.
If in the browser I go to http://localhost:3000 and inspect the elements in the debugger, I see the tags; but if I check the source, I don't.
I don't understand how client-side-added tags have anything to do with SEO. I thought Google, Facebook, and Twitter, when scanning your page for meta tags, basically just do a single request, effectively the same as curl http://localhost:3000.
So how does this package actually do anything useful? I feel stupid. With 27k users it must work, but I don't understand how. Does it require the spiderable package to get static pages generated?
You are correct. You need to use something like the spiderable package or prerender.io to get this to work. This package will add tags, but like any Meteor page, it's rendered on the client.
Try this with curl to see the result when using spiderable:
curl http://localhost:3000/?_escaped_fragment_=
Google will now render the JS itself, so for Google to index your page correctly you don't need to use spiderable/prerender.io; but for other search engines I believe you still do.
An alternate answer:
Don't use spiderable, as it uses PhantomJS which is rather resource intensive when bots crawl your site.
Many Meteor devs are using Prerender these days, check it out.
If you still have some problems with social share buttons or the package, try reading this: https://webdevelopment7636.wordpress.com/2017/02/15/social-share-with-meteor/. It was the only way I got mine to work. You don't have to worry about PhantomJS or spiderable to make it work fine.
It is a complete tutorial using meteorhacks:ssr and meteorhacks:picker. You have to create a crawler filter on the server side and a route that will be called by it when it is activated. The route dynamically sends the template and the data to an HTML file in the "private" folder, and renders that HTML to the crawler. The template in the private folder is the one that gets the meta tags and the title tag.
This is the file that will be in the private folder.
I can't put the other links with the code here, but if you need any more help, go to the first link and see if the tutorial helps.

Django cannot find the djangular/app.js file via the urls.py config. How to troubleshoot?

I've spent a few days trying to figure this out and it's driving me up the wall. I'm limited on what I can copy and paste, so forgive the 'code brevity'. I also have a working version I developed and have uploaded it to GitHub.
I'm developing a Django website that also uses AngularJS, so I'm using the djangular package, specifically the bit that lets me import Django variables into Angular. This is the section from GitHub:
To use the AngularJS module that Djangular provides you, you'll need to add the djangular app to your project's URLs.
urlpatterns = patterns('',
    ...
    url(r'^djangular/', include('djangular.urls')),
    ...
)
And I've placed this in my project/urls.py file. I've done the same with my GitHub repository.
When I reference that URL in my appName/app/index.html, I do so like this:
<script src="{% static 'djangular/app.js' %}"></script>
But that leads to a 500 response from the server, as Angular produces the "Module 'djangular' is not available!" error. What should be happening is that the URL djangular/app.js in the script tag above redirects to urls.py inside the djangular folder in the Python site-packages, which then points to DjangularModuleTemplateView.as_view(). This seems to work in my GitHub version, but not in my local version for some reason.
If I have my script tag without the {% static '...' %} part, I still get a 500 with the same error:
<script src="/djangular/app.js"></script>
What config could I possibly have overlooked that's causing the app not to find the right Djangular config? I've stared at both configurations so long my eyes are glazing over, and I'm struggling to find any differences. What else could it be?
I'm more than happy to provide more details if needed to answer this question.
I managed to solve this myself by running ./manage.py syncdb.
This creates various tables (the user table and the session table, at least) which are required for Djangular to run.
Then I double-checked all my <script> imports/includes.

How to set content type of JavaScript files in Django

I have a Django application, which requires several JavaScript files.
In Chrome I get the error "Resource interpreted as Script, but transferred with MIME type text/html".
AFAIK (see 2) in order to fix this problem, I need to configure Django so that JavaScript files are returned with content-type "application/x-javascript".
How can I do this in Django?
UPDATE: I followed the advice by Daniel Roseman and found following solution.
1) Modify urls.py:
urlpatterns = patterns('',
    ...
    url(r'.*\.js$', java_script),
    ...
)
2) Add the following function to views.py:
def java_script(request):
    # Read the requested .js file relative to the working directory.
    # A development hack only; don't serve files like this in production.
    filename = request.path.strip("/")
    data = open(filename, "rb").read()
    # 'mimetype' is the old keyword argument; Django >= 1.7 uses 'content_type'
    return HttpResponse(data, mimetype="application/x-javascript")
I had an issue with Django serving JavaScript files as text/plain with the included server, which doesn't work too well with ES6 modules. I found out here that you can change file-extension associations by placing the following lines in your settings.py:
# settings.py
if DEBUG:
    import mimetypes
    mimetypes.add_type("application/javascript", ".js", True)
and JavaScript files were now served as application/javascript.
I suspect the problem is not what you think it is. What is probably actually happening is that your JS files are not being served at all: instead, the Django error page is being sent. You need to figure out why.
Expanding on Alexandre's answer: put the code below into the settings.py of your main project. After that you will need to clear your browser cache (you can test it by opening an incognito window as well) in order to get the debug panel to appear.
if DEBUG:
    import mimetypes
    mimetypes.add_type("application/javascript", ".js", True)
Since this doesn't prevent scripts from being interpreted correctly by the browser, why is this a problem? runserver is only for development (not production use), and as such is not a full-blown web server.
You should continue to use it in development and when you move to production configure your webserver appropriately for static files.
However, if you absolutely must use the development server to serve static files, see how to serve static files.
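A minimal sketch of that (development only; static() returns an empty list when DEBUG is False, and the STATIC_URL/STATIC_ROOT values are assumed to be set in settings.py):
from django.conf import settings
from django.conf.urls import patterns
from django.conf.urls.static import static

urlpatterns = patterns('',
    # ... your url patterns ...
) + static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)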
For Django, use a request context in your views:
return render_to_response('success.html', {'object': varobject}, context_instance=RequestContext(request))
I ran into that error today, even after adding @piephai's solution to my settings.py. I then noticed that @Daniel Roseman got it right as well:
My import paths were wrong; I had to add ".js" to all of them. For example:
import {HttpTool} from "./requests"; became import {HttpTool} from "./requests.js";
This makes sense after thinking about how routes for static files are generated.
The solution to the problem is described in the documentation:
For Windows, you need to edit the registry. Set HKEY_CLASSES_ROOT\.js\Content Type to text/javascript.

Rails Asset Pipeline: Return digest version when non-digest requested

I am providing a snippet for a client to paste into their static html that refers to my application.js file.
As this sits on a page that I do not have control over, and I do not want to ask the client to update their snippet every time I push a release, I am wondering if there is a way to return my digest application.js version when the normal one is requested, to ensure that the browser is getting the most recent version.
I can set a cache-busting timestamp on the script src, but I'm not sure this is really reliable.
Any thoughts on the best way to handle this?
We are doing something similar for our "public" JavaScript, which is integrated into a third-party web application.
The way we do this is by creating a symlink on our asset server during the Capistrano deployment that points to the non-digest name of the file. Since they are just files on our web server, Apache does the rest.
I think the most elegant way is to have a controller do a 302 redirect to your assets.
You can give your client a link like /public-assets/my_assets.
In your routes file, create the route:
match '/public-assets/:asset_name' => 'public_assets#index'
And create your PublicAssetsController:
class PublicAssetsController < ApplicationController
  def index
    redirect_to asset_path(params[:asset_name])
  end
end
