The Problem
I've recently experimented with HUGO, and it really surprised me how much I like it: it's simple and fast compared to Gatsby and NextJS, the static website builders I've dealt with before.
The most important drawback from my perspective is that HUGO produces traditional static websites, where each site-internal navigation re-requests all common dependencies and loses all state, whereas the other two build SPAs, where site-internal navigation does not cause common JavaScript to be reprocessed.
While the advantages of SPAs are many, I'm only focusing on the points regarding in-page navigation:
It's faster. It's important to keep site navigation fast to avoid exposing a re-rendering of common page elements to the user (experienced as annoying flickering of the menus or even the page background). Static website pages can be served in 50ms or so, but browsers usually can't do the rest quickly enough if they need to evaluate the scripts again.
For example, the Bootstrap docs, built with HUGO and quite responsive, are still not fast enough: they often flicker on navigation, even though the main content is loaded in less than 100ms.
State can persist. That's important because many Javascript tags used on traditional non-SPA websites can make use of that: For example, a chat window enters the screen in an animation, but only the first time - it then just stays where it is on internal navigation.
I'm asking here how I could keep using fast and simple HUGO and still get at least these two advantages.
A possible solution?
A somewhat hacky but very general approach I could think of would be to write a generic piece of Javascript that, just like SPAs,
intercepts browser navigation,
modifies all a tags to intercept clicks,
loads new pages with AJAX on those clicks rather than allowing browser handling,
changes the content and fixes the history.
In my case, I'd be perfectly happy with the restriction that the head element and the outer layout stay the same across pages (menus should live within that layout and still get loaded every time), except for a select few items such as the page title.
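Sketching the steps above as code (a minimal, hypothetical outline, not a complete SPA shim: it assumes the mutable region is a <main> element, and it punts on back/forward by falling back to a real reload):

```javascript
// Minimal sketch: turn internal link clicks into AJAX page swaps.
// isInternalLink is a pure helper; the wiring below assumes a browser
// and a <main> element wrapping the per-page content.
function isInternalLink(href, origin) {
  try {
    return new URL(href, origin).origin === origin;
  } catch (e) {
    return false; // not a parseable URL
  }
}

if (typeof document !== "undefined") {
  // Intercept clicks on all <a> tags (delegation covers future links too).
  document.addEventListener("click", async (event) => {
    const a = event.target.closest("a");
    if (!a || !isInternalLink(a.href, location.origin)) return;
    event.preventDefault(); // no full browser navigation
    const html = await (await fetch(a.href)).text(); // load the page with AJAX
    const doc = new DOMParser().parseFromString(html, "text/html");
    // Change only the mutable content; head and outer layout stay put.
    document.querySelector("main").replaceWith(doc.querySelector("main"));
    document.title = doc.title;
    history.pushState(null, "", a.href); // fix the history
  });
  // Crude back/forward handling: fall back to a real load.
  window.addEventListener("popstate", () => location.reload());
}
```

Libraries in this space do essentially the same thing, plus caching, error handling, and script re-execution rules.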
So, my questions here are
Is there an easier way?
Is there a problem with this approach I don't see?
Do I have to write this myself or is there already something I could build upon?
You can use Turbo by Hotwired in your Hugo project. You just need to install it via npm/yarn:
npm i @hotwired/turbo
and then import it in your JavaScript entry point, like this:
import * as Turbo from "@hotwired/turbo";
And now your Hugo website behaves like a SPA.
Maybe this site can serve as an example for you: https://www.petanikode.com/
Related
I'm trying to recreate the following "effect" for my portfolio but I've been out of web development for a while and I can't find my way around it. Hopefully someone here can give me a hint.
I'm trying to achieve a kind of smooth transition between pages as you can see for example on those websites.
www.say.studio
cedricklachot.com
www.durimel.io
When you switch from one page to another, it feels like everything is on the same page: it's so smooth, and navigation elements remain in place, as if the whole website were a single HTML file with the rest of the content loaded by JavaScript. But as I see from the URLs, there are different pages, so it must be switching between different HTML files. This "smooth switching", so to say, is what I can't find a way to replicate.
I have tried with onload animations like fading in effects, but still it's very clear that the browser is switching between different html pages so it definitely doesn't have the smoothness that I see on the examples I provided.
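One common way to approximate that smoothness without a full SPA is to fade the page out before navigating and fade it in on load. The following is only a hypothetical sketch (the 200 ms duration and the CSS rule `body { transition: opacity 200ms }` are assumptions, and state still resets between pages):

```javascript
// Hypothetical sketch: fade out before leaving, fade in after arriving.
// Assumes a CSS rule like: body { transition: opacity 200ms; }
const FADE_MS = 200;

function fadeDelay(ms) {
  // small promise-based delay so the fade-out can finish before navigating
  return new Promise((resolve) => setTimeout(resolve, ms));
}

if (typeof document !== "undefined") {
  document.addEventListener("click", async (event) => {
    const a = event.target.closest("a");
    if (!a || a.host !== location.host) return; // leave external links alone
    event.preventDefault();
    document.body.style.opacity = "0";          // fade out...
    await fadeDelay(FADE_MS);
    location.href = a.href;                     // ...then navigate for real
  });
  // On arrival, start transparent and fade in:
  document.body.style.opacity = "0";
  requestAnimationFrame(() => { document.body.style.opacity = "1"; });
}
```

The sites you linked very likely go further and swap content in place via AJAX, which is what keeps their navigation elements from reloading at all.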
I hope I explained myself well, as I'm not a native English speaker :) thanks
Welcome to Stack Overflow. The React framework is good for this: the page re-renders with updated components when a user changes it. You can also use PHP for this, which is a more primitive version in my opinion.
Here's a cool app: I use Wappalyzer for Firefox, which lets you see what technologies a page is using. When I examine your links:
Well, this one doesn't work: https://www.say.studio/cedricklachot.com
This one, https://www.durimel.io/nel, uses PHP on an Apache server.
So view the source code; then you're going to have to learn PHP (or copy and edit it).
The server is going to be tricky if you don't have experience. I'd recommend using AWS EC2 and doing an Ubuntu build with Apache (or Nginx).
Let us know when it's finished :)
I'm writing an Angular.js application. I want it to be really fast, so I serve it fully generated server-side on the initial load. After that, every change should be handled client-side by Angular, with asynchronous communication with the server.
I have an ng-view attribute on a central <div>. But now Angular regenerates the content of this <div> even on the first load, before any link is clicked. I don't want this behavior, because it makes the server-side generation of the page useless.
How to achieve that?
Although Gloopy's suggestion will work in some cases, it will fail in others (namely ng-repeat). AngularJS does not currently have the ability to render on the server, but this is something that (as far as I know) no other JavaScript framework does either. I also know that server-side rendering is something that the AngularJS developers are looking into, so you may yet see it in the not-too-distant future. :)
When you say you want the application to be "really fast," you should consider where exactly you want this speed. There are a lot of places to consider speed, such as time it takes to bootstrap the app, time it takes to respond, resource intensiveness, etc (you seem to be focusing on bootstrap time). There are often different trade-offs that must be made to balance performance in an application. I'd recommend reading this response to another question on performance with AngularJS for more on the subject: Angular.js Backbone.js or which has better performance
Are you actually running into performance issues, or is this just something you predict to be a problem? If it's the later, I'd recommend building a prototype representative of your type of application to see if it really is an issue. If it's the former and it's taking your app too long to bootstrap on the client side, there may be some optimizations that you can make (for instance, inlining some model data to avoid an additional round trip, or using Gloopy's suggestion). You can also use the profiling tools in Chrome, as well as the AngularJS Batarang to look for slow areas in your application.
btford: You are absolutely right that this sounds like premature optimization - it sounds that way to me too :). Another reason is that the application should work without JS in a very simple way, so this basic layout has to be rendered on the server anyway (and Angular does that for me for all other pages), so there will always be rendering on the server.
I found a very hacky, ugly solution - I bootstrap the application after the first click on any internal link. After the click, I unbind this initial callback, update the URL with History.pushState and bootstrap the app - it grabs the new URL and the regeneration is absolutely OK. Well, I'll keep looking into the not-too-distant future :).
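For reference, the workaround described above might look something like this. It is a hypothetical sketch: `env` bundles the browser globals and the Angular bootstrap call, so nothing here is tied to a real DOM, and a real version would use closest("a") to find the clicked link.

```javascript
// Hypothetical sketch of "bootstrap on first internal-link click":
// keep the server-rendered page untouched until the user navigates,
// then unbind this listener, push the target URL, and let Angular
// take over from there.
function deferredBootstrap(env) {
  function onFirstClick(event) {
    const a = event.target;
    if (!a || a.host !== env.host) return;        // only internal links
    event.preventDefault();
    env.document.removeEventListener("click", onFirstClick); // unbind
    env.history.pushState(null, "", a.pathname);  // URL of the target page
    env.bootstrap();                              // Angular grabs the new URL
  }
  env.document.addEventListener("click", onFirstClick);
  return onFirstClick; // returned so it can be exercised outside a browser
}
```

In a browser, env would be something like { host: location.host, document, history, bootstrap: () => angular.bootstrap(document, ["app"]) }.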
I was able to come up with a solution for doing this. It doesn't work perfectly or for everything, but it is ok at least as far as routing and the directive I made that uses ng-repeat.
https://github.com/ithkuil/angular-on-server/wiki/Running-AngularJS-on-the-server-with-Node.js-and-jsdom
I'm not a big fan of the way code is organized in the jqtouch examples I can find. So far, all I've seen are monolithic "index.html" files, which contain all the separate views for the iPhone app as separate divs.
Are there any examples out there of better organized jqtouch code?
I'm not looking for generic advice - I'd like to see specific examples of differently organized code.
What you're seeing is usually thought of as a feature of JQTouch, not as a negative "monolithic" style. -- Mobile networks tend to have a large time overhead per http request, so the general idea is to use the one request to download multiple small "pages" (as divs) all at once.
Of course, this paradigm may not fit your use case...
Added Re: alternatives: There are lots of mobile frameworks, see a list or Google. For JQTouch, you can return a response that includes only a single page if you wish to. The reason you're not seeing such examples is because the whole idea of the framework is to make it easy for the developer to return multiple "pages" as a single web server response.
For your server's responses which are a set of mobile pages, the multiple pages-at-a-time trick is the usual approach. For responses which include an infinite scroll page, or which have a lot of dynamic content, you can do Ajax updating of the mobile page, esp if you limit yourself to iPhone and Android browsers.
Overall, the per-request overhead is the big issue for good mobile web app performance. Anytime you can (or probably can) avoid a browser/server round-trip, you should aggressively do so.
Have you experimented with single page web applications, i.e. where the browser only 'GETs' one page from the server, the rest being handled by client-side JavaScript code (one good example of such an 'application page' is Gmail)?
What are some pro's and con's of going with this approach for simpler applications (such as blogs and CMSs)?
How do you go about designing such an application?
Edit: As mentioned in the responses, a difficulty is handling the back button, the refresh button, and bookmarking/copying the URL. The latter can be solved using location.hash; any clue about the remaining two issues?
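To make the location.hash idea concrete: the browser records every hash change in history and keeps the hash across reloads and bookmarks, so all three problems can funnel through one handler. A hypothetical sketch (`render` stands in for whatever function draws your view):

```javascript
// Hypothetical sketch of hash-based routing. routeFromHash is pure;
// the wiring assumes a browser and a render(route) function you provide.
function routeFromHash(hash) {
  // "#/posts/42" -> ["posts", "42"]; "" or "#" -> []
  return hash.replace(/^#\/?/, "").split("/").filter(Boolean);
}

if (typeof window !== "undefined") {
  // Back/forward fire hashchange, so the back button works:
  window.addEventListener("hashchange", () => render(routeFromHash(location.hash)));
  // Refresh and bookmarks keep the hash, so state can be restored on load:
  window.addEventListener("load", () => render(routeFromHash(location.hash)));
}
```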
I call these single page apps "long lived" apps.
For "simpler applications" as you put it it's terrible. Things that work OOTB for browsers all of a sudden need special care and attention:
the back button
the refresh button
bookmarking/copying url
Note I'm not saying you can't do these things with single-page apps; I'm saying you need to make the effort to build them into the app code. If you simply had different resources at different URLs, these would work with no additional developer effort.
Now, for complex apps like gmail, google maps, the benefits there are:
user-perceived responsiveness of the application can increase
the usability of the application may go up (eg scrollbars don't jump to the top on the new page when clicking on what the user thought was a small action)
no white screen flicker during the HTTP request->response
One concern with long-lived apps is memory leaks. Traditional sites that request a new page for each user action have the added benefit that the browser discards the DOM and any unused objects, so memory can be reclaimed. Newer browsers have different mechanisms for this, but let's take IE as an example. IE requires special care to clean up memory periodically during the lifetime of a long-lived app. This is made somewhat easier by today's libraries, but it is by no means a triviality.
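A common mitigation is to keep a registry of every handler you attach and detach them all before throwing DOM away. A minimal sketch, not tied to any particular library:

```javascript
// Hypothetical sketch: track attached listeners so a long-lived app can
// detach them before discarding DOM nodes (the manual cleanup old IE needed).
function createListenerRegistry() {
  const entries = [];
  return {
    add(target, type, handler) {
      target.addEventListener(type, handler);
      entries.push({ target, type, handler });
    },
    disposeAll() {
      // detach in reverse order of attachment
      while (entries.length) {
        const { target, type, handler } = entries.pop();
        target.removeEventListener(type, handler);
      }
      return entries.length; // 0 once everything is detached
    },
  };
}
```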
As with a lot of things, a hybrid approach is great. It allows you to leverage JavaScript for lazy-loading specific content while separating parts of the app by page/url.
One pro is that you get the full presentation power of JavaScript, as opposed to non-JavaScript web sites where the browser may flicker between pages and similar minor nuisances. You may notice lower bandwidth use as well, as a result of only dealing with the immediately important parts that need to be refreshed instead of getting a full web page back from the server.
The major con behind this is the accessibility concern. Users without JavaScript (or those who choose to disable it) can't use your web site unless you do some serious server-side coding to determine what to respond with depending on whether the request was made using AJAX or not. Depending on what (server-side) web framework you use, this can be either easy or extremely tedious.
It is not considered a good idea in general to have a web site which relies completely on the user having JavaScript.
One major con, and a major complaint of websites that have taken AJAX perhaps a bit too far, is that you lose the ability to bookmark pages that are "deep" into the content of the site. When a user bookmarks the page they will always get the "front" page of the site, regardless of what content they were looking at when they made the bookmark.
Maybe you should check SproutCore (Apple Used it for MobileMe) or Cappuccino, these are Javascript frameworks to make exactly that, designing desktop-like interfaces that only fetch responses from the server via JSON or XML.
Using either for a blog won't be a good idea, but a well designed desktop-like blog admin area may be a joy to use.
The main reason to avoid it is that taken alone it's extremely search-unfriendly. That's fine for webapps like GMail that don't need to be publically searchable, but for your blogs and CMS-driven sites it would be a disaster.
You could of course create the simple HTML version and then progressive-enhance it, but making it work nicely in both versions at once could be a bunch of work.
I was creating exactly this kind of page as webapps for the iPhone. My method was to really put everything in one huge index.html file and to hide or show certain content. This showing and hiding, i.e. the navigation of the page, I control in a dedicated JavaScript file containing the functions that handle the display of the parts of the page.
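A minimal sketch of that show/hide navigation (the page ids here are illustrative, not from the original app):

```javascript
// Hypothetical sketch: every "page" is a <div> in one index.html and
// navigation just toggles display. showPage is pure so it's easy to check.
function showPage(pageIds, activeId) {
  const displays = {};
  for (const id of pageIds) displays[id] = id === activeId ? "block" : "none";
  return displays;
}

// Browser wiring (assumes markup like <div id="home">, <div id="about">, ...):
// function navigate(activeId) {
//   const displays = showPage(["home", "about"], activeId);
//   for (const id in displays) {
//     document.getElementById(id).style.display = displays[id];
//   }
// }
```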
Pro: Everything is loaded in the beginning and you don't need to request anything from the server anymore, e.g. "switching" content and performing actions is very fast.
Con: First, everything has to load... that can take its time, if you have a lot of content that has to be shown immediately.
Another issue is that in case the connection goes down, the user will not really notice until he actually needs the server side. You can notice that in Gmail as well. (Sometimes it can even be a positive thing, though.)
Hope it helps! greets
Usually, you will take a framework like GWT, Echo2 or similar.
The advantage of this approach is that the application feels much more like a desktop app. When the server is fast enough, users won't notice the many little data packets that go back and forth. Also, loading a page from scratch is an expensive operation. If you just modify parts of it, the browser can keep a lot of the existing model in memory and just change the parts that changed.
Another advantage of these frameworks is that you can develop your application in pure Java. This means you can debug it in your IDE just like any other Java app, you can write unit tests and run them automatically, etc.
I'll add that on slower machines, a con is that a large amount of JavaScript will bring the browser to a screeching halt. Since all the rendering is done client-side, if the user doesn't have a higher-end computer, it will ruin the experience. My work computer is a P4 3.0GHz with 2 GB of RAM, and JavaScript-heavy sites cause it to chug along slower than molasses, which really kills the user experience for me.
This post probably will need some modification. I'll do my best to explain...
Basically, as a tester, I have noticed that programmers who use template-based web back ends sometimes push a lot of stuff into onload handlers that then do things like load menu items, change display values in forms, etc.
For example, a page that displays your network configuration loads blank (or dummy values) for the IP info, then loads a block of variables in an onload function that sets the values when the page is rendered.
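The pattern being described might look like this (field names are made up for illustration):

```javascript
// Hypothetical sketch of the criticized pattern: the markup ships dummy
// values and an onload handler patches the real ones in afterwards.
function applyConfig(fields, config) {
  // overwrite each dummy field value with the late-loaded config value
  for (const key of Object.keys(config)) {
    if (key in fields) fields[key].value = config[key];
  }
  return fields;
}

// In the page this would run from window.onload, e.g.:
//   window.onload = () => applyConfig(
//     { ip: document.getElementById("ip") },  // <input id="ip" value="0.0.0.0">
//     { ip: "192.168.0.10" }                  // variables dumped by the template
//   );
```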
My experience (and gut feeling) is that this is a really bad practice, for a couple reasons.
1- If the page is displayed in an environment where JavaScript is off (such as when using "Send Page"), it will not display properly in that environment.
2- The HTML page becomes very hard to diagnose, because what is actually on screen needs to be pieced together by executing the JavaScript in your head (this problem is less prominent with Firefox because of Firebug).
3- Most of the time, this is not being done via a standard practice or feature of the environment. In other words, there isn't a service on the back-end; the back-end code looks just as spaghetti as the resulting HTML.
and, not really a reason, more a correlation:
I have noticed that most coders that do this are generally the coders that have a lot of code-related bugs or critical integration bugs.
So, I'm not saying we shouldn't use JavaScript. I think what I'm saying is: when you produce a page dynamically, the dynamic behavior should be isolated to the back-end, and you should avoid changing the displayed information after the page is loaded and rendered.
I think what you're saying is what we should be doing is Progressive Enhancement with JavaScript.
Also related: Progressive Enhancement with CSS, Understanding Progressive Enhancement and Test-Driven Progressive Enhancement.
So the actual question is "What are advantages/disadvantages" of javascript content generation?
Here's one: a lot of the things designers want are hard in straight HTML/CSS, or not fully supported - using jQuery to do zebra tables with ":odd", for instance. Sometimes the server-side framework doesn't have good ways to accomplish this, so the way to get the cleanest code is actually to split it up like that.
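For the zebra-table case: jQuery's ":odd" selector matches elements at odd zero-based indices (1, 3, 5, ...). A small sketch, with the index logic pulled out so it's checkable:

```javascript
// zebraClasses mirrors what $("table tr:odd").addClass("odd") selects:
// jQuery's :odd matches zero-based indices 1, 3, 5, ...
function zebraClasses(rowCount) {
  const classes = [];
  for (let i = 0; i < rowCount; i++) classes.push(i % 2 === 1 ? "odd" : "");
  return classes;
}

// With jQuery loaded in the page, the one-liner is:
//   $("table tr:odd").addClass("odd");
```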