We have a REST service that returns a text/event-stream from a POST endpoint, which contains a series of JSON objects.
(It is a Spring Boot / Kotlin RestController that returns a kotlinx.coroutines.flow.Flow<SomeJSONObject>)
Now we want to consume this event stream in an Angular web application, processing each object as it arrives. Unfortunately, we don't know how this works.
We tried the obvious things, like:
this.http.post(url, request)
or
this.http.post(url, request).toPromise().then(value => ...
and
this.http.post(url, request).subscribe(value => ...
It seems like the browser does not even make a request and does not receive any data.
The backend service works fine; we can verify this by calling the endpoint with e.g. Postman.
It would be enough to have any hint of how this works in plain JavaScript; then it will also work in Angular.
text/event-stream is the MIME type associated with Server-Sent Events. You can read more about them and see some code examples here, and there are plenty of tutorials online showing how to use them with Angular, such as this one.
Here's a simple example of handling a Server-Sent Event in plain JavaScript; yourSSEURL is the URL that returns the text/event-stream:
if (typeof(EventSource) !== "undefined") {
    var source = new EventSource("yourSSEURL");
    source.onmessage = function(event) {
        document.getElementById("result").innerHTML += event.data + "<br>";
    };
} else {
    document.getElementById("result").innerHTML = "Sorry, your browser does not support server-sent events...";
}
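One caveat for the question as asked: EventSource only issues GET requests and cannot send a request body, so it won't help directly with a POST endpoint. Here is a minimal sketch (not part of the original answer, with a placeholder URL and request body) of reading a streamed text/event-stream POST response manually with fetch:
fetch('yourSSEURL', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ some: 'request' }) // placeholder request body
}).then(async response => {
    const reader = response.body.getReader();
    const decoder = new TextDecoder();
    let buffer = '';
    for (;;) {
        const { done, value } = await reader.read();
        if (done) break;
        buffer += decoder.decode(value, { stream: true });
        const events = buffer.split('\n\n');   // SSE events are separated by a blank line
        buffer = events.pop();                 // keep any incomplete event for the next chunk
        for (const event of events) {
            for (const line of event.split('\n')) {
                if (line.startsWith('data:')) {
                    const obj = JSON.parse(line.slice(5).trim());
                    console.log('received', obj); // process each JSON object as it arrives
                }
            }
        }
    }
});
Since this is plain browser API, the same approach can be wrapped in an Angular service.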
I understand JSON, but not JSONP. Wikipedia's document on JSON is (was) the top search result for JSONP. It says this:
JSONP or "JSON with padding" is a JSON extension wherein a prefix is specified as an input argument of the call itself.
Huh? What call? That doesn't make any sense to me. JSON is a data format. There's no call.
The 2nd search result is from some guy named Remy, who writes this about JSONP:
JSONP is script tag injection, passing the response from the server in to a user specified function.
I can sort of understand that, but it's still not making any sense.
So what is JSONP? Why was it created (what problem does it solve)? And why would I use it?
Addendum: I've just created a new page for JSONP on Wikipedia; it now has a clear and thorough description of JSONP, based on jvenema's answer.
It's actually not too complicated...
Say you're on domain example.com, and you want to make a request to domain example.net. To do so, you need to cross domain boundaries, a no-no in most of browserland.
The one item that bypasses this limitation is <script> tags. When you use a script tag, the domain limitation is ignored, but under normal circumstances you can't really do anything with the results; the script just gets evaluated.
Enter JSONP. When you make your request to a server that is JSONP enabled, you pass a special parameter that tells the server a little bit about your page. That way, the server is able to nicely wrap up its response in a way that your page can handle.
For example, say the server expects a parameter called callback to enable its JSONP capabilities. Then your request would look like:
http://www.example.net/sample.aspx?callback=mycallback
Without JSONP, this might return some basic JavaScript object, like so:
{ foo: 'bar' }
However, with JSONP, when the server receives the "callback" parameter, it wraps up the result a little differently, returning something like this:
mycallback({ foo: 'bar' });
As you can see, it will now invoke the method you specified. So, in your page, you define the callback function:
mycallback = function(data){
    alert(data.foo);
};
And now, when the script is loaded, it'll be evaluated, and your function will be executed. Voila, cross-domain requests!
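For completeness, here is a minimal sketch of how that request is usually issued in practice: a dynamically created script tag pointing at the JSONP URL (the URL and callback name are the example values from above, not a real service):
var script = document.createElement('script');
script.src = 'http://www.example.net/sample.aspx?callback=mycallback';
document.head.appendChild(script);
// When the response arrives, the browser evaluates it, which calls mycallback({ foo: 'bar' });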
It's also worth noting the one major issue with JSONP: you lose a lot of control over the request. For example, there is no "nice" way to get proper failure codes back. As a result, you end up using timers to monitor the request, etc., which is always a bit suspect. The proposed JSONRequest is a great solution for allowing cross-domain scripting, maintaining security, and allowing proper control of the request.
These days (2015), CORS is the recommended approach rather than JSONRequest. JSONP is still useful for older browser support, but given the security implications, CORS is the better choice unless you have no alternative.
JSONP is really a simple trick to overcome the XMLHttpRequest same-origin policy. (As you know, one cannot send an AJAX (XMLHttpRequest) request to a different domain.)
So, instead of using XMLHttpRequest, we have to use HTML script tags, the ones you usually use to load JS files, in order for JS to get data from another domain. Sounds weird?
Thing is, it turns out script tags can be used in a fashion similar to XMLHttpRequest! Check this out:
script = document.createElement('script');
script.type = 'text/javascript';
script.src = 'http://www.someWebApiServer.com/some-data';
You will end up with a script segment that looks like this after it loads the data:
<script>
['some string 1', 'some data', 'whatever data']
</script>
However, this is a bit inconvenient, because we have to fetch this array from the script tag. So the JSONP creators decided that this will work better (and it does):
script = document.createElement('script');
script.type = 'text/javascript';
script.src = 'http://www.someWebApiServer.com/some-data?callback=my_callback';
Notice the my_callback function over there? So, when the JSONP server receives your request and finds the callback parameter, instead of returning a plain JS array it'll return this:
my_callback(['some string 1', 'some data', 'whatever data']);
See where the profit is: now we get an automatic callback (my_callback) that'll be triggered once we get the data.
That's all there is to know about JSONP: it's a callback and script tags.
NOTE: these are simple examples of JSONP usage; they are not production-ready scripts.
Basic JavaScript example (simple Twitter feed using JSONP)
<html>
  <head>
  </head>
  <body>
    <div id="twitterFeed"></div>
    <script>
      function myCallback(dataWeGotViaJsonp) {
        var text = '';
        var len = dataWeGotViaJsonp.length;
        for (var i = 0; i < len; i++) {
          var twitterEntry = dataWeGotViaJsonp[i];
          text += '<p><img src="' + twitterEntry.user.profile_image_url_https + '"/>' + twitterEntry['text'] + '</p>';
        }
        document.getElementById('twitterFeed').innerHTML = text;
      }
    </script>
    <script type="text/javascript" src="http://twitter.com/status/user_timeline/padraicb.json?count=10&callback=myCallback"></script>
  </body>
</html>
Basic jQuery example (simple Twitter feed using JSONP)
<html>
  <head>
    <script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/1.6.2/jquery.min.js"></script>
    <script>
      $(document).ready(function() {
        $.ajax({
          url: 'http://twitter.com/status/user_timeline/padraicb.json?count=10',
          dataType: 'jsonp',
          success: function(dataWeGotViaJsonp) {
            var text = '';
            var len = dataWeGotViaJsonp.length;
            for (var i = 0; i < len; i++) {
              var twitterEntry = dataWeGotViaJsonp[i];
              text += '<p><img src="' + twitterEntry.user.profile_image_url_https + '"/>' + twitterEntry['text'] + '</p>';
            }
            $('#twitterFeed').html(text);
          }
        });
      });
    </script>
  </head>
  <body>
    <div id="twitterFeed"></div>
  </body>
</html>
JSONP stands for "JSON with Padding". (It is a very poorly named technique, as it really has nothing to do with what most people would think of as "padding".)
JSONP works by constructing a "script" element (either in HTML markup or inserted into the DOM via JavaScript) whose src points at a remote data service. The response is JavaScript loaded into your browser that invokes the pre-defined function by name, passing the requested JSON data as its argument. When the script executes, the function is called with the JSON data, allowing the requesting page to receive and process the data.
For Further Reading Visit: https://blogs.sap.com/2013/07/15/secret-behind-jsonp/
Client-side snippet of code:
<!DOCTYPE html>
<html lang="en">
  <head>
    <title>AvLabz - CORS : The Secrets Behind JSONP </title>
    <meta charset="UTF-8" />
  </head>
  <body>
    <input type="text" id="username" placeholder="Enter Your Name"/>
    <button type="submit" onclick="sendRequest()"> Send Request to Server </button>
    <script>
      "use strict";
      // Construct the script tag at runtime
      function requestServerCall(url) {
        var head = document.head;
        var script = document.createElement("script");
        script.setAttribute("src", url);
        head.appendChild(script);
        head.removeChild(script);
      }
      // Predefined callback function
      function jsonpCallback(data) {
        alert(data.message); // Response data from the server
      }
      // Reference to the input field
      var username = document.getElementById("username");
      // Send request to server
      function sendRequest() {
        // Edit with your Web Service URL
        requestServerCall("http://localhost/PHP_Series/CORS/myService.php?callback=jsonpCallback&message=" + username.value);
      }
    </script>
  </body>
</html>
Server-side piece of PHP code:
<?php
header("Content-Type: application/javascript");
$callback = $_GET["callback"];
$message = $_GET["message"]." you got a response from server yipeee!!!";
$jsonResponse = "{\"message\":\"" . $message . "\"}";
echo $callback . "(" . $jsonResponse . ")";
?>
This is my ELI5 (explain it like I'm 5) attempt for those that need it.
TL;DR
JSONP is an old trick invented to bypass the security restriction in web browsers that forbids us from getting data that lives on a different website/server (called a different origin¹) than our own.
The trick works by using a <script> tag to load a JSON (e.g.: { "city":"Barcelona" }) from somewhere else, that will send us the data wrapped in a function, the actual JSONP ("JSON with Padding"):
tourismJSONP({"city":"Barcelona"})
Receiving it in this way lets us use the data inside our tourismJSONP function. JSONP is a bad practice and isn't needed anymore; don't use it (see the end).
The problem
Say we want to use, on ourweb.com, some JSON data (or any raw data, really) hosted at anotherweb.com. If we were to use a GET request (think XMLHttpRequest, a fetch call, $.ajax, etc.), our browser would tell us it's not allowed with an ugly error.
This is a same-origin policy restriction; it's designed to protect users from certain attacks. The fix is to configure CORS properly (see the end).
How would the JSONP trick help us here? Well, <script> tags are not subject to this whole server (origin¹) restriction! That's why we can load a library like jQuery or Google Maps from any server.
Here's the important point: if you think about it, those libraries are actual, runnable JS code (usually a massive function with all the logic inside). But raw data is not code. There's nothing to run; it's just plain text.
Hence, the browser will download the data pointed at by our <script> tag and, when processing it, will rightfully complain:
wtf is this {"city":"Barcelona"} crap we loaded? It's not code. I can't compute!
The old JSONP hack
If only we could make plain text somehow runnable, we could grab it at runtime. We need anotherweb.com to send it as if it were code, so when it's downloaded the browser will run it. We just need two things: 1) to get the data in a way that it can be run, and 2) to write some code in the client so that when the data runs, our function is called and we get to use the data.
For 1), if the foreign server is JSONP-friendly, we'll ask for the data like this:
<script src="https://anotherweb.com/api/tourism-data.json?myCallback=tourismJSONP"></script>
So we'll receive it like this:
tourismJSONP({"city":"Barcelona"})
which is now JS code that we can interact with.
As per 2), we need to write a function with the same name in our code, like this:
function tourismJSONP(data){
    alert(data.city); // "Barcelona"
}
The browser will download the JSONP and run it, which calls our function, where the argument data will be the JSON data from anotherweb.com. We can now do with our data whatever we want to.
Don't use JSONP, use CORS
JSONP is a cross-site hack with a few downsides:
We can only perform GET requests
Since it's a GET request triggered by a simple script tag, we don't get helpful errors or progress info
There are also some security concerns, like running, in your client, JS code that could be swapped for a malicious payload
It only solves the problem for JSON data; the same-origin policy also restricts other data (web fonts, images/video drawn with drawImage()...)
It's neither very elegant nor readable.
The takeaway is that there's no need to use it nowadays.
You should read about CORS here, but the gist of it is:
Cross-Origin Resource Sharing (CORS) is a mechanism that uses additional HTTP headers to tell browsers to give a web application running at one origin access to selected resources from a different origin. A web application executes a cross-origin HTTP request when it requests a resource that has a different origin (domain, protocol, or port) from its own.
¹ An origin is defined by three things: protocol, port, and host. So https://web.com is a different origin than http://web.com (different protocol), https://web.com:8081 (different port), and obviously https://thatotherweb.net (different host).
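To make the contrast with JSONP concrete, here is a rough sketch of the CORS flow, reusing the made-up domains from this answer: the client just makes an ordinary request, and the server opts in with a response header.
// On ourweb.com, a plain request to the other origin:
fetch('https://anotherweb.com/api/tourism-data.json')
    .then(response => response.json())
    .then(data => console.log(data.city)); // "Barcelona"

// For this to succeed, anotherweb.com's response must include a header like:
//   Access-Control-Allow-Origin: https://ourweb.com   (or * to allow any origin)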
Because you can ask the server to prepend a prefix to the returned JSON object, e.g.:
function_prefix(json_object);
so that the browser evaluates the JSON string "inline" as an expression. This trick makes it possible for the server to "inject" JavaScript code directly into the client browser, bypassing the "same origin" restrictions.
In other words, you can achieve cross-domain data exchange.
Normally, XMLHttpRequest doesn't permit cross-domain data exchange directly (one needs to go through a server in the same domain), whereas with:
<script src="some_other_domain/some_data.js&prefix=function_prefix"> one can access data from a domain other than the origin.
Also worth noting: even though the server should be considered "trusted" before attempting that sort of "trick", the side effects of a possible change in object format, etc. can be contained. If a function_prefix (i.e. a proper JS function) is used to receive the JSON object, that function can perform checks before accepting and further processing the returned data.
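A tiny sketch of that last point (the payload shape is made up purely for illustration): the receiving function can refuse anything it doesn't recognize before processing it.
function function_prefix(json_object) {
    // Sanity-check the shape of the returned object before trusting it.
    if (!json_object || typeof json_object.foo !== "string") {
        console.warn("Unexpected JSONP payload, ignoring it");
        return;
    }
    console.log(json_object.foo); // safe to process further
}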
JSONP is a great way to get around cross-domain scripting errors. You can consume a JSONP service purely with JS, without having to implement an AJAX proxy on the server side.
You can use the b1t.co service to see how it works. This is a free JSONP service that allows you to minify your URLs. Here is the URL to use for the service:
http://b1t.co/Site/api/External/MakeUrlWithGet?callback=[resultsCallBack]&url=[escapedUrlToMinify]
For example the call, http://b1t.co/Site/api/External/MakeUrlWithGet?callback=whateverJavascriptName&url=google.com
would return
whateverJavascriptName({"success":true,"url":"http://google.com","shortUrl":"http://b1t.co/54"});
And thus, when that gets loaded in your JS as a src, it will automatically run the callback named in the URL, which you should implement as your callback function:
function minifyResultsCallBack(data)
{
    document.getElementById("results").innerHTML = JSON.stringify(data);
}
To actually make the JSONP call, you can do it in several ways (including using jQuery), but here is a pure JS example:
function minify(urlToMinify)
{
    var url = encodeURIComponent(urlToMinify); // encodeURIComponent instead of the deprecated escape()
    var s = document.createElement('script');
    s.id = 'dynScript';
    s.type = 'text/javascript';
    s.src = "http://b1t.co/Site/api/External/MakeUrlWithGet?callback=minifyResultsCallBack&url=" + url;
    document.getElementsByTagName('head')[0].appendChild(s);
}
A step-by-step example and a JSONP web service to practice on are available at: this post
A simple example of JSONP usage.
client.html
<html>
  <head>
  </head>
  <body>
    <input type="button" id="001" onclick='gO("getCompany")' value="Company" />
    <input type="button" id="002" onclick='gO("getPosition")' value="Position"/>
    <h3>
      <div id="101">
      </div>
    </h3>
    <script type="text/javascript">
      var elem = document.getElementById("101");
      function gO(callback) {
        var script = document.createElement('script');
        script.type = 'text/javascript';
        script.src = 'http://localhost/test/server.php?callback=' + callback;
        elem.appendChild(script);
        elem.removeChild(script);
      }
      function getCompany(data) {
        var message = "The company you work for is " + data.company + "<img src='" + data.image + "'/ >";
        elem.innerHTML = message;
      }
      function getPosition(data) {
        var message = "The position you are offered is " + data.position;
        elem.innerHTML = message;
      }
    </script>
  </body>
</html>
server.php
<?php
$callback = $_GET["callback"];
echo $callback;
if ($callback == 'getCompany')
    $response = "({\"company\":\"Google\",\"image\":\"xyz.jpg\"})";
else
    $response = "({\"position\":\"Development Intern\"})";
echo $response;
?>
Before understanding JSONP, you need to know about the JSON format and XML. A very common data format on the web is XML, but XML is quite complicated, which makes it inconvenient to process when embedded in web pages.
To let JavaScript exchange data easily, and even act on it as a data-processing program, a simple data exchange format written as JavaScript objects was developed: JSON. JSON can be used as data, or directly as a JavaScript program.
JSON can be embedded directly in JavaScript and executed, but due to security constraints the browser's sandbox mechanism disables cross-domain JSON code execution.
To allow JSON to be passed and then executed across domains, JSONP was developed. JSONP bypasses this browser security limit using a JavaScript callback function and the <script> tag.
So, in short, that explains what JSONP is and what problem it solves (when to use it).
JSONP stands for JSON with Padding.
Here is a site with great examples, explaining this technique from its simplest use to the most advanced, in plain JavaScript:
w3schools.com / JSONP
One of my favorite techniques described there is Dynamic JSON Result, which allows you to send JSON to the PHP file in a URL parameter and lets the PHP file return a JSON object based on the information it gets.
Tools like jQuery also have facilities to use JSONP:
jQuery.ajax({
    url: "https://data.acgov.org/resource/k9se-aps6.json?city=Berkeley",
    jsonp: "callbackName",
    dataType: "jsonp"
}).done(
    response => console.log(response)
);
Background
You should look to use CORS where possible (i.e. your server or API supports it, and the browser support is adequate), as JSONP has inherent security risks.
Examples
JSONP (JSON with Padding) is a method commonly used to bypass the cross-domain policies in web browsers. (You are not allowed to make AJAX requests to a web page perceived to be on a different server by the browser.)
JSON and JSONP behave differently on the client and the server. JSONP requests are not dispatched using XMLHttpRequest and the associated browser methods. Instead, a <script> tag is created, whose source is set to the target URL. This script tag is then added to the DOM (normally inside the <head> element).
JSON Request:
var xhr = new XMLHttpRequest();
xhr.onreadystatechange = function () {
    if (xhr.readyState == 4 && xhr.status == 200) {
        // success
    }
};
xhr.open("GET", "somewhere.php", true);
xhr.send();
JSONP Request:
var tag = document.createElement("script");
tag.src = 'somewhere_else.php?callback=foo';
document.getElementsByTagName("head")[0].appendChild(tag);
The difference between a JSON response and a JSONP response is that the JSONP response object is passed as an argument to a callback function.
JSON:
{ "bar": "baz" }
JSONP:
foo( { "bar": "baz" } );
This is why you see JSONP requests containing the callback parameter so that the server knows the name of the function to wrap the response.
This function must exist in the global scope at the time the <script> tag is evaluated by the browser (once the request has been completed).
Another difference to be aware of between the handling of a JSON response and a JSONP response is that any parse errors in a JSON response could be caught by wrapping the attempt to evaluate the responseText in a try/catch statement. Because of the nature of a JSONP response, parse errors in the response will cause an uncatchable JavaScript parse error.
Both formats can implement timeout errors by setting a timeout before initiating the request and clearing the timeout in the response handler.
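For example, a rough sketch of that timeout technique, reusing the hypothetical somewhere_else.php endpoint and foo callback from this answer:
var timedOut = false;
var timer = setTimeout(function () {
    timedOut = true;
    // handle the failure here, e.g. show an error message
}, 5000);

function foo(response) {      // the JSONP callback
    if (timedOut) return;     // the response arrived too late; ignore it
    clearTimeout(timer);      // success: cancel the timeout
    document.getElementById("output").innerHTML = response.bar;
}

var tag = document.createElement("script");
tag.src = 'somewhere_else.php?callback=foo';
document.getElementsByTagName("head")[0].appendChild(tag);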
Using jQuery
The usefulness of using jQuery to make JSONP requests, is that jQuery does all of the work for you in the background.
By default, jQuery requires you to include &callback=? in the URL of your AJAX request. jQuery will take the success function you specify, assign it a unique name, and publish it in the global scope. It will then replace the question mark ? in &callback=? with the name it has assigned.
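As a short sketch of what that looks like (reusing the hypothetical somewhere_else.php endpoint from this answer):
$.ajax({
    url: "somewhere_else.php?callback=?",
    dataType: "jsonp",
    success: function (response) {
        // jQuery has replaced the "?" with an auto-generated global callback name
        document.getElementById("output").innerHTML = response.bar;
    }
});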
Comparing similar JSON and JSONP Implementations
The following assumes a response object { "bar" : "baz" }
JSON:
var xhr = new XMLHttpRequest();
xhr.onreadystatechange = function () {
    if (xhr.readyState == 4 && xhr.status == 200) {
        document.getElementById("output").innerHTML = eval('(' + this.responseText + ')').bar;
    }
};
xhr.open("GET", "somewhere.php", true);
xhr.send();
JSONP:
function foo(response) {
    document.getElementById("output").innerHTML = response.bar;
}

var tag = document.createElement("script");
tag.src = 'somewhere_else.php?callback=foo';
document.getElementsByTagName("head")[0].appendChild(tag);
I'm writing a webapp in Angular where authentication is handled by a JWT token, meaning that every request has an "Authentication" header with all the necessary information.
This works nicely for REST calls, but I don't understand how I should handle download links for files hosted on the backend (the files reside on the same server where the webservices are hosted).
I can't use regular <a href='...'/> links since they won't carry any header and the authentication will fail. Same for the various incantations of window.open(...).
Some solutions I thought of:
Generate a temporary unsecured download link on the server
Pass the authentication information as an url parameter and manually handle the case
Get the data through XHR and save the file client side.
All of the above are less than satisfactory.
1 is the solution I am using right now. I don't like it for two reasons: first, it is not ideal security-wise; second, it works but it requires quite a lot of work, especially on the server: to download something I need to call a service that generates a new "random" URL, stores it somewhere (possibly in the DB) for some time, and returns it to the client. The client takes the URL and uses window.open or similar with it. When requested, the new URL should check whether it is still valid and then return the data.
2 seems at least as much work.
3 seems like a lot of work, even using available libraries, and has a lot of potential issues. (I would need to provide my own download status bar, load the whole file into memory, and then ask the user to save the file locally.)
The task seems a pretty basic one though, so I'm wondering if there is anything much simpler that I can use.
I'm not necessarily looking for a solution "the Angular way". Regular Javascript would be fine.
Here's a way to download it on the client using the download attribute, the fetch API, and URL.createObjectURL. You would fetch the file using your JWT, convert the payload into a blob, put the blob into an object URL, set the source of an anchor tag to that object URL, and click that anchor in JavaScript.
let anchor = document.createElement("a");
document.body.appendChild(anchor);
let file = 'https://www.example.com/some-file.pdf';
let headers = new Headers();
headers.append('Authorization', 'Bearer MY-TOKEN');
fetch(file, { headers })
    .then(response => response.blob())
    .then(blobby => {
        let objectUrl = window.URL.createObjectURL(blobby);
        anchor.href = objectUrl;
        anchor.download = 'some-file.pdf';
        anchor.click();
        window.URL.revokeObjectURL(objectUrl);
    });
The value of the download attribute will be the eventual file name. If desired, you can mine an intended filename out of the content disposition response header as described in other answers.
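For example, here is a rough sketch of mining that filename (not part of the original snippet; it reuses the file, headers, and anchor variables from above, and assumes the server makes Content-Disposition readable, e.g. via Access-Control-Expose-Headers for cross-origin requests):
fetch(file, { headers })
    .then(response => {
        const disposition = response.headers.get('Content-Disposition') || '';
        const match = disposition.match(/filename="?([^";]+)"?/);
        const filename = match ? match[1] : 'download';
        return response.blob().then(blob => ({ blob, filename }));
    })
    .then(({ blob, filename }) => {
        const objectUrl = window.URL.createObjectURL(blob);
        anchor.href = objectUrl;
        anchor.download = filename; // use the server-provided name instead of hard-coding it
        anchor.click();
        window.URL.revokeObjectURL(objectUrl);
    });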
Technique
Based on this advice from Matias Woloski of Auth0, a well-known JWT evangelist, I solved it by generating a signed request with Hawk.
Quoting Woloski:
The way you solve this is by generating a signed request like AWS does, for example.
Here you have an example of this technique, used for activation links.
backend
I created an API to sign my download urls:
Request:
POST /api/sign
Content-Type: application/json
Authorization: Bearer...
{"url": "https://path.to/protected.file"}
Response:
{"url": "https://path.to/protected.file?bewit=NTUzMDYzZTQ2NDYxNzQwMGFlMDMwMDAwXDE0NTU2MzU5OThcZDBIeEplRHJLVVFRWTY0OWFFZUVEaGpMOWJlVTk2czA0cmN6UU4zZndTOD1c"}
With a signed URL, we can get the file
Request:
GET https://path.to/protected.file?bewit=NTUzMDYzZTQ2NDYxNzQwMGFlMDMwMDAwXDE0NTU2MzU5OThcZDBIeEplRHJLVVFRWTY0OWFFZUVEaGpMOWJlVTk2czA0cmN6UU4zZndTOD1c
Response:
Content-Type: multipart/mixed; charset="UTF-8"
Content-Disposition: attachment; filename=protected.file
{BLOB}
frontend (by jojoyuji)
This way you can do it all on a single user click:
function clickedOnDownloadButton() {
    postToSignWithAuthorizationHeader({
        url: 'https://path.to/protected.file'
    }).then(function(signed) {
        window.location = signed.url;
    });
}
An alternative to the existing "fetch/createObjectURL" and "download-token" approaches already mentioned is a standard Form POST that targets a new window. Once the browser reads the attachment header on the server response, it will close the new tab and begin the download. This same approach also happens to work nicely for displaying a resource like a PDF in a new tab.
This has better support for older browsers and avoids having to manage a new type of token. This will also have better long-term support than basic auth on the URL, since support for username/password in the URL is being removed by browsers.
On the client-side we use target="_blank" to avoid navigation even in failure cases, which is particularly important for SPAs (single page apps).
The major caveat is that the server-side JWT validation has to get the token from the POST data and not from the header. If your framework manages access to route handlers automatically using the Authentication header, you may need to mark your handler as unauthenticated/anonymous so that you can manually validate the JWT to ensure proper authorization.
The form can be dynamically created and immediately destroyed so that it is properly cleaned up (note: this can be done in plain JS, but jQuery is used here for clarity):
function DownloadWithJwtViaFormPost(url, id, token) {
var jwtInput = $('<input type="hidden" name="jwtToken">').val(token);
var idInput = $('<input type="hidden" name="id">').val(id);
$('<form method="post" target="_blank"></form>')
.attr("action", url)
.append(jwtInput)
.append(idInput)
.appendTo('body')
.submit()
.remove();
}
Just add any extra data you need to submit as hidden inputs and make sure they are appended to the form.
Pure JS version of James' answer
function downloadFile (url, token) {
let form = document.createElement('form')
form.method = 'post'
form.target = '_blank'
form.action = url
form.innerHTML = '<input type="hidden" name="jwtToken" value="' + token + '">'
console.log('form:', form)
document.body.appendChild(form)
form.submit()
document.body.removeChild(form)
}
I would generate tokens for download.
Within Angular, make an authenticated request to obtain a temporary token (valid for, say, an hour), then add it to the URL as a GET parameter. This way you can download files in any way you like (window.open, ...).
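A rough sketch of the idea, with made-up endpoint names and response shape:
const jwt = '...your existing JWT...';

// Exchange the JWT for a short-lived download token...
fetch('/api/download-token', {
    method: 'POST',
    headers: { 'Authorization': 'Bearer ' + jwt }
})
    .then(response => response.json())
    .then(({ token }) => {
        // ...then open a plain URL; the server validates the token and its expiry.
        window.open('/api/files/report.pdf?token=' + encodeURIComponent(token));
    });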
I have a news project with a comment feature. Anyone who adds a comment can see their comment immediately without reloading the page (using AJAX). The problem is that when user1 (for example) comments on post1, only user1 can see the comment immediately; all other users need to reload the page to see user1's comment. How can I solve this problem?
The code I am using to post the comment:
$(function () {
    $("#AddComment").click(function () {
        var CommentText = document.getElementById("CommetForm").innerHTML;
        var UserName = document.getElementById("UserName").innerHTML;
        var PostId = document.getElementById("PostId").innerHTML;
        $.ajax({
            url: '/PostComment/AddComment',
            type: 'POST',
            dataType: 'json',
            cache: false,
            data: { "PostId": PostId, "CommentText": CommentText },
            success: function (data) {
                if (data == "P") // Comment stored in the database successfully
                {
                    document.getElementById("PostComments-" + PostId).innerHTML +=
                        "<li>" +
                            "<div class='media'>" +
                                "<div class='media-body'>" +
                                    "<a href='' class='comment-author'>" + UserName + "</a>" +
                                    "<span class='CommetText' id='CommentText-" + PostId + "'>" + CommentText + "</span>" +
                                "</div>" +
                            "</div>" +
                        "</li>";
                }
                else // Some error occurred while storing to the database
                {
                    document.getElementById("CommentError-" + PostId).innerHTML = "\nSomething went wrong, please try again";
                }
            }
        });
    });
});
And this code is for storing the comment in the database:
private SocialMediaDatabaseContext db = new SocialMediaDatabaseContext();

[HttpPost]
public JsonResult AddComment(string PostId, string CommentText)
{
    try
    {
        Users CurrentUser = (Users)Session["CurrentUser"];
        PostComment postcomment = new PostComment();
        CommentText = System.Uri.UnescapeDataString(CommentText);
        postcomment.PostId = int.Parse(PostId);
        postcomment.CommentFromId = CurrentUser.UserId;
        postcomment.CommentText = CommentText;
        postcomment.CommentDate = DateTime.Now;
        db.PostComments.Add(postcomment);
        db.SaveChanges();
        return Json("P");
    }
    catch
    {
        return Json("F");
    }
}
I suggest you use SignalR for this. http://www.asp.net/signalr/overview/getting-started/introduction-to-signalr
TL;DR: You can use setInterval or WebSockets to accomplish this. Below I explain how.
First of all, we need to understand what is behind the publish/subscribe pattern. Since you want to build a real-time application, you can create a function that asks your server whether any data has been added since the last time it checked.
USING WindowTimers.setInterval()
Here is the simplest way to accomplish this, in my view, assuming it's your first time and you have never worked with WebSockets before. In your client-side project, you register a function with setInterval: setInterval(checkNewData, time). Your method checkNewData() will make an AJAX request to your server, asking whether some data was added recently:
function checkNewData() {
    // AJAX call
    // On response, if new comments were found, inject them into the DOM
}
Then, in your server-side method, use the timestamp of the client's previous check and verify in your database whether there is any newer data. Something like this:
// Method written in PHP
public function ajax_checkNewData() {
    // $time should be the timestamp of the client's previous check,
    // e.g. passed along with the AJAX request.
    $time = time();
    // Ask your model/controller if there is something new for us:
    // SELECT comment FROM comments WHERE timestamp > $time
    // Then return its response
}
You will use the response that comes from your controller method ajax_checkNewData() to write into your comments container.
USING WEBSOCKETS (beautiful way)
Now, there is another way to do this, using WebSockets. HTML5 WebSocket represents the first major upgrade in the history of web communications. Before WebSocket, all communication between web clients and servers relied only on HTTP. Now, dynamic data can flow freely over WebSocket connections that are persistent (always on), full duplex (simultaneously bi-directional) and blazingly fast. Amongst the different libraries and frameworks, you can use socket.io. I believe this will solve your real-time application problem pretty well, but I am not sure how much of your project you would need to change to suit this solution.
Check out the simple chat tutorial from the Socket.IO page and see for yourself if it fits your needs. It's pretty neat and would be a good challenge to implement. Since it's event-driven, I believe you won't have problems implementing it.
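A minimal sketch of the Socket.IO approach (event names are made up for the example): the server rebroadcasts each new comment to every connected client, so nobody has to reload.
// Server (Node.js):
const io = require('socket.io')(3000);
io.on('connection', socket => {
    socket.on('newComment', comment => {
        // after saving the comment to the database, push it to everyone
        io.emit('commentAdded', comment);
    });
});

// Client (browser, after loading the socket.io client script):
const socket = io('http://localhost:3000');
socket.emit('newComment', { postId: 1, text: 'Nice article!' });
socket.on('commentAdded', comment => {
    // append the comment to the DOM for all users, no reload needed
});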
For further information check it out:
REFERENCES
Get Started: Chat application - http://socket.io/get-started/chat/
Websockets - http://en.wikipedia.org/wiki/WebSocket
WebSockets - https://developer.mozilla.org/en/docs/WebSockets
Good luck!
You could write JavaScript code which makes an AJAX call to a servlet that checks for updates in the database.
Return a flag to the success function of the AJAX call, and if the state has changed or any comment has been added to the database, you can reload the page or refresh the section containing the comments with the new comments.
The comment doesn't show up on other pages because only user1's page makes the AJAX call, so only it updates. The other pages don't 'know' they are supposed to reload via AJAX. You need some kind of timed loop running that checks for any changes. Either of the above answers should work for it.
You could use SignalR for this; it lets you send real-time messages between the server and clients. Here is a sample showing how to implement SignalR in ASP.NET MVC:
https://github.com/WaleedChayeb/SignalRChatApp
How do I access a page's HTTP response headers via JavaScript?
Related to this question, which was modified to ask about accessing two specific HTTP headers.
Related:
How do I access the HTTP request header fields via JavaScript?
It's not possible to read the current headers. You could make another request to the same URL and read its headers, but there is no guarantee that the headers are exactly equal to those of the current page.
Use the following JavaScript code to get all the HTTP headers by performing a GET request:
var req = new XMLHttpRequest();
req.open('GET', document.location, false);
req.send(null);
var headers = req.getAllResponseHeaders().toLowerCase();
alert(headers);
Unfortunately, there isn't an API to give you the HTTP response headers for your initial page request. That was the original question posted here. It has been repeatedly asked, too, because some people would like to get the actual response headers of the original page request without issuing another one.
For AJAX Requests:
If an HTTP request is made over AJAX, it is possible to get the response headers with the getAllResponseHeaders() method. It's part of the XMLHttpRequest API. To see how this can be applied, check out the fetchSimilarHeaders() function below. Note that this is a work-around to the problem that won't be reliable for some applications.
myXMLHttpRequest.getAllResponseHeaders();
The API was specified in the following candidate recommendation for XMLHttpRequest: XMLHttpRequest - W3C Candidate Recommendation 3 August 2010
Specifically, the getAllResponseHeaders() method was specified in the following section: w3.org: XMLHttpRequest: the getallresponseheaders() method
The MDN documentation is good, too: developer.mozilla.org: XMLHttpRequest.
This will not give you information about the original page request's HTTP response headers, but it could be used to make educated guesses about what those headers were. More on that is described next.
Getting header values from the Initial Page Request:
This question was first asked several years ago, asking specifically about how to get at the original HTTP response headers for the current page (i.e. the same page inside of which the javascript was running). This is quite a different question than simply getting the response headers for any HTTP request. For the initial page request, the headers aren't readily available to javascript. Whether the header values you need will be reliably and sufficiently consistent if you request the same page again via AJAX will depend on your particular application.
The following are a few suggestions for getting around that problem.
1. Requests on Resources which are largely static
If the response is largely static and the headers are not expected to change much between requests, you could make an AJAX request for the same page you're currently on and assume that its response headers are the same values which were part of the page's original HTTP response. This could allow you to access the headers you need using the nice XMLHttpRequest API described above.
function fetchSimilarHeaders (callback) {
    var request = new XMLHttpRequest();
    request.onreadystatechange = function () {
        if (request.readyState === XMLHttpRequest.DONE) {
            //
            // The following headers may often be similar
            // to those of the original page request...
            //
            if (callback && typeof callback === 'function') {
                callback(request.getAllResponseHeaders());
            }
        }
    };
    //
    // Re-request the same page (document.location).
    // We hope to get the same or similar response headers to those which
    // came with the current page, but we have no guarantee.
    // Since we are only after the headers, a HEAD request may be sufficient.
    //
    request.open('HEAD', document.location, true);
    request.send(null);
}
This approach will be problematic if you truly have to rely on the values being consistent between requests, since you can't fully guarantee that they are the same. It's going to depend on your specific application and whether you know that the value you need is something that won't be changing from one request to the next.
2. Make Inferences
There are some BOM properties (Browser Object Model) which the browser determines by looking at the headers. Some of these properties reflect HTTP headers directly (e.g. navigator.userAgent is set to the value of the HTTP User-Agent header field). By sniffing around the available properties you might be able to find what you need, or some clues to indicate what the HTTP response contained.
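A few illustrative examples of such properties (not an exhaustive list):
console.log(navigator.userAgent);    // mirrors the User-Agent request header
console.log(document.referrer);      // mirrors the Referer request header
console.log(document.lastModified);  // derived from the Last-Modified response header
console.log(document.characterSet);  // usually derived from the Content-Type charset
console.log(document.cookie);        // non-HttpOnly cookies set via Set-Cookie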
3. Stash them
If you control the server side, you can access any header you like as you construct the full response. Values could be passed to the client with the page, stashed in some markup or perhaps in an inlined JSON structure. If you wanted to have every HTTP request header available to your javascript, you could iterate through them on the server and send them back as hidden values in the markup. It's probably not ideal to send header values this way, but you could certainly do it for the specific value you need. This solution is arguably inefficient, too, but it would do the job if you needed it.
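For instance, a rough sketch of that stashing idea, assuming you can edit the server-side template (the element id and header names are made up):
<!-- rendered by the server into the page -->
<script type="application/json" id="stashed-headers">
    {"X-Request-Id":"abc123","Accept-Language":"en-US"}
</script>

// read back on the client
var stash = document.getElementById('stashed-headers');
var headers = stash ? JSON.parse(stash.textContent) : {};
console.log(headers['Accept-Language']);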
Using XMLHttpRequest you can pull up the current page and then examine the HTTP headers of the response.
The best case is to just do a HEAD request and then examine the headers.
For some examples of doing this, have a look at http://www.jibbering.com/2002/4/httprequest.html
Just my 2 cents.
A solution with Service Workers
Service workers are able to access network information, which includes headers. The good part is that it works on any kind of request, not just XMLHttpRequest.
How it works:
Add a service worker on your website.
Watch every request that's being sent.
Make the service worker fetch the request with the respondWith function.
When the response arrives, read the headers.
Send the headers from the service worker to the page with the postMessage function.
Working example:
Service workers are a bit complicated to understand, so I've built a small library that does all this. It is available on github: https://github.com/gmetais/sw-get-headers.
Limitations:
the website needs to be on HTTPS
the browser needs to support the Service Workers API
the same-domain/cross-domain policies are in action, just like on XMLHttpRequest
Another way to send header information to JavaScript would be through cookies. The server can extract whatever data it needs from the request headers and send it back inside a Set-Cookie response header, and cookies can be read in JavaScript. As keparo says, though, it's best to do this for just one or two headers, rather than for all of them.
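A small sketch of that cookie route (the cookie name is made up): the server sends, say, Set-Cookie: original_content_type=text/html, and the client reads it back:
function readCookie(name) {
    var match = document.cookie.match(new RegExp('(?:^|; )' + name + '=([^;]*)'));
    return match ? decodeURIComponent(match[1]) : null;
}
console.log(readCookie('original_content_type'));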
(2021) An answer without additional HTTP call
While it's not possible in general to read arbitrary HTTP response headers of the top-level HTML navigation, if you control the server (or middleboxes on the way) and want to expose some info to JavaScript that can't be exposed easily in any other way than via a header:
You may use Server-Timing header to expose arbitrary key-value data, and it will be readable by JavaScript.
(*in supported browsers: Firefox 61, Chrome 65, Edge 79; no Safari yet and no immediate plans for shipping as of 2021.09; no IE)
Example:
server-timing: key;desc="value"
You can use this header multiple times for multiple pieces of data:
server-timing: key1;desc="value1"
server-timing: key2;desc="value2"
or use its compact version where you expose multiple pieces of data in one header, comma-separated.
server-timing: key1;desc="value1", key2;desc="value2"
Example of how Wikipedia uses this header to expose cache hit/miss info: server-timing: cache;desc="hit-front"
Code example (need to account for lack of browser support in Safari and IE):
if (window.performance && performance.getEntriesByType) { // avoid error in Safari 10, IE9- and other old browsers
    let navTiming = performance.getEntriesByType('navigation')
    if (navTiming.length > 0) { // still not supported as of Safari 14...
        let serverTiming = navTiming[0].serverTiming
        if (serverTiming && serverTiming.length > 0) {
            for (let i = 0; i < serverTiming.length; i++) {
                console.log(`${serverTiming[i].name} = ${serverTiming[i].description}`)
            }
        }
    }
}
This logs cache = hit-front in supported browsers.
Notes:
as mentioned on MDN, the API is only supported over HTTPS
if your JS is served from another domain, you have to add Timing-Allow-Origin response header to make the data readable to JS (Timing-Allow-Origin: * or Timing-Allow-Origin: https://www.example.com)
Server-Timing headers also support a dur field (in the header), readable as duration on the JS side, but it's optional and defaults to 0 in JS if not passed
regarding Safari support: see bug 1 and bug 2 and bug 3
You can read more on server-timing in this blog post
Note that performance entries buffers might get cleaned by JS on the page (via an API call), or by the browser, if the page issues too many calls for subresources. For that reason, you should capture the data as soon as possible, and/or use PerformanceObserver API instead. See the blog post for details.
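A small sketch of that PerformanceObserver alternative, logging the same server-timing entries as above:
new PerformanceObserver(list => {
    for (const entry of list.getEntries()) {
        (entry.serverTiming || []).forEach(st =>
            console.log(`${st.name} = ${st.description}`));
    }
}).observe({ type: 'navigation', buffered: true });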
For those looking for a way to parse all HTTP headers into an object that can be accessed as a dictionary headers["content-type"], I've created a function parseHttpHeaders:
function parseHttpHeaders(httpHeaders) {
    return httpHeaders.split("\n")
        .map(x => x.split(/: */, 2))
        .filter(x => x[0])
        .reduce((ac, x) => { ac[x[0]] = x[1]; return ac; }, {});
}

var req = new XMLHttpRequest();
req.open('GET', document.location, false);
req.send(null);
var headers = parseHttpHeaders(req.getAllResponseHeaders());
// Now we can do: headers["content-type"]
You can't access the http headers, but some of the information provided in them is available in the DOM. For example, if you want to see the http referer (sic), use document.referrer. There may be others like this for other http headers. Try googling the specific thing you want, like "http referer javascript".
I know this should be obvious, but I kept searching for stuff like "http headers javascript" when all I really wanted was the referer, and didn't get any useful results. I don't know how I didn't realize I could make a more specific query.
Like many people, I've been digging around the net without finding a real answer :(
I have nevertheless found a workaround that could help others. In my case, I fully control my web server; in fact, it is part of my application (see the reference at the end). It is easy for me to add a script to my HTTP response. I modified my httpd server to inject a small script into every HTML page. I push just one extra 'js script' line right after my header construction, which sets a variable on an existing object of my document in the browser [I chose location], but any other option is possible. While my server is written in Node.js, I have no doubt that the same technique can be used from PHP or others.
case ".html":
response.setHeader("Content-Type", "text/html");
response.write ("<script>location['GPSD_HTTP_AJAX']=true</script>")
// process the real content of my page
Now every HTML page loaded from my server has this script executed by the browser on reception. I can then easily check from JavaScript whether the variable exists or not. In my use case I need to know whether I should use a JSON or JSONP profile to avoid CORS issues, but the same technique can be used for other purposes [i.e.: choosing between development/production servers, getting a REST/API key from the server, etc.].
In the browser you just need to check the variable directly from JavaScript, as in my example, where I use it to select my JSON/jQuery profile:
// Select direct Ajax/Json profile if using GpsdTracking/HttpAjax server otherwise use JsonP
var corsbypass = true;
if (location['GPSD_HTTP_AJAX']) corsbypass = false;
if (corsbypass) { // Json & html served from two different web servers
var gpsdApi = "http://localhost:4080/geojson.rest?jsoncallback=?";
} else { // Json & html served from same web server [no ?jsoncallback=]
var gpsdApi = "geojson.rest?";
}
var gpsdRqt =
{key :123456789 // user authentication key
,cmd :'list' // rest command
,group :'all' // group to retrieve
,round : true // ask server to round numbers
};
$.getJSON(gpsdApi,gpsdRqt, DevListCB);
For whoever would like to check my code:
https://www.npmjs.org/package/gpsdtracking
Allain Lalonde's link made my day.
Just adding some simple working html code here.
Works with any reasonable browser from the last many years, plus IE9+ and Presto-based Opera 12.
<!DOCTYPE html>
<title>(XHR) Show all response headers</title>
<h1>All Response Headers with XHR</h1>
<script>
  var X = new XMLHttpRequest();
  X.open("HEAD", location);
  X.send();
  X.onload = function() {
    document.body.appendChild(document.createElement("pre")).textContent = X.getAllResponseHeaders();
  }
</script>
Note: You get headers of a second request, the result may differ from the initial request.
Another way is the more modern fetch() API
https://developer.mozilla.org/en-US/docs/Web/API/WindowOrWorkerGlobalScope/fetch
Per caniuse.com it's supported by Firefox 40, Chrome 42, Edge 14, Safari 11
Working example code:
<!DOCTYPE html>
<title>fetch() all Response Headers</title>
<h1>All Response Headers with fetch()</h1>
<script>
  var x = "";
  if (window.fetch)
    fetch(location, { method: 'HEAD' })
      .then(function(r) {
        r.headers.forEach(
          function(Value, Header) { x = x + Header + "\n" + Value + "\n\n"; }
        );
      })
      .then(function() {
        document.body.appendChild(document.createElement("pre")).textContent = x;
      });
  else
    document.write("This does not work in your browser - no support for fetch API");
</script>
If we're talking about Request headers, you can create your own headers when doing XmlHttpRequests.
var request = new XMLHttpRequest();
request.open("GET", path, true);
request.setRequestHeader("X-Requested-With", "XMLHttpRequest"); // must be called after open()
request.send(null);
To get the headers as an object which is handier (improvement of Raja's answer):
var req = new XMLHttpRequest();
req.open('GET', document.location, false);
req.send(null);
var headers = req.getAllResponseHeaders().toLowerCase();
headers = headers.split(/\n|\r|\r\n/g).reduce(function(a, b) {
    if (b.length) {
        var [ key, value ] = b.split(': ');
        a[key] = value;
    }
    return a;
}, {});
I've just tested, and this works for me using Chrome Version 28.0.1500.95.
I needed to download a file and read the file name. The file name is in the header, so I did the following:
var xhr = new XMLHttpRequest();
xhr.open('POST', url, true);
xhr.responseType = "blob";
xhr.onreadystatechange = function () {
    if (xhr.readyState == 4) {
        success(xhr.response); // the function to process the response
        console.log("++++++ reading headers ++++++++");
        var headers = xhr.getAllResponseHeaders();
        console.log(headers);
        console.log("++++++ reading headers end ++++++++");
    }
};
xhr.send();
Output:
Date: Fri, 16 Aug 2013 16:21:33 GMT
Content-Disposition: attachment;filename=testFileName.doc
Content-Length: 20
Server: Apache-Coyote/1.1
Content-Type: application/octet-stream
This is my script to get all the response headers:
var url = "< URL >";
var req = new XMLHttpRequest();
req.open('HEAD', url, false);
req.send(null);
var headers = req.getAllResponseHeaders();
//Show alert with response headers.
alert(headers);
The result is the response headers.
This is a comparison test using Hurl.it:
Using MooTools, you can use this.xhr.getAllResponseHeaders()
This is an old question. Not sure when support became more broad, but getAllResponseHeaders() and getResponseHeader() appear to now be fairly standard: http://www.w3schools.com/xml/dom_http.asp
As has already been mentioned, if you control the server side then it should be possible to send the initial request headers back to the client in the initial response.
In Express, for example, the following works:
app.get('/somepage', (req, res) => {
res.render('somepage.hbs', {headers: req.headers});
})
The headers are then available within the template, so they could be hidden visually but included in the markup and read by client-side JavaScript.
I think the question is being approached the wrong way.
If you want to read the request headers from jQuery/JavaScript, the answer is simply no, you can't. The alternative solution is to create an ASPX or JSP page; there we can easily access the request headers.
Read the request headers in the ASPX page, put them into a session or cookies, and then you can access those cookies from JavaScript.