When you make an XMLHttpRequest, the data is often compressed. Reading the Content-Length header
xhr.getResponseHeader("Content-Length");
gives you the number of octets in the response body, to which you could add an approximation of the header overhead by measuring the response headers themselves.
But: how do you find the number of (compressed) bytes actually transferred? (In Firefox, even if this is only possible in a browser-specific way.)
In the screenshot below, you see a difference for several files:
The following should all be equal to this value:
the number of bytes read from the socket
the file size in the squid log
the number of application-layer octets sent over the network in response to the request
UPDATE: the Performance API seems to provide this. Call
performance.getEntries()[0]
and read the encodedBodySize property (see also the entry at MDN).
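For example, a minimal sketch (the URL is a placeholder) that looks up the resource-timing entry for a specific request and reads its size fields:
// Look up the PerformanceResourceTiming entry for a resource and
// read its size fields.
const [entry] = performance.getEntriesByName('https://example.com/data.json');
if (entry) {
  console.log(entry.encodedBodySize); // payload octets before decompression
  console.log(entry.decodedBodySize); // payload octets after decompression
  console.log(entry.transferSize);    // octets on the wire including headers (0 for cache hits)
}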
The screenshot above shows the Network Monitor. Its code seems to use the network-monitor.js file which
implements the nsIStreamListener and nsIRequestObserver interfaces. This is used within the NetworkMonitor feature to get the response body of the request.
The relevant code seems to be
onProgress: function(request, context, progress, progressMax) {
this.transferredSize = progress;
// Need to forward as well to keep things like Download Manager's progress
// bar working properly.
this._forwardNotification(Ci.nsIProgressEventSink, "onProgress", arguments);
},
with
_onComplete: function NRL__onComplete(aData)
{
  let response = {
    mimeType: "",
    text: aData || "",
  };

  response.size = response.text.length;
  response.transferredSize = this.transferredSize;
  // ...
},
The progress event is part of neither interface. It may come from XHR, but where it originates is as yet unclear.
As long as the response is text (as opposed to binary blobs), you have some good starting points here on SO:
Measure string length in bytes
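For instance, a minimal sketch measuring the UTF-8 byte length of a text response:
// Byte count of a JavaScript string when encoded as UTF-8; for text
// responses this approximates the decompressed body size.
const byteLength = new TextEncoder().encode(xhr.responseText).length;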
I have copied an Excel table of about a million rows. When I look at the clipboard on my system, it seems to contain about 250MB of data. However, I only need to grab the styling information from it, for example:
All of this comes to (far) less than 1MB of data. Is there a way to read the clipboard as if it were a file or stream, so that I can just do, for example:
clipboard.read(1024)
Otherwise, if I do a straight:
evt.clipboardData.getData('text/html')
and grab the section of data I want after getting it, it takes me over 10 seconds! Whereas I believe it should only take around 0.1s if I were able to read the clipboard data partially, as if it were a file.
What can I do here? Is it possible to use FileReader on the clipboard? If so, how could that be done?
The Clipboard.read API cited in the comments can return clipboard contents as a list of ClipboardItem objects, from which you can obtain a Blob object, which you can then .slice to perform partial reads. (MDN claims that Clipboard.read returns a DataTransfer object, but this disagrees with the specification, so I assume this is stale information, or simply an error.)
// Make sure we are allowed to read the clipboard first.
const perm = await navigator.permissions.query({ name: 'clipboard-read' });
switch (perm.state) {
  case 'granted':
  case 'prompt':
    break;
  default:
    throw new Error("clipboard-read permission not granted");
}

const items = await navigator.clipboard.read();
for (const item of items) {
  // Each ClipboardItem exposes its contents per MIME type as a Blob,
  // which can be sliced for a partial read (here: the first 1 MiB).
  const blob = await item.getType('text/html');
  const first1M = await blob.slice(0, 1048576).arrayBuffer();
  /* process first1M */
}
However, the Clipboard API is nowhere near universally available as of yet. Firefox ESR 78.9 doesn’t implement it, and by the state of MDN it hardly seems to be on Mozilla’s radar at all. (I haven’t tried other browsers yet; perhaps in Chrome it’s already usable.)
After a lot of research: this is not possible in JavaScript. There is no support for stream manipulation of the clipboard object, so you have to read the entire content at once.
However, you can use macOS (inferred from your picture) native tools for processing the clipboard data: pbcopy and pbpaste. They are extremely fast, orders of magnitude faster than JavaScript, so you can delegate the heavy processing of the text to them.
So, after you copy the 250MB of text, you can slice it, read only the first n bytes (in this case 1024), and substitute the content of the clipboard with that, so it will be available for you to use in JavaScript:
pbpaste | cut -b 1-1024 | pbcopy
If you need documentation on each terminal command, you can run man command_name. Extracting the first 1024 bytes of the clipboard took less than a second with this approach.
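Back in the browser, a paste handler along the lines of the question should then return immediately, since it only sees the trimmed content (a sketch; note that pbcopy stores a plain-text flavor, so text/plain is read here):
document.addEventListener('paste', (evt) => {
  // The clipboard now holds at most 1024 bytes, so this is instant.
  const head = evt.clipboardData.getData('text/plain');
  /* extract the needed information from head */
});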
I tested it with a sample text file of 390MB created with Python using this script:
c = 30000000
with open('sample.txt', 'w') as file:
file.writelines('a sample code' for i in range(c))
Can I perform something like this?
Situation
I want to check the URL: if the URL equals http://sample.com, do this; otherwise, do that.
What I did:
In Web.config -
<add key="ServerURLCloud" value="sample.com" />
In C# -
public static string GetURL()
{
    // AppSettings[...] returns a string, not a string[].
    string url = ConfigurationManager.AppSettings["ServerURLCloud"];
    return url;
}
In Javascript -
if(varURL.indexOf('#ClassName.GetURL()') > 0){
urlToCall = 'sub.sample.com';
}else{
urlToCall = 'sub.not-sample.com';
}
$.ajax({
    url: urlToCall,
    data: .........
    ....
});
I tested it and it is working very well. But I want to know: will there be any problem if:
the Internet connection is slow
EDITED:
My Question
Is this practice (getting server-side information into JavaScript) good or bad?
I believe this code sample can be altered slightly to make it a little easier to maintain.
You could create a variable in your layout to hold ConfigurationManager.AppSettings["ServerURLCloud"]:
var siteSettings = {};
siteSettings.serverUrlCloud = '#ConfigurationManager.AppSettings["ServerURLCloud"]';
siteSettings.subSampleUrl = 'url';
siteSettings.subNotSampleUrl = '';
This siteSettings object can hold anything else useful as well (like the base URL, etc.).
Also, try not to use magic strings in your code... instead, prefer to create variables/consts etc. which hold these, as in the sketch below.
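For example, the condition from the question could then read (a sketch using the hypothetical siteSettings fields above):
if (varURL.indexOf(siteSettings.serverUrlCloud) > -1) {
    urlToCall = siteSettings.subSampleUrl;
} else {
    urlToCall = siteSettings.subNotSampleUrl;
}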
These changes won't impact the speed of your application, but they will make it slightly easier to manage.
Also, the speed of the response from your AJAX request comes down to the executed code within that request, the length of the response, and the Internet connection speed. If the code is complex and doing a lot, it will naturally take longer. If the response is big, it will take longer to download. If the Internet connection is slow, it will take longer to send the request and download the response.
Hope this helps
I have a <video> and <audio> element which load a file (mp4, mp3, doesn't matter) from my server via Range requests.
It seems, however, that the element only requests the end Range from my server, and from there on tries to stream directly from byte 0 to the end, causing the player to be stuck in a "download loop", which makes the browser suspend all other actions until the download is complete.
Does anyone know a solution to this issue? Do I, for example, have to give my stream response an actual end in its Content-Length or Accept-Ranges headers?
Here's the full request list from Chrome, and at the bottom you can see that a request for the URL view?watch=v__AAAAAA673pxX just stays pending, basically until a new request is placed by the element.
In a nutshell: HTML5 media elements get stuck in a download loop when using HTTP Range requests and cause all other requests to stay "pending".
UPDATE
The issue was resolved server-side.
Whereas the original stream function would output literally every byte, I've modified the code to output ONLY the size of the actual buffer. This forces the elements to make a new request for the remaining data.
An important note here: return Content-Length, Accept-Ranges and Content-Range headers that match the file's size, start and end position in each HTTP Range response.
For future reference:
function stream(){
$i = $this->start;
set_time_limit(0);
while(!feof($this->stream) && $i <= $this->end) {
$bytesToRead = $this->buffer;
if(($i+$bytesToRead) > $this->end) {
$bytesToRead = $this->end - $i + 1;
}
$data = fread($this->stream, $bytesToRead);
echo $data;
flush();
$i += $bytesToRead;
}
}
The new stream function:
function stream()
{
//added a time limit for safe-guarding
set_time_limit(3);
echo fread($this->stream, $this->offset);
flush();
}
Suppose you have a video of 1M (1,000,000) bytes.
When your browser requests the video for the first time, it will send headers like this:
Host:localhost
Range:bytes=0-
The Range header bytes=0- means the browser is asking the server to return as much as it can, i.e. no end position is specified.
To this, the server would usually reply with the whole file except the last byte, to preserve the range context:
Accept-Ranges:bytes
Content-Length:999999
Content-Range:bytes 0-999998/1000000
Now suppose your video is downloaded up to 30% and you seek to 70%. Then the browser will request that part; the headers would look like this:
Host:localhost
Range:bytes=700000-
It seems however that the element only request the end Range from my server,
You can see the inference was wrong: it's the starting position of the requested part of the video.
Now the server might reply like:
Accept-Ranges:bytes
Content-Length:300000
Content-Range:bytes 700000-999999/1000000
Note the Content-Range header: it explicitly tells what portion of the file is being sent. So my guess is that your server is not sending this information and the browser is getting confused.
Also, MIME types can sometimes cause problems. Try to use the exact MIME type of your file, like Content-Type: video/mp4. If you use Content-Type: application/octet-stream, it might cause compression, which would disable range headers.
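A quick way to check what your server actually returns for a ranged request is a sketch like this (the URL is a placeholder):
// Request an explicit byte range and inspect the headers echoed back.
const res = await fetch('/video.mp4', { headers: { Range: 'bytes=700000-' } });
console.log(res.status);                        // expect 206 Partial Content
console.log(res.headers.get('Content-Range'));  // e.g. "bytes 700000-999999/1000000"
console.log(res.headers.get('Content-Length')); // e.g. "300000"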
I want to download and play an m3u8 file which is on the server machine. I am using the following code on the web server to read the m3u8 file and send it to the client.
The browser is displaying the contents of the file instead of downloading it.
So please let me know how to make it download.
if ((exportHandle = fopen(v3FileName, "a+")) != NULL) {
    long end = 0, start = 0, pos = 0;
    char* m3u8FileDataBuff = NULL;

    /* Determine the file size by seeking to the end. */
    fseek(exportHandle, 0, SEEK_END);
    end = ftell(exportHandle);
    fseek(exportHandle, 0, SEEK_SET);
    start = ftell(exportHandle);
    pos = end - start;

    m3u8FileDataBuff = (char *) malloc(pos);
    if (m3u8FileDataBuff != NULL) {
        /* Read the whole file and hand it to the client connection. */
        size_t bytesRead = fread(m3u8FileDataBuff, 1, pos, exportHandle);
        pClienCommunication->writeBuffer(m3u8FileDataBuff, bytesRead);
        free(m3u8FileDataBuff);
    }
    fclose(exportHandle);
}
The client's web browser is displaying the content because the MIME type of the response is either missing or something like "text/plain". Set up the HTTP response header properly to indicate the MIME type of an m3u8 file (application/x-mpegURL or application/vnd.apple.mpegurl).
The piece of code you provided does not seem to set anything around response header, just content.
Check available API of pClienCommunication->, or place where that originates, what are your options to adjust response header.
Or maybe it's possible to work around this with some rule set up in the web server serving the response, to set the MIME type for certain URLs or based on the response content (but applying such rules at the web-server level is usually more costly than adjusting the response while it is being created in the C++ part).
And why is this tagged C++, when the code itself is C-like, with all the problems of that? In modern C++ you never call things like fclose(...) manually, because that is done in the destructor of a file wrapper class, so you don't risk the fclose being skipped in case of an exception raised in fread, etc.
So in modern C++ these things should look somewhat like this:
{
SomeFileClass exportFile(v3FileName, "a+");
if (exportFile.isOK()) {
SomeFileContentBuffer data = exportFile.read();
pClienCommunication->writeBuffer(data.asCharPtr(), data.size());
}
}
So you can't forget to release any file handle, or buffer memory (as the destructors of particular helper classes will handle that).
I have a DataSnap server method
function TServerMethods.GetFile(filename: string): TStream
returning a file.
In my test case the file is a simple .PDF.
I'm sure this function works fine, as I'm able to open files in an Objective-C client-side app where I've used my own HTTP call to the DataSnap method (no Delphi proxy).
The stream is read from an ASIHTTPRequest object and saved as a local file, then loaded and displayed correctly in a standard PDF reader.
I do not know exactly how ASIHTTPRequest manages the returned data.
But on the JavaScript client side, where I use the standard
stream = ServerMethods().GetFile('test.pdf')
JavaScript function as provided by the DataSnap proxy itself, I cannot figure out how to show the .pdf data to the user.
Using
window.open().document.write(stream);
a new browser window opens with textual raw data ( %PDF-1.5 %âãÏÓ 1 0 obj << /Type /Catalog /Pages 2 0 R …..)
With
window.open("data:application/pdf;base64," +stream);
I get an empty new browser page.
With
window.open("data:application/pdf," +stream);
or
document.location = 'data:application/pdf,'+encodeURIComponent(serverMethods().GetFile('test'));
I get a new browser page with an empty PDF reader and the alert "This PDF document could not be displayed correctly".
Nothing changes adding:
GetInvocationMetadata().ResponseContentType := 'application/pdf';
into the DataSnap function.
I've no other ideas...
EDIT
The task is a general file download, not only PDF; PDF is just a test. GetFile has to manage .pdf, .xlsx, .docx, .png, .eml, etc.
Your server side code works as expected once you set the ResponseContentType. You can test this by calling the method directly from a browser. Change the class name to match the one you're using:
http://localhost:8080/datasnap/rest/TServerMethods1/GetFile/test.pdf
I'm sure there's a way to display the stream properly on the browser side, but I'm not sure what that is. Unless you're doing something special with the stream, I'd recommend getting the document directly or using a web action and getting out of the browser's way. Basically what mjn suggested.
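One untested possibility: if the proxy returns the raw bytes as a binary string (as the %PDF-1.5 output in the question suggests), wrapping them in a Blob with the right MIME type and opening an object URL might work:
// Hypothetical sketch: convert the binary string from the proxy into a
// typed array, wrap it in a PDF Blob, and open it via an object URL.
const bytes = Uint8Array.from(stream, c => c.charCodeAt(0));
const blob = new Blob([bytes], { type: 'application/pdf' });
window.open(URL.createObjectURL(blob));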
I can think of a couple of solutions.
1) A quick way would be to allow access to the documents directly.
In the WebFileDispatcher, add a WebFileExtension. Select .pdf and it will fill in the mime type for you. Assuming your pdf documents are in the "docs" folder, the url might look like this:
http://localhost:8080/docs/test.pdf
2) I would probably add an action on the web module. It's a little more involved, but it also gives me more control.
Using this url:
http://localhost:8080/getfile?filename=test.pdf
And code like this in the web action handler (no checking or error handling). The Content-Disposition header suggests a file name for the downloaded file:
procedure TWebModule1.WebModule1GetFileActionAction(Sender: TObject;
Request: TWebRequest; Response: TWebResponse; var Handled: Boolean);
var
lStream: TMemoryStream;
lFilename: string;
begin
lFilename := Request.QueryFields.Values['filename'];
lStream := TMemoryStream.Create;
lStream.LoadFromFile('.\Docs\' + lFilename);
lStream.Position := 0;
Response.ContentStream := lStream;
Response.ContentType := 'application/pdf';
Response.SetCustomHeader('Content-Disposition',
Format('attachment; filename="%s"', [lFilename]));
end;