I have been using OpenKM for document management, and after retrieving an image from OpenKM using the API, it shows question-mark rectangles. I have already checked this question, but it did not help.
My Python code for making the API request is below; any help will be much appreciated.
import requests

url = "http://ipaddress/aa18be7a5/hhhhhggg.png"
payload = {}
headers = {'Internal-Key': "gjffhddsgsgdfgkhkhggdgsfd"}
response = requests.request("GET", url, headers=headers, data=payload)
return response.text
You requested PNG data, and that's what the server sent you. It all looks good. You did this:

response = requests.request("GET", url, ...)
return response.text

The request is fine. But then you looked at .text, hoping for some Unicode. That would make sense for a text document; instead you obtained a binary PNG document. Just look at the returned headers: they will show that the Content-Type is image/png.

You want response.content instead, which is the uninterpreted binary bytes, suitable for writing to some "foo.png" file for display. (Remember to use "wb" as the open() mode, not just "w"!)

It's important to understand that many byte strings do not form valid UTF-8, and trying to treat binary images as Unicode will soon end in tears.
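A minimal sketch of the difference, using only the PNG magic bytes as a stand-in for response.content (no network or credentials involved; in real code you would take the bytes from the requests response):

```python
# Stand-in for response.content; real code would do:
#   response = requests.get(url, headers=headers)
#   data = response.content
png_magic = b"\x89PNG\r\n\x1a\n"  # every PNG file begins with these 8 bytes

# These bytes are not valid UTF-8, which is exactly why .text mangles them:
try:
    png_magic.decode("utf-8")
except UnicodeDecodeError:
    pass  # this is the failure that produces the question-mark rectangles

# The fix: write the raw bytes in binary mode.
with open("foo.png", "wb") as f:  # "wb", not "w"
    f.write(png_magic)
```

The same pattern works for any binary content type: check Content-Type in the response headers, and never route binary data through .text.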
I have an Ajax call which returns JSON. The backend is written as a servlet, and I have set the content type and character encoding as well:

response.setContentType("application/json");
response.setCharacterEncoding("UTF-8");

But in the Ajax response I see a gibberish character: the bullet turns into �. Any idea why this is happening? Thanks in advance.
That � is the Unicode replacement character. It is used to indicate a problem when a system is unable to render a stream of data as a correct symbol; it usually appears when the data is invalid and does not match any character in the expected encoding.
https://en.wikipedia.org/wiki/Specials_(Unicode_block)#Replacement_character
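A quick way to see the replacement character appear, sketched in Python: the byte 0x95 is the bullet in Windows-1252, but it is not valid UTF-8, so a UTF-8 decoder (like the browser, when the declared charset is wrong) substitutes U+FFFD:

```python
bullet_cp1252 = b"\x95"  # the bullet character as encoded in Windows-1252

# Decoded with the wrong charset (UTF-8), the byte is invalid and becomes
# U+FFFD, the replacement character rendered as the black diamond:
mojibake = bullet_cp1252.decode("utf-8", errors="replace")
print(repr(mojibake))  # '\ufffd'

# Decoded with the charset it was actually written in, it is fine:
print(bullet_cp1252.decode("cp1252"))  # •
```

This is why fixing the charset at every layer (or using an HTML entity, which is pure ASCII) makes the symptom disappear.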
Don't use the bullet character directly; use the equivalent hex code or HTML entity and it should work. The browser/DOM fails to identify some characters. Check the link below:
https://www.toptal.com/designers/htmlarrows/punctuation/bullet/

Possible solutions to this problem:

Set the appropriate charset on the response in the backend layer.
Set the appropriate charset in the view/frontend layer.
Set the appropriate charset on the database layer.

In my case it was utf8 (or you can use utf8mb4, based on your needs).
I am trying to speed up my Meteor application by loading only enough of a web page to get the <head> tag of its HTML, from which I obtain its title, image, and description.
I have a client calling a server-side method with the following code:

Meteor.call("metaGetter", url, function(err, res){...});

And on the server side, in the metaGetter method, I am using Meteor's HTTP.call, as written in Meteor's documentation:

var result = HTTP.call('GET', url, {headers: {'content-range': "bytes 0-100"}});

I am able to get the result's HTML content. However, after printing the returned headers, I do not see the content-range attribute that I tried to set.
Edit: Akshat's solution works, but only for some websites, very few in fact. Any help would be much appreciated.
Use the Range header on the request (content-range is a response header):

var result = HTTP.call('GET', url, {headers: {'range': "bytes=0-100"}});

The response will then carry a Content-Range header, provided the server supports range requests. Of course, this needs a host that supports them; I've tried the code above with http://www.microsoft.com as the url and it does work. It's sad to say, but for websites that don't support ranges there's nothing you can do besides requesting the entire document.
One rather weird alternative is to request the page manually over a socket and cut the connection off once you have received more bytes than you need.
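The check described above can be sketched like this (in Python for brevity; resp stands in for the result object, and the same logic applies to the headers returned by Meteor's HTTP.call):

```python
def honored_range(status_code, headers):
    """True if the server honored a Range request.

    A compliant server answers a satisfiable Range request with
    206 Partial Content plus a Content-Range header; a server that
    ignores ranges just returns 200 with the full body.
    """
    lower = {k.lower() for k in headers}
    return status_code == 206 and "content-range" in lower

# A server that supports ranges:
print(honored_range(206, {"Content-Range": "bytes 0-100/58423"}))  # True
# A server that ignored the Range header:
print(honored_range(200, {"Content-Type": "text/html"}))           # False
```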
In general, you can't use a fixed byte limit if you always want to fetch the title:

Some HTTP servers don't support the Range header (see: How can I find out whether a server supports the Range header?).
You can't guarantee that the first X bytes will always contain the title; it may appear only after 1000 bytes.

In general I would fetch the whole HTML file. On most decent servers that should take less than 100 ms, which is hardly noticeable to a human. If you do this a lot, you may want to allow the server-side method to execute in parallel (see http://docs.meteor.com/#/full/method_unblock).
If the optimization is a must, you can use the previous method: fetch the first 100 bytes, and if you don't find </title>, fall back to downloading the whole HTML file.
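That fallback strategy can be sketched as follows (Python for brevity); fetch_prefix and fetch_all are hypothetical callables standing in for the range-limited and full HTTP requests:

```python
import re

def find_title(html):
    """Return the <title> text if it is fully present in html, else None."""
    m = re.search(r"<title[^>]*>(.*?)</title>", html, re.IGNORECASE | re.DOTALL)
    return m.group(1).strip() if m else None

def fetch_title(fetch_prefix, fetch_all, prefix_bytes=100):
    # Try the cheap partial request first...
    title = find_title(fetch_prefix(prefix_bytes))
    if title is None:
        # ...and fall back to the whole document if </title> wasn't in range.
        title = find_title(fetch_all())
    return title
```

The prefix size is a tunable trade-off: large enough to usually capture the head, small enough to keep the common case cheap.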
I have been breaking my head for the last couple of days trying to save a screenshot from three.js to the server using a .NET Web API. I have gone through all the related questions on Stack Overflow and tried their suggestions as well. The issue details are below:
I can successfully get the base64-encoded string from renderer.domElement.toDataURL(), and it contains a valid three.js image.
When I pass this string as-is to my .NET Web API, Convert.FromBase64String() fails, complaining about an invalid length or invalid padding characters. I made sure to strip off the "data:image/png;base64," prefix before passing it.
I tried a number of things to resolve this, like adding padding characters so the length is a multiple of 4, and using a regular expression to extract the right data. That got me past Convert.FromBase64String(), and I saved the resulting byte array as a PNG on my server, but it came out as a blank image.
I also discovered that when I used the chrome extension Advanced Rest Client to hit my WebAPI and used the Encode Payload feature before posting the string, I was able to get the image saved on my server successfully and got back the desired image as well.
Seeing this, I used the encodeURIComponent() function in Javascript to pass my base64 string to the WebAPI from my web app, but failed, getting back the same behavior as my earlier attempts.
One important observation was that whitespace was eliminated by Encode Payload but not by encodeURIComponent().
I compared the strings produced by encodeURIComponent() and by Advanced Rest Client's Encode Payload. Although at a high level they do the same thing, replacing special characters with their escape sequences, there is still a significant difference between them.
I would appreciate help on this issue. Specifically: is there any other way of getting the three.js base64 string passed to .NET successfully? And what might be the difference between the encoding done by encodeURIComponent() and by Advanced Rest Client's Encode Payload feature?
Thanks in advance!
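One concrete difference worth checking, sketched in Python (the payload string is made up): base64 output contains '+' and '=', and in an x-www-form-urlencoded body a raw '+' decodes back to a space, which corrupts the string before Convert.FromBase64String() ever sees it. Percent-encoding the value first, which is what encodeURIComponent() is supposed to do, avoids this:

```python
from urllib.parse import parse_qs, quote

b64 = "iVBORw0KGgo+AAAA/w+="  # made-up base64-style payload containing '+'

# Sent raw in a form-encoded body, each '+' is decoded as a space:
corrupted = parse_qs("img=" + b64)["img"][0]
print(" " in corrupted)  # True: the base64 string is now broken

# Percent-encoded first (as encodeURIComponent would do), it survives intact:
intact = parse_qs("img=" + quote(b64, safe=""))["img"][0]
print(intact == b64)  # True
```

So if the framework on either side applies (or skips) this step differently than the Advanced Rest Client does, the same base64 string can arrive either intact or mangled.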
I am posting a large amount of data, and for performance reasons I need it to be as small as possible. My data starts out as JS objects; I stringify it with JSON and then send it in a POST.
The thing is that I have a lot of lists [] and dicts {}, as well as short texts, which JSON places in double quotes "".
These are then URI-encoded before being posted. I do not do this myself; the browser does it, and I can see the result when I look in the request body.
So every [, {, and " is now URI-encoded, meaning that my string becomes much longer. In fact, if I compare

alert(JSON_local.stringify(myStuff).length);
alert(encodeURI(JSON_local.stringify(myStuff)).length);

the URI-encoded string is 50% larger. That's a lot bigger when the string starts out big.
Am I missing something here? JSON is standard, but it seems to have a negative side effect for me. Is there an alternative to using JSON, or am I doing something wrong? Data always has to be sent URI-encoded, no?
Data always has to be sent URI-encoded, no?

Not true. It depends on the content type you're sending it with.
If you use the x-www-form-urlencoded content type, you need to encode the data; if you use multipart/form-data, for example, you do not. This has been discussed at more length here. For a considerable amount of data, I don't see any real reason to use x-www-form-urlencoded.
Of course, there is more to it than just changing the content type: you then need to supply the MIME boundaries. It does sound to me, however, that this would be more efficient for you. From http://www.w3.org/TR/html401/interact/forms.html#h-17.13.4:
The content type "application/x-www-form-urlencoded" is inefficient
for sending large quantities of binary data or text containing
non-ASCII characters. The content type "multipart/form-data" should be
used for submitting forms that contain files, non-ASCII data, and
binary data.
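The size blow-up the question observes is easy to reproduce; a sketch in Python with a made-up payload (quote_plus approximates what the browser does to an x-www-form-urlencoded body):

```python
import json
from urllib.parse import quote_plus

# Made-up payload: many small dicts with quoted strings, as in the question.
data = {"items": [{"id": i, "note": "short text"} for i in range(50)]}

raw = json.dumps(data)
encoded = quote_plus(raw)  # every '{', '[', '"' becomes a 3-char %XX escape

print(len(raw), len(encoded))
print(len(encoded) / len(raw) > 1.3)  # True: the encoded body is much larger
```

With multipart/form-data the JSON string travels as-is inside its MIME part, so the overhead is only the fixed boundary lines rather than a per-character tax.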
I am using Ajax to retrieve images from a remote server. First I tried this directly, using the remote server's URL: the returned image is a string (since that's how Ajax communicates). I use JavaScript's fromCharCode() and charCodeAt() to convert the data back to binary, and then window.btoa() to display it. This works.

Then I want to transfer this image through an overlay (P2P) network. I intercept the Ajax request, transfer it to the server through the P2P network, and then retrieve the response as a []byte array. But now I need to know what type of string I should convert the byte array to before I feed it back to the calling Ajax client. If I use Base64, or simply convert the byte array to a string, the image does not display correctly.

Has anyone tried working with something like this before? I will appreciate any feedback very much. Thanks.
JavaScript doesn't have different kinds of strings. The desired character set will be the same one the web page is encoded in, ideally UTF-8.
Have you compared the response sent by the P2P server to the response sent by the original server? Is there some kind of wrapper that's missing, or perhaps an important MIME-type difference?
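If the goal is just to hand the browser something displayable, one common alternative (sketched here in Python; the P2P transport itself is out of scope) is to base64-encode the raw bytes once on the server side and let the client use the result directly as a data: URL, skipping the fromCharCode/btoa round trip entirely:

```python
import base64

# Stand-in for the []byte array received from the P2P network:
image_bytes = b"\x89PNG\r\n\x1a\n" + b"\x00" * 16

# Encode once, server-side; the client can assign this string to img.src.
data_url = "data:image/png;base64," + base64.b64encode(image_bytes).decode("ascii")
print(data_url[:30])

# The encoding is lossless: decoding restores the exact original bytes.
assert base64.b64decode(data_url.split(",", 1)[1]) == image_bytes
```

Because base64 is pure ASCII, it survives any string-based transport unchanged, which sidesteps the charset question entirely.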