Streaming File Request from Browser to FastAPI [duplicate] - javascript

I am trying to upload a large file (≥3GB) to my FastAPI server, without loading the entire file into memory, as my server has only 2GB of free memory.
Server side:
async def uploadfiles(upload_file: UploadFile = File(...)):
Client side:
import requests
from requests_toolbelt import MultipartEncoder

m = MultipartEncoder(fields={"upload_file": open(file_name, 'rb')})
prefix = "http://xxx:5000"
url = "{}/v1/uploadfiles".format(prefix)
try:
    req = requests.post(
        url,
        data=m,
        verify=False,
    )
which returns:
HTTP 422 {"detail":[{"loc":["body","upload_file"],"msg":"field required","type":"value_error.missing"}]}
I am not sure what MultipartEncoder actually sends to the server, so that the request does not match. Any ideas?

With the requests-toolbelt library, you have to pass the filename when declaring the field for upload_file, and also set the Content-Type header—which is the main reason for the error you get, as you are sending the request without setting the Content-Type header to multipart/form-data followed by the necessary boundary string—as shown in the documentation. Example:
filename = 'my_file.txt'
m = MultipartEncoder(fields={'upload_file': (filename, open(filename, 'rb'))})
r = requests.post(url, data=m, headers={'Content-Type': m.content_type})
print(r.request.headers) # confirm that the 'Content-Type' header has been set
However, I wouldn't recommend using a library (i.e., requests-toolbelt) that hasn't provided a new release for over three years now. I would suggest using Python requests instead, as demonstrated in this answer and that answer (also see Streaming Uploads and Chunk-Encoded Requests), or, preferably, use the HTTPX library, which supports async requests (if you had to send multiple requests simultaneously), as well as streaming File uploads by default, meaning that only one chunk at a time will be loaded into memory (see the documentation). Examples are given below.
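For reference, here is a minimal sketch of the requests "Streaming Uploads" and "Chunk-Encoded Requests" approaches mentioned above. The URL and the Filename header are placeholders, and the server would need to read the raw request body (e.g., via the .stream() method shown later), rather than expect a multipart/form-data payload:
import requests

url = 'http://127.0.0.1:8000/upload'  # placeholder

# Streaming upload: pass the (binary) file object as `data`;
# requests streams it from disk instead of loading it into memory.
with open('bigFile.zip', 'rb') as f:
    r = requests.post(url, data=f, headers={'Filename': 'bigFile.zip'})

# Chunk-encoded upload: pass a generator; requests then sends the body
# using `Transfer-Encoding: chunked`.
def gen(path, chunk_size=1024 * 1024):
    with open(path, 'rb') as f:
        while chunk := f.read(chunk_size):
            yield chunk

r = requests.post(url, data=gen('bigFile.zip'), headers={'Filename': 'bigFile.zip'})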
Option 1 (Fast) - Upload File and Form data using .stream()
As previously explained in detail in this answer, when you declare an UploadFile object, FastAPI/Starlette, under the hood, uses a SpooledTemporaryFile with the max_size attribute set to 1MB, meaning that the file data is spooled in memory until the file size exceeds the max_size, at which point the contents are written to disk; more specifically, to a temporary file in your OS's temporary directory—see this answer on how to find/change the default temporary directory—that you later need to read the data from, using the .read() method. Hence, this whole process makes uploading a file quite slow, especially if it is a large file (as you'll see in Option 2 later on).
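To illustrate the spooling behaviour described above, here is a small standalone sketch (not FastAPI code) using SpooledTemporaryFile directly; the sizes are arbitrary:
from tempfile import SpooledTemporaryFile

with SpooledTemporaryFile(max_size=1024 * 1024) as f:
    f.write(b'x' * 1024)               # 1 KB: contents are still held in memory
    f.write(b'x' * (2 * 1024 * 1024))  # exceeds max_size: rolled over to a temp file on disk
    f.seek(0)
    data = f.read()                    # the extra read step an UploadFile later incurs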
To avoid that and speed up the process, as the linked answer above suggested, one can access the request body as a stream. As per the Starlette documentation, if you use the .stream() method, the (request) byte chunks are provided without storing the entire body in memory (and later to a temporary file, if the body size exceeds 1MB). This method allows you to read and process the byte chunks as they arrive.
The below takes the suggested solution a step further, by using the streaming-form-data library, which provides a Python parser for parsing streaming multipart/form-data input chunks. This means that not only can you upload Form data along with File(s), but you also don't have to wait for the entire request body to be received in order to start parsing the data. The way it's done is that you initialise the main parser class (passing the HTTP request headers that help determine the input Content-Type, and hence the boundary string used to separate each body part in the multipart payload, etc.), and associate one of the Target classes to define what should be done with a field once it has been extracted out of the request body. For instance, FileTarget would stream the data to a file on disk, whereas ValueTarget would hold the data in memory (this class can be used for either Form or File data, if you don't need the file(s) saved to disk). It is also possible to define your own custom Target classes (a minimal sketch is given below).
I have to mention that the streaming-form-data library does not currently support async calls to I/O operations, meaning that the writing of chunks happens synchronously (within a def function). Though, as the endpoint below uses .stream() (which is an async function), it will give up control for other tasks/requests to run on the event loop while waiting for data to become available from the stream. You could also run the function for parsing the received data in a separate thread and await it, using Starlette's run_in_threadpool()—e.g., await run_in_threadpool(parser.data_received, chunk)—which is used by FastAPI internally when you call the async methods of UploadFile, as shown here. For more details on def vs async def, please have a look at this answer.
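As an example of such a custom Target, here is a minimal sketch that hashes a field's bytes on the fly instead of storing them. It assumes the BaseTarget base class and its on_data_received() hook described in the streaming-form-data documentation:
import hashlib
from streaming_form_data.targets import BaseTarget

class HashTarget(BaseTarget):
    """Computes a SHA-256 digest of a field's data as chunks arrive,
    without keeping the data in memory or writing it to disk."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._hash = hashlib.sha256()

    def on_data_received(self, chunk: bytes):
        self._hash.update(chunk)

    @property
    def value(self) -> str:
        return self._hash.hexdigest()

# Usage inside the endpoint shown below: parser.register('file', HashTarget())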
You can also perform certain validation tasks, e.g., ensuring that the input size does not exceed a certain value. This can be done using the MaxSizeValidator. However, as this would only be applied to the fields you defined—and hence wouldn't prevent a malicious user from sending an extremely large request body, which could result in consuming server resources in a way that the application may end up crashing—the code below incorporates a custom MaxBodySizeValidator class that is used to make sure that the request body size does not exceed a pre-defined value. Both validators described above solve the problem of limiting the upload file size (as well as the entire request body size) in a likely better way than the one described here, which uses UploadFile, and hence the file needs to be entirely received and saved to the temporary directory before the check is performed (not to mention that that approach does not take the request body size into account at all); using an ASGI middleware such as this one would be an alternative solution for limiting the request body (a minimal sketch is given below). Also, in case you are using Gunicorn with Uvicorn, you can define limits with regards to, for example, the number of HTTP header fields in a request, the size of an HTTP request header field, and so on (see the documentation). Similar limits can be applied when using reverse proxy servers, such as Nginx (which also allows you to set the maximum request body size using the client_max_body_size directive).
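For reference, here is a minimal sketch of the ASGI middleware idea mentioned above, rejecting requests whose declared Content-Length exceeds a limit. The class name is a placeholder, and note that a client may omit or misstate Content-Length, which is why the chunk-by-chunk check in the endpoint below is still worthwhile:
from starlette.middleware.base import BaseHTTPMiddleware
from starlette.responses import PlainTextResponse

class LimitRequestBodyMiddleware(BaseHTTPMiddleware):  # hypothetical name
    def __init__(self, app, max_body_size: int):
        super().__init__(app)
        self.max_body_size = max_body_size

    async def dispatch(self, request, call_next):
        # Reject early, based on the declared body size, before reading anything
        content_length = request.headers.get('Content-Length')
        if content_length is not None and int(content_length) > self.max_body_size:
            return PlainTextResponse('Request body too large', status_code=413)
        return await call_next(request)

# app.add_middleware(LimitRequestBodyMiddleware, max_body_size=MAX_REQUEST_BODY_SIZE)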
A few notes for the example below. Since it uses the Request object directly, and not UploadFile and Form objects, the endpoint won't be properly documented in the auto-generated docs at /docs (if that's important for your app at all). This also means that you have to perform some checks yourself, such as whether the required fields for the endpoint were received or not, and whether they were in the expected format. For instance, for the data field, you could check whether data.value is empty or not (empty would mean that the user has either not included that field in the multipart/form-data or sent an empty value), as well as if isinstance(data.value, str). As for the file(s), you can check whether file_.multipart_filename is not empty; however, since some user might not include a filename in the Content-Disposition header at all, you may also want to check whether the file exists in the filesystem, using os.path.isfile(filepath) (Note: you need to make sure there is no pre-existing file with the same name in that specified location; otherwise, the aforementioned function would always return True, even when the user did not send the file).
Regarding the applied size limits, the MAX_REQUEST_BODY_SIZE below must be larger than the MAX_FILE_SIZE (plus the size of all the Form values) you expect to receive, as the raw request body (that you get from using the .stream() method) includes a few more bytes for the --boundary and Content-Disposition header of each of the fields in the body. Hence, you should add a few more bytes, depending on the Form values and the number of files you expect to receive (hence the MAX_FILE_SIZE + 1024 below).
app.py
from fastapi import FastAPI, Request, HTTPException, status
from streaming_form_data import StreamingFormDataParser
from streaming_form_data.targets import FileTarget, ValueTarget
from streaming_form_data.validators import MaxSizeValidator
import streaming_form_data
from starlette.requests import ClientDisconnect
import os
MAX_FILE_SIZE = 1024 * 1024 * 1024 * 4 # = 4GB
MAX_REQUEST_BODY_SIZE = MAX_FILE_SIZE + 1024
app = FastAPI()
class MaxBodySizeException(Exception):
    def __init__(self, body_len: int):
        self.body_len = body_len


class MaxBodySizeValidator:
    def __init__(self, max_size: int):
        self.body_len = 0
        self.max_size = max_size

    def __call__(self, chunk: bytes):
        self.body_len += len(chunk)
        if self.body_len > self.max_size:
            raise MaxBodySizeException(body_len=self.body_len)
@app.post('/upload')
async def upload(request: Request):
    body_validator = MaxBodySizeValidator(MAX_REQUEST_BODY_SIZE)
    filename = request.headers.get('Filename')
    if not filename:
        raise HTTPException(status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,
                            detail='Filename header is missing')
    try:
        filepath = os.path.join('./', os.path.basename(filename))
        file_ = FileTarget(filepath, validator=MaxSizeValidator(MAX_FILE_SIZE))
        data = ValueTarget()
        parser = StreamingFormDataParser(headers=request.headers)
        parser.register('file', file_)
        parser.register('data', data)

        async for chunk in request.stream():
            body_validator(chunk)
            parser.data_received(chunk)
    except ClientDisconnect:
        print("Client Disconnected")
    except MaxBodySizeException as e:
        raise HTTPException(status_code=status.HTTP_413_REQUEST_ENTITY_TOO_LARGE,
                            detail=f'Maximum request body size limit ({MAX_REQUEST_BODY_SIZE} bytes) exceeded ({e.body_len} bytes read)')
    except streaming_form_data.validators.ValidationError:
        raise HTTPException(status_code=status.HTTP_413_REQUEST_ENTITY_TOO_LARGE,
                            detail=f'Maximum file size limit ({MAX_FILE_SIZE} bytes) exceeded')
    except Exception:
        raise HTTPException(status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
                            detail='There was an error uploading the file')

    if not file_.multipart_filename:
        raise HTTPException(status_code=status.HTTP_422_UNPROCESSABLE_ENTITY, detail='File is missing')

    print(data.value.decode())
    print(file_.multipart_filename)

    return {"message": f"Successfully uploaded {filename}"}
As mentioned earlier, to upload the data (on client side), you can use the HTTPX library, which supports streaming file uploads by default, and thus allows you to send large streams/files without loading them entirely into memory. You can pass additional Form data as well, using the data argument. Below, a custom header, i.e., Filename, is used to pass the filename to the server, so that the server instantiates the FileTarget class with that name (you could use the X- prefix for custom headers, if you wish; however, it is not officially recommended anymore).
To upload multiple files, use a header for each file (or use random names on the server side and, once the file has been fully uploaded, optionally rename it using the file_.multipart_filename attribute), pass a list of files, as described in the documentation (Note: use a different field name for each file, so that they won't overlap when parsing them on the server side, e.g., files = [('file', open('bigFile.zip', 'rb')), ('file_2', open('bigFile2.zip', 'rb'))]), and finally, define the Target classes on the server side accordingly (a client-side sketch is given after the test.py example below).
test.py
import httpx
import time

url = 'http://127.0.0.1:8000/upload'
files = {'file': open('bigFile.zip', 'rb')}
headers = {'Filename': 'bigFile.zip'}
data = {'data': 'Hello World!'}

with httpx.Client() as client:
    start = time.time()
    r = client.post(url, data=data, files=files, headers=headers)
    end = time.time()
    print(f'Time elapsed: {end - start}s')
    print(r.status_code, r.json(), sep=' ')
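As described earlier, a hypothetical client for uploading multiple files could look like the sketch below; the extra Filename2 header and the file_2 field name are assumptions, and each field would need a matching parser.register() call (and Target) on the server side:
import httpx

url = 'http://127.0.0.1:8000/upload'
files = [
    ('file', open('bigFile.zip', 'rb')),
    ('file_2', open('bigFile2.zip', 'rb')),
]
headers = {'Filename': 'bigFile.zip', 'Filename2': 'bigFile2.zip'}

with httpx.Client() as client:
    r = client.post(url, files=files, headers=headers)
    print(r.status_code, r.json())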
Upload both File and JSON body
In case you would like to upload both file(s) and JSON instead of Form data, you can use the approach described in Method 3 of this answer, thus also saving you from performing manual checks on the received Form fields, as explained earlier (see the linked answer for more details). To do that, make the following changes in the code above.
app.py
# ...
from fastapi import Form
from pydantic import BaseModel, ValidationError
from typing import Optional
from fastapi.encoders import jsonable_encoder


class Base(BaseModel):
    name: str
    point: Optional[float] = None
    is_accepted: Optional[bool] = False


def checker(data: str = Form(...)):
    try:
        model = Base.parse_raw(data)
    except ValidationError as e:
        raise HTTPException(detail=jsonable_encoder(e.errors()),
                            status_code=status.HTTP_422_UNPROCESSABLE_ENTITY)
    return model


# ...
@app.post('/upload')
async def upload(request: Request):
    # ...
    # place this after the try-except block
    model = checker(data.value.decode())
    print(model.dict())
test.py
# ...
import json

data = {'data': json.dumps({"name": "foo", "point": 0.13, "is_accepted": False})}
# ...
Option 2 (Slow) - Upload File and Form data using UploadFile and Form
If you would like to use a normal def endpoint instead, see this answer.
app.py
from fastapi import FastAPI, File, UploadFile, Form, HTTPException, status
import aiofiles
import os
CHUNK_SIZE = 1024 * 1024 # adjust the chunk size as desired
app = FastAPI()
@app.post("/upload")
async def upload(file: UploadFile = File(...), data: str = Form(...)):
    try:
        filepath = os.path.join('./', os.path.basename(file.filename))
        async with aiofiles.open(filepath, 'wb') as f:
            while chunk := await file.read(CHUNK_SIZE):
                await f.write(chunk)
    except Exception:
        raise HTTPException(status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
                            detail='There was an error uploading the file')
    finally:
        await file.close()

    return {"message": f"Successfully uploaded {file.filename}"}
As mentioned earlier, using this option would take longer for the file upload to complete, and as HTTPX uses a default timeout of 5 seconds, you will most likely get a ReadTimeout exception (as the server will need some time to read the SpooledTemporaryFile in chunks and write the contents to a permanent location on the disk). Thus, you can configure the timeout (see the Timeout class in the source code too), and more specifically, the read timeout, which "specifies the maximum duration to wait for a chunk of data to be received (for example, a chunk of the response body)". If set to None instead of some positive numerical value, there will be no timeout on read.
test.py
import httpx
import time

url = 'http://127.0.0.1:8000/upload'
files = {'file': open('bigFile.zip', 'rb')}
headers = {'Filename': 'bigFile.zip'}
data = {'data': 'Hello World!'}
timeout = httpx.Timeout(None, read=180.0)

with httpx.Client(timeout=timeout) as client:
    start = time.time()
    r = client.post(url, data=data, files=files, headers=headers)
    end = time.time()
    print(f'Time elapsed: {end - start}s')
    print(r.status_code, r.json(), sep=' ')

Related

Express JS: how does it handle simultaneous requests and avoid collision?

I am new to nodejs/Express.js development.
I have built my backend service with Express.js / Typescript and I have multiple routes / api endpoints defined. One is like this:
app.post('/api/issues/new', createNewIssue);
where browser will send a post request when a user submits a new photo (also called an issue in my app).
The user can send an issue to another user, and the backend will first query the database to find the number of issues that matches the conditions of "source user" and "destination user", and then give the new issue an identifying ID in the form srcUser-dstUser-[number], where number is the auto-incremented count.
The createNewIssue function is like this:
export const createNewIssue = catchErrors(async (req, res) => {
    const srcUser = req.header('src_username');
    const dstUser = req.header('dst_username');
    // query the database for the number of issues matching "srcUser" and "dstUser"
    ...
    const lastIssues = await Issue.find({ where: { "srcUser": srcUser, "dstUser": dstUser }, order: { id: 'DESC' } });
    const count = lastIssues.length;
    // create a new issue entity with the ID `srcUser-dstUser-[count+1]`
    const newIssue = await createEntity(Issue, {
        ...
        id: `${srcUser}-${dstUser}-${count + 1}`,
        ...
    });
    res.respond({ newIssue: newIssue });
})
Say the backend receives multiple requests with the same srcUser and dstUser attributes at the same time, will there be collisions where multiple new issues are created with the same id?
I have read some documentation about Node.js being single-threaded, but I'm not sure what that means concretely for this specific scenario.
Besides the business logic in this scenario, I have some general questions about Express JS / Node JS:
When there is only one CPU core, Express JS processes multiple concurrent requests asynchronously: it starts processing one and, instead of waiting for it to finish, continues on to the next one. Is this understanding accurate?
When there are multiple CPU cores, does Express JS / Node JS utilize them all in the same manner?
Node.js will not solve this problem for you automatically.
While it will only deal with one thing at a time, it is entirely possible that Request 2 will request the latest ID in the database while Request 1 has hit the await statement at the same point and gone to sleep. This would mean they get the same answer and would each try to create a new entry with the same ID.
You need to write your JavaScript to make sure that this doesn't happen.
The usual ways to handle this would be to either:
Let the database (and not your JavaScript) handle the ID generation (usually by using a sequence).
Use transactions so that the request for the latest ID and the insertion of the new row are treated as one operation by the database (so it won't start the same operation for Request 2 until the select and insert for Request 1 are both done).
Test to make sure createEntity is successful (and doesn't throw a 'duplicate id' error) and try again if it fails (with a retry limit; if it keeps failing, return an error message to the client).
The specifics depend on which database you use. I linked to the Postgresql documentation for the sake of example.

Retrieving data with many jsonp requests

My question is: how do I retrieve data using many JSONP requests, and is it practical? Currently, I'm using this fragment of (pseudo)code in my CRA (below).
import * as fetch from 'fetch-jsonp';
import * as BlueBird from 'bluebird'; // bluebird is a promise library

const getData = async () => {
    const urls = ['https://...', 'https://...'] // contains about 20000 urls
    const response = await BlueBird.map(urls, url => fetch(url), { concurrency: 10 })
    return response
}
The working version of the code creates script tags in my DOM, so as you can guess it prolongs DOM rendering; my laptop starts heating up, and I end up getting the error "render process gone" (with small data everything works). So what should I do? Move my code to the server side and use JSON? Or is it possible to create a separate React DOM and use it for JSONP? (I cannot use JSON on the client side because of CORS.)
Use the official jsoup dependency if you are working on Android, or the library for Java.
Then use:
String html = "here html";
Document doc = Jsoup.parse(html);

How does simply piping to the response object render data to the client?

In the example code in this article, how does the last segment of the stream work on this line:
fs.createReadStream(filePath).pipe(brotli()).pipe(res)
I understand that the first part is reading the file and the second is compressing it, but what is .pipe(res)? It seems to do the job I'd usually do with res.send or res.sendFile.
Full code†:
const accepts = require('accepts')
const brotli = require('iltorb').compressStream
const express = require('express')
const fs = require('fs')
const path = require('path')

function onRequest (req, res) {
    res.setHeader('Content-Type', 'text/html')
    const fileName = req.params.fileName
    const filePath = path.resolve(__dirname, 'files', fileName)
    const encodings = new Set(accepts(req).encodings())
    if (encodings.has('br')) {
        res.setHeader('Content-Encoding', 'br')
        fs.createReadStream(filePath).pipe(brotli()).pipe(res)
    }
}
const app = express()
app.use('/files/:fileName', onRequest)
localhost:5000/files/test.txt => Browser displays text contents of that file
How does simply piping the data to the response object render the data back to the client?
† which I changed slightly to use Express, and a few other minor things.
"How does simply piping the data to the response object render the data back to the client?"
The wording of "the response object" in the question could mean the asker is trying to understand why piping data from a stream to res does anything. The misconception is that res is just some object.
This is because all express Responses (res) inherit from http.ServerResponse (on this line), which is a writable Stream. Thus, whenever data is written to res, the written data is handled by http.ServerResponse which internally sends the written data back to the client.
Internally, res.send actually just writes to the underlying stream it represents (itself). res.sendFile actually pipes the data read from the file to itself.
In case the act of "piping" data from one stream to another is unclear, see the section at the bottom.
If, instead, the flow of data from file to client isn't clear to the asker, then here's a separate explanation.
I'd say the first step to understanding this line is to break it up into smaller, more understandable fragments:
First, fs.createReadStream is used to get a readable stream of a file's contents.
const fileStream = fs.createReadStream(filePath);
Next, a transform stream that transforms data into a compressed format is created and the data in the fileStream is "piped" (passed) into it.
const compressionStream = brotli();
fileStream.pipe(compressionStream);
Finally, the data that passes through the compressionStream (the transform stream) is piped into the response, which is also a writable stream.
compressionStream.pipe(res);
The process is quite simple when laid out as a chain: file stream → compression stream → response stream.
Following the flow of data is now straightforward: the data first comes from a file, passes through a compressor, and finally reaches the response, which internally sends the data back to the client.
Wait, but how does the compression stream pipe into the response stream?
The answer is that pipe returns the destination stream. That means when you do a.pipe(b), you'll get b back from the method call.
Take the line a.pipe(b).pipe(c) for example. First, a.pipe(b) is evaluated, returning b. Then, .pipe(c) is called on the result of a.pipe(b), which is b, thus being equivalent to b.pipe(c).
a.pipe(b).pipe(c);
// is the same as
a.pipe(b); // returns `b`
b.pipe(c);
// is the same as
(a.pipe(b)).pipe(c);
The wording "simply piping the data to the response object" in the question could also entail the asker doesn't understand the flow of the data, thinking that the data goes directly from a to c. Instead, the above should clarify that the data goes from a to b, then b to c; fileStream to compressionStream, then compressionStream to res.
A Code Analogy
If the whole process still makes no sense, it might be beneficial to rewrite the process without the concept of streams:
First, the data is read from the file.
const fileContents = fs.readFileSync(filePath);
The fileContents are then compressed. This is done using some compress function.
function compress(data) {
// ...
}
const compressedData = compress(fileContents);
Finally, the data is sent back to the client through the response res.
res.send(compressedData);
The original line of code in the question and the above process are more or less the same, barring the inclusion of streams in the original.
The act of taking some data in from an outside source (fs.readFileSync) is like a readable Stream. The act of transforming the data (compress) via a function is like a transform Stream. The act of sending the data to an outside source (res.send) is like a writable Stream.
"Streams are Confusing"
If you're confused about how streams work, here's a simple analogy: each type of stream can be thought of in the context of water (data) flowing down the side of a mountain from a lake on the top.
Readable streams are like the lake on the top, the source of the water (data).
Writable streams are like people or plants at the bottom of the mountain, consuming the water (data).
Duplex streams are just streams that are both Readable and Writable. They're akin to a facility at the bottom that takes in water and puts out some type of product (e.g. purified water, carbonated water, etc.).
Transform streams are also Duplex streams. They're like rocks or trees on the side of the mountain, forcing the water (data) to take a different path to get to the bottom.
A convenient way of writing all data read from a readable stream directly to a writable stream is to just pipe it, which is just directly connecting the lake to the people.
readable.pipe(writable); // easy & simple
This is in contrast to reading data from the readable stream, then manually writing it to the writable stream:
// "pipe" data from a `readable` stream to a `writable` one.
readable.on('data', (chunk) => {
    writable.write(chunk);
});
readable.on('end', () => writable.end());
You might immediately question why Transform streams are the same as Duplex streams. The only difference between the two is how they're implemented.
Transform streams implement a _transform function that's supposed to take in written data and return readable data, whereas a Duplex stream is simply both a Readable and Writable stream, thus having to implement _read and _write.
I'm not sure if I understand your question correctly. But I'll attempt to explain the code fs.createReadStream(filePath).pipe(brotli()).pipe(res) which might clarify your doubt, hopefully.
If you check the source code of iltorb, compressStream returns an object of TransformStreamEncode which extends Transform. As you can see Transform streams implement both the Readable and Writable interfaces. So when fs.createReadStream(filePath).pipe(brotli()) is getting executed, TransformStreamEncode's writable interface is used to write the data read from filePath. Now when the next call to .pipe(res) is getting executed, readable interface of TransformStreamEncode is used to read the compressed data and it is passed to res. If you check the documentation of HTTP Response object it implements the Writable interface. So it internally handles the pipe event to read the compressed data from Readable TransformStreamEncode and then sends it to client.
HTH.
You ask:
How does simply piping the data to the response object render the data back to the client?
Most people understand "render X" as "produce some visual representation of X". Sending the data to the browser (here, through piping) is a necessary step prior to rendering on the browser the file that is read from the file system, but piping is not what does the rendering. What happens is that the Express app takes the content of the file, compresses it and sends the compressed stream as-is to the browser. This is a necessary step because the browser cannot render anything if it does not have the data. So .pipe is only used to pass the data to the response sent to the browser.
By itself, this does not "render", nor tell the browser what to do with the data. Before the piping, this happens: res.setHeader('Content-Type', 'text/html'). So the browser will see a header telling it that the content is HTML. Browsers know what to do with HTML: display it. So it will take the data it gets, decompress it (because the Content-Encoding header tells it it is compressed), interpret it as HTML, and show it to the user, that is, render it.
what is .pipe(res)? which seems to do the job I'd usually do with res.send or res.sendFile.
.pipe is used to pass the entire content of a readable stream to a writable stream. It is a convenience method when handling streams. Using .pipe to send a response makes sense when you must read from a stream to get the data you want to include in the response. If you do not have to read from a stream, you should use .send or .sendFile. They perform nice bookkeeping tasks like setting the Content-Length header, that otherwise you'd have to do yourself.
In fact, the example you show is a poor attempt at content negotiation. That code should be rewritten to use res.sendFile to send the file to the browser, and the handling of compression should be done by a middleware designed for content negotiation, because there's much more to it than only supporting the br scheme.
Read this to get the answer: Node.js Streams: Everything you need to know.
I'll quote the interesting part:
a.pipe(b).pipe(c).pipe(d)
# Which is equivalent to:
a.pipe(b)
b.pipe(c)
c.pipe(d)
# Which, in Linux, is equivalent to:
$ a | b | c | d
so fs.createReadStream(filePath).pipe(brotli()).pipe(res) is equivalent to:
var readableStream = fs.createReadStream(filePath).pipe(brotli());
readableStream.pipe(res);
and
# readable.pipe(writable)
readable.on('data', (chunk) => {
writable.write(chunk);
});
readable.on('end', () => {
writable.end();
});
So Node.js reads the file and converts it to a readable stream object with fs.createReadStream(filePath).
It then hands that to the iltorb library, which creates another readable stream via .pipe(brotli()) (containing the compressed content), and finally passes the content to res, which is a writable stream. So Node.js internally calls res.write(), which writes back to the browser.

Cache HTML using request-promise and Node.js

I'm looking for a simple way to cache HTML that I pull using the request-promise library.
The way I've done this in the past is to specify a time-to-live, say one day. Then I take the parameters passed into request and hash them. Whenever a request is made, I save the HTML contents on the filesystem in a specific folder and name the file after the hash and the Unix timestamp. Then, when a request is made using the same parameters, I check via the timestamp whether the cache is still valid, and either pull it or make a new request.
Is there any library that can help with this that can wrap around request? Does request have a method of doing this natively?
I went with the recommendation in the comments and used Redis. Note: this only works for GET requests.
/* Cached requests. Assumes a bluebird-promisified Redis client (`client`),
   the built-in `crypto` module, and `request-promise` (`request`) are in scope. */
async function cacheRequest(options) {
    let stringOptions = JSON.stringify(options)
    let optionsHashed = crypto.createHash('md5').update(stringOptions).digest('hex')
    let get = await client.getAsync(optionsHashed)
    if (get) return get
    let HTML = await request.get(options)
    await client.setAsync(optionsHashed, HTML)
    return HTML
}

Using Google Cloud Datastore and AJAX(blobs)-python

Hi, I have some images stored as a BlobProperty in Google Cloud Datastore. I am trying to load these images via AJAX into my template. For example: a user has an image and a name, and the image and name area gets populated via an AJAX GET call to the server. I don't understand how to send these images to the client, as JSON won't support binary data. However, googling around tells me of something called base64. (I am quite new to all this, so let me admit I am a noob.)
Is this the only way to handle this or is there some other better way.
This thread suggests that if you just create an image element, set its src, and add it to your page using Javascript, the browser will take care of making an HTTP request for the image:
http://bytes.com/topic/javascript/answers/472046-using-ajax-xmlhttprequest-load-images
If you do want to do it with 'pure' AJAX, then base64 is probably the best thing: it's a way of encoding binary data (like images) as text, so you can send it as a long string in json.
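For illustration, here is a minimal sketch (plain Python; the file name and field names are placeholders) of base64-encoding image bytes so they can be embedded in a JSON response:
import base64
import json

image_bytes = open('avatar.png', 'rb').read()   # e.g., the BlobProperty content
payload = {
    'name': 'some user',
    'image': base64.b64encode(image_bytes).decode('ascii'),
}
json_response = json.dumps(payload)
# On the client, the string can be used directly as a data URI:
#   <img src="data:image/png;base64,..." />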
This is how I do it; it's in Flask, but nonetheless it's Python.
This way, you create a request handler to serve the images.
So all you need to do to get the image via AJAX is pass the ID of the image to be served. It's simpler, and you can manipulate the size on the fly as well.
from flask import request, Response
from google.appengine.api import taskqueue, images, mail
from google.appengine.ext import db

@app.route('/image/<img_id>')
def imgshow(img_id):
    imageuse = Image.all().filter("image_id =", img_id).get()
    if imageuse:
        response = Response(response=imageuse.content)
        # you can use any type over here
        response.headers['Content-Type'] = 'image/png'
        return response
    else:
        return
This is what I do to manipulate the size:
@app.route('/thumb/<img_id>')
def thumbshow(img_id):
    imageuse = Image.all().filter("image_id =", img_id).get()
    if imageuse:
        thbimg = images.resize(imageuse.content, 80)
        response = Response(thbimg)
        response.headers['Content-Type'] = 'image/png'
        return response
    else:
        return
hope that helps
