Using Google Cloud Datastore and AJAX (blobs) - python - javascript

Hi, I have some images stored as a BlobProperty in Google Cloud Datastore. I am trying to load these images via AJAX into my template. For example, a user has an image and a name, and the image and name area gets populated via an AJAX GET call to the server. I don't understand how to send these images to the client, since JSON won't support binary data. However, googling around tells me of something called base64. (I am quite new to all this, so let me admit I am a noob.)
Is this the only way to handle this, or is there some other, better way?

This thread suggests that if you just create an image element, set its src, and add it to your page using Javascript, the browser will take care of making an HTTP request for the image:
http://bytes.com/topic/javascript/answers/472046-using-ajax-xmlhttprequest-load-images
If you do want to do it with 'pure' AJAX, then base64 is probably the best option: it's a way of encoding binary data (like images) as text, so you can send it as a long string in JSON.
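As a rough illustration of the base64 route, here is a minimal sketch of a handler that returns the name and the encoded image in one JSON response (the UserProfile model, its properties, and the handler name are assumptions for illustration, not from the original post):
import base64
import json

import webapp2
from google.appengine.ext import db

class UserProfile(db.Model):
    # hypothetical model: a name plus the image stored as a BlobProperty
    name = db.StringProperty()
    image = db.BlobProperty()

class UserJsonHandler(webapp2.RequestHandler):
    def get(self, user_id):
        user = UserProfile.get_by_id(int(user_id))
        payload = {
            'name': user.name,
            # base64 turns the raw bytes into text that is safe to embed in JSON
            'image_b64': base64.b64encode(user.image),
        }
        self.response.headers['Content-Type'] = 'application/json'
        self.response.out.write(json.dumps(payload))
On the client, the returned string can then be used directly as an image source, e.g. img.src = 'data:image/png;base64,' + response.image_b64;.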

This is how I do it. It's in Flask, but it's Python nonetheless.
This way, you create a request handler to display the images.
So all you need to do to get the image via AJAX is pass the id of the image to be served. It's simpler, and you can also manipulate the size on the fly.
from flask import request, Response
from google.appengine.api import taskqueue, images, mail
from google.appengine.ext import db

@app.route('/image/<img_id>')
def imgshow(img_id):
    imageuse = Image.all().filter("image_id =", img_id).get()
    if imageuse:
        response = Response(response=imageuse.content)
        # you can use any type over here
        response.headers['Content-Type'] = 'image/png'
        return response
    else:
        return '', 404
This is what I do to manipulate the size:
@app.route('/thumb/<img_id>')
def thumbshow(img_id):
    imageuse = Image.all().filter("image_id =", img_id).get()
    if imageuse:
        thbimg = images.resize(imageuse.content, 80)
        response = Response(thbimg)
        response.headers['Content-Type'] = 'image/png'
        return response
    else:
        return '', 404
Hope that helps.

Related

Streaming File Request from Browser to FastAPI [duplicate]

I am trying to upload a large file (≥3GB) to my FastAPI server, without loading the entire file into memory, as my server has only 2GB of free memory.
Server side:
async def uploadfiles(upload_file: UploadFile = File(...)):
    ...
Client side:
from requests_toolbelt import MultipartEncoder
import requests

m = MultipartEncoder(fields={"upload_file": open(file_name, 'rb')})
prefix = "http://xxx:5000"
url = "{}/v1/uploadfiles".format(prefix)
req = requests.post(
    url,
    data=m,
    verify=False,
)
which returns:
HTTP 422 {"detail":[{"loc":["body","upload_file"],"msg":"field required","type":"value_error.missing"}]}
I am not sure what MultipartEncoder actually sends to the server, so I can't tell why the request does not match. Any ideas?
With the requests-toolbelt library, you have to pass the filename as well when declaring the field for upload_file, and you also have to set the Content-Type header. That is the main reason for the error you get: you are sending the request without setting the Content-Type header to multipart/form-data, followed by the necessary boundary string, as shown in the documentation. Example:
filename = 'my_file.txt'
m = MultipartEncoder(fields={'upload_file': (filename, open(filename, 'rb'))})
r = requests.post(url, data=m, headers={'Content-Type': m.content_type})
print(r.request.headers) # confirm that the 'Content-Type' header has been set
However, I wouldn't recommend using a library (i.e., requests-toolbelt) that hasn't provided a new release for over three years now. I would suggest using Python requests instead, as demonstrated in this answer and that answer (also see Streaming Uploads and Chunk-Encoded Requests), or, preferably, use the HTTPX library, which supports async requests (if you had to send multiple requests simultaneously), as well as streaming File uploads by default, meaning that only one chunk at a time will be loaded into memory (see the documentation). Examples are given below.
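For reference, the chunk-encoded requests approach mentioned above can be sketched as follows: passing a generator as the request body makes requests use chunked transfer encoding, so the file is never fully loaded into memory. Note that this sends a raw streamed body rather than multipart/form-data, so the receiving endpoint would have to read the raw stream; the URL and chunk size below are placeholders.
import requests

def read_in_chunks(path, chunk_size=1024 * 1024):
    # yield the file one chunk at a time instead of reading it all at once
    with open(path, 'rb') as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            yield chunk

# a generator body triggers chunked transfer encoding in requests
r = requests.post('http://127.0.0.1:8000/upload-raw', data=read_in_chunks('bigFile.zip'))
print(r.status_code)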
Option 1 (Fast) - Upload File and Form data using .stream()
As previously explained in detail in this answer, when you declare an UploadFile object, FastAPI/Starlette, under the hood, uses a SpooledTemporaryFile with the max_size attribute set to 1MB, meaning that the file data is spooled in memory until the file size exceeds max_size, at which point the contents are written to disk; more specifically, to a temporary file in your OS's temporary directory (see this answer on how to find/change the default temporary directory) that you later need to read the data from, using the .read() method. Hence, this whole process makes uploading a file quite slow, especially if it is a large file (as you'll see in Option 2 further below).
To avoid that and speed up the process, as the linked answer above suggested, one can access the request body as a stream. As per the Starlette documentation, if you use the .stream() method, the (request) byte chunks are provided without storing the entire body in memory (and later in a temporary file, if the body size exceeds 1MB). This method allows you to read and process the byte chunks as they arrive.
The solution below takes this a step further by using the streaming-form-data library, which provides a Python parser for parsing streaming multipart/form-data input chunks. This means that not only can you upload Form data along with File(s), but you also don't have to wait for the entire request body to be received in order to start parsing the data. The way it works is that you initialise the main parser class (passing the HTTP request headers, which help determine the input Content-Type and hence the boundary string used to separate each body part in the multipart payload, etc.), and associate one of the Target classes to define what should be done with a field when it has been extracted from the request body. For instance, FileTarget would stream the data to a file on disk, whereas ValueTarget would hold the data in memory (this class can be used for either Form or File data, if you don't need the file(s) saved to disk). It is also possible to define your own custom Target classes.
I have to mention that the streaming-form-data library does not currently support async calls to I/O operations, meaning that the writing of chunks happens synchronously (within a def function). Though, as the endpoint below uses .stream() (which is an async function), it will give up control for other tasks/requests to run on the event loop while waiting for data to become available from the stream. You could also run the function for parsing the received data in a separate thread and await it, using Starlette's run_in_threadpool(), e.g., await run_in_threadpool(parser.data_received, chunk), which is used by FastAPI internally when you call the async methods of UploadFile, as shown here. For more details on def vs async def, please have a look at this answer.
You can also perform certain validation tasks, e.g., ensuring that the input size does not exceed a certain value. This can be done using the MaxSizeValidator. However, as this would only be applied to the fields you defined, and hence wouldn't prevent a malicious user from sending an extremely large request body (which could result in consuming server resources in a way that the application may end up crashing), the code below incorporates a custom MaxBodySizeValidator class that is used to make sure that the request body size does not exceed a pre-defined value. Both validators described above solve the problem of limiting the upload file size (as well as the entire request body size) in a likely better way than the one described here, which uses UploadFile, and hence the file needs to be entirely received and saved to the temporary directory before performing the check (not to mention that that approach does not take into account the request body size at all); using an ASGI middleware such as this would be an alternative solution for limiting the request body. Also, in case you are using Gunicorn with Uvicorn, you can define limits with regards to, for example, the number of HTTP header fields in a request, the size of an HTTP request header field, and so on (see the documentation). Similar limits can be applied when using reverse proxy servers, such as Nginx (which also allows you to set the maximum request body size using the client_max_body_size directive).
A few notes for the example below. Since it uses the Request object directly, and not UploadFile and Form objects, the endpoint won't be properly documented in the auto-generated docs at /docs (if that's important for your app at all). This also means that you have to perform some checks yourself, such as whether the required fields for the endpoint were received or not, and whether they were in the expected format. For instance, for the data field, you could check whether data.value is empty or not (empty would mean that the user has either not included that field in the multipart/form-data or sent an empty value), as well as whether isinstance(data.value, str). As for the file(s), you can check whether file_.multipart_filename is not empty; however, since a filename might not be included in the Content-Disposition header by some user, you may also want to check whether the file exists in the filesystem, using os.path.isfile(filepath) (Note: you need to make sure there is no pre-existing file with the same name in that specified location; otherwise, the aforementioned function would always return True, even when the user did not send the file). A short sketch of these checks is given right after app.py below.
Regarding the applied size limits, the MAX_REQUEST_BODY_SIZE below must be larger than the MAX_FILE_SIZE (plus the size of all Form values) you expect to receive, as the raw request body (that you get from using the .stream() method) includes a few more bytes for the --boundary and Content-Disposition header of each of the fields in the body. Hence, you should add a few more bytes, depending on the Form values and the number of files you expect to receive (hence the MAX_FILE_SIZE + 1024 below).
app.py
from fastapi import FastAPI, Request, HTTPException, status
from streaming_form_data import StreamingFormDataParser
from streaming_form_data.targets import FileTarget, ValueTarget
from streaming_form_data.validators import MaxSizeValidator
import streaming_form_data
from starlette.requests import ClientDisconnect
import os

MAX_FILE_SIZE = 1024 * 1024 * 1024 * 4  # = 4GB
MAX_REQUEST_BODY_SIZE = MAX_FILE_SIZE + 1024

app = FastAPI()

class MaxBodySizeException(Exception):
    def __init__(self, body_len: int):
        self.body_len = body_len

class MaxBodySizeValidator:
    def __init__(self, max_size: int):
        self.body_len = 0
        self.max_size = max_size

    def __call__(self, chunk: bytes):
        self.body_len += len(chunk)
        if self.body_len > self.max_size:
            raise MaxBodySizeException(body_len=self.body_len)

@app.post('/upload')
async def upload(request: Request):
    body_validator = MaxBodySizeValidator(MAX_REQUEST_BODY_SIZE)
    filename = request.headers.get('Filename')
    if not filename:
        raise HTTPException(status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,
                            detail='Filename header is missing')
    try:
        filepath = os.path.join('./', os.path.basename(filename))
        file_ = FileTarget(filepath, validator=MaxSizeValidator(MAX_FILE_SIZE))
        data = ValueTarget()
        parser = StreamingFormDataParser(headers=request.headers)
        parser.register('file', file_)
        parser.register('data', data)
        async for chunk in request.stream():
            body_validator(chunk)
            parser.data_received(chunk)
    except ClientDisconnect:
        print("Client Disconnected")
    except MaxBodySizeException as e:
        raise HTTPException(status_code=status.HTTP_413_REQUEST_ENTITY_TOO_LARGE,
                            detail=f'Maximum request body size limit ({MAX_REQUEST_BODY_SIZE} bytes) exceeded ({e.body_len} bytes read)')
    except streaming_form_data.validators.ValidationError:
        raise HTTPException(status_code=status.HTTP_413_REQUEST_ENTITY_TOO_LARGE,
                            detail=f'Maximum file size limit ({MAX_FILE_SIZE} bytes) exceeded')
    except Exception:
        raise HTTPException(status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
                            detail='There was an error uploading the file')

    if not file_.multipart_filename:
        raise HTTPException(status_code=status.HTTP_422_UNPROCESSABLE_ENTITY, detail='File is missing')

    print(data.value.decode())
    print(file_.multipart_filename)

    return {"message": f"Successfully uploaded {filename}"}
As mentioned earlier, to upload the data (on client side), you can use the HTTPX library, which supports streaming file uploads by default, and thus allows you to send large streams/files without loading them entirely into memory. You can pass additional Form data as well, using the data argument. Below, a custom header, i.e., Filename, is used to pass the filename to the server, so that the server instantiates the FileTarget class with that name (you could use the X- prefix for custom headers, if you wish; however, it is not officially recommended anymore).
To upload multiple files, use a header for each file (or use random names on the server side, and once a file has been fully uploaded, you can optionally rename it using the file_.multipart_filename attribute), pass a list of files, as described in the documentation (Note: use a different field name for each file, so that they won't overlap when parsing them on the server side, e.g., files = [('file', open('bigFile.zip', 'rb')),('file_2', open('bigFile2.zip', 'rb'))]), and finally, define the Target classes on the server side accordingly. A short sketch of the multi-file client is given after test.py below.
test.py
import httpx
import time

url = 'http://127.0.0.1:8000/upload'
files = {'file': open('bigFile.zip', 'rb')}
headers = {'Filename': 'bigFile.zip'}
data = {'data': 'Hello World!'}

with httpx.Client() as client:
    start = time.time()
    r = client.post(url, data=data, files=files, headers=headers)
    end = time.time()
    print(f'Time elapsed: {end - start}s')
    print(r.status_code, r.json(), sep=' ')
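For the multi-file case mentioned earlier, the client side might look roughly like this (a sketch only; the second field name, the extra Filename2 header, and the file names are placeholders, not from the original answer):
import httpx

url = 'http://127.0.0.1:8000/upload'
# one field name per file, so the parser can tell the parts apart
files = [('file', open('bigFile.zip', 'rb')), ('file_2', open('bigFile2.zip', 'rb'))]
headers = {'Filename': 'bigFile.zip', 'Filename2': 'bigFile2.zip'}
data = {'data': 'Hello World!'}

with httpx.Client() as client:
    r = client.post(url, data=data, files=files, headers=headers)
    print(r.status_code, r.json())
On the server side, you would then register one FileTarget per expected field (e.g., parser.register('file_2', file_2)) and read the extra header accordingly.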
Upload both File and JSON body
In case you would like to upload both file(s) and JSON instead of Form data, you can use the approach described in Method 3 of this answer, thus also saving you from performing manual checks on the received Form fields, as explained earlier (see the linked answer for more details). To do that, make the following changes in the code above.
app.py
# ...
from fastapi import Form
from pydantic import BaseModel, ValidationError
from typing import Optional
from fastapi.encoders import jsonable_encoder

class Base(BaseModel):
    name: str
    point: Optional[float] = None
    is_accepted: Optional[bool] = False

def checker(data: str = Form(...)):
    try:
        model = Base.parse_raw(data)
    except ValidationError as e:
        raise HTTPException(detail=jsonable_encoder(e.errors()),
                            status_code=status.HTTP_422_UNPROCESSABLE_ENTITY)
    return model

# ...
@app.post('/upload')
async def upload(request: Request):
    # ...
    # place this after the try-except block
    model = checker(data.value.decode())
    print(model.dict())
test.py
#...
import json
data = {'data': json.dumps({"name": "foo", "point": 0.13, "is_accepted": False})}
#...
Option 2 (Slow) - Upload File and Form data using UploadFile and Form
If you would like to use a normal def endpoint instead, see this answer.
app.py
from fastapi import FastAPI, File, UploadFile, Form, HTTPException, status
import aiofiles
import os

CHUNK_SIZE = 1024 * 1024  # adjust the chunk size as desired

app = FastAPI()

@app.post("/upload")
async def upload(file: UploadFile = File(...), data: str = Form(...)):
    try:
        filepath = os.path.join('./', os.path.basename(file.filename))
        async with aiofiles.open(filepath, 'wb') as f:
            while chunk := await file.read(CHUNK_SIZE):
                await f.write(chunk)
    except Exception:
        raise HTTPException(status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
                            detail='There was an error uploading the file')
    finally:
        await file.close()

    return {"message": f"Successfully uploaded {file.filename}"}
As mentioned earlier, using this option would take longer for the file upload to complete, and as HTTPX uses a default timeout of 5 seconds, you will most likely get a ReadTimeout exception (as the server will need some time to read the SpooledTemporaryFile in chunks and write the contents to a permanent location on the disk). Thus, you can configure the timeout (see the Timeout class in the source code too), and more specifically, the read timeout, which "specifies the maximum duration to wait for a chunk of data to be received (for example, a chunk of the response body)". If set to None instead of some positive numerical value, there will be no timeout on read.
test.py
import httpx
import time

url = 'http://127.0.0.1:8000/upload'
files = {'file': open('bigFile.zip', 'rb')}
headers = {'Filename': 'bigFile.zip'}
data = {'data': 'Hello World!'}
timeout = httpx.Timeout(None, read=180.0)

with httpx.Client(timeout=timeout) as client:
    start = time.time()
    r = client.post(url, data=data, files=files, headers=headers)
    end = time.time()
    print(f'Time elapsed: {end - start}s')
    print(r.status_code, r.json(), sep=' ')

File object (image) as value in a dictionary python

I'm building a web application that sends some information to an API (AWS API Gateway) and receives back an image and some information (strings) about that image.
The strings and the image are generated by a Lambda function (an AWS service) written in Python.
The idea is to have a simple HTML page where I enter information, press a button, and after processing in the cloud I am shown an image and some information.
The handling of the JSON received from the API Gateway is done in JavaScript.
I already have the code for the management of the HTML page; it is already tested and works. I show it for completeness:
function getImageFromLink(){
    return fetch("https://cors-anywhere.herokuapp.com/http://media.gta-series.com/images/gta2/maps/downtown.jpg");
}

async function buttonClick2(){
    const returned = await getImageFromLink();
    console.log(returned);
    let immagine = await returned.blob();
    outside = URL.createObjectURL(immagine);
    document.getElementById("image").src = outside;
}
Now, I want to do it by returning JSON: all keys have strings as values except for one, which is for the image.
How can I do that?
I mean: how can I put the image into the JSON in Python (in the Lambda function)? And how do I have to handle this JSON in JavaScript?
Option 1 (Recommended and easy)
Send the URL of the image instead of sending the whole image blob in your API response. The URL can point to a cloud storage location. This is the recommended way.
Option 2 (For your case)
Convert your image into a Base64-encoded string in Python using the base64 library and send it as part of your JSON response:
{'image': '<your base64 encoded>'}
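A minimal sketch of the Lambda side (the handler name, the image source path, and the extra string field are assumptions for illustration):
import base64
import json

def lambda_handler(event, context):
    # wherever the Lambda generates or loads the image from
    with open('/tmp/generated.png', 'rb') as f:
        image_bytes = f.read()
    body = {
        'description': 'some string info about the image',
        'image': base64.b64encode(image_bytes).decode('utf-8'),
    }
    # API Gateway (proxy integration) expects a JSON-serialisable response like this
    return {
        'statusCode': 200,
        'headers': {'Content-Type': 'application/json'},
        'body': json.dumps(body),
    }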
And decode the base64 string on JS side:
var image = new Image();
image.src = 'data:image/png;base64,iVBORw0K...';
document.body.appendChild(image);
Option 3 (Bit tricky and not preferred)
Here you can send the image as FormData (multipart/form-data), which is not a great way to do it.
Refer to this on how to achieve it: https://julien.danjou.info/handling-multipart-form-data-python/

How to pass data between Django module/app functions without using database in asynchronous web service

I've got a web service under development that uses Django and Django Channels to send data across websockets to a remote application. The arrangement is asynchronous and I pass information between the 2 by sending JSON formatted commands across websockets and then receive replies back on the same websocket.
The problem I'm having is figuring out how to get the replies back to a Javascript call from a Django template that invokes a Python function to initiate the JSON websocket question. Since the command question & data reply happen in different Django areas and the originating Javascript/Python functions call does not have a blocking statement, the Q&A are basically disconnected and I can't figure out how to get the results back to the browser.
Right now, my idea is to use Django global variables or to store the results in the Django models. I can get either to work, but I believe the Django global variables would not scale beyond multiple workers from runserver, or if the system was eventually spread across multiple servers.
But since the reply data is for different purposes (for example, a list of users waiting in a remote lobby, current debugging levels in the remote system, etc.), the database option seems unworkable because the reply data has a varying structure. That, plus the replies are temporal and don't need to be permanently stored in the database.
Here's some code showing the flow. I'm open to different implementation recommendations or a direct answer to the question of how to share information between the 2 Django functions.
In the template, for testing, I just have a button defined like this:
<button id="request_lobby">Request Lobby</button>
With a JavaScript function. This function is incomplete, as I've yet to do anything with the response (because I can't figure out how to connect it):
$("#request_lobby").click(function(){
$.ajax({
type: "POST",
url: "{% url 'test_panel_function' %}",
data: { csrfmiddlewaretoken: '{{ csrf_token }}', button:"request_lobby" },
success: function(response){
}
});
});
This is the Django/Python function in views.py . The return channel for the remote application is pre-stored in the database as srv.server_channel when the websocket is initially connected (not shown):
@login_required
def test_panel_function(request):
    button = request.POST.get('button', '')
    if button == "request_lobby":
        srv = Server.objects.get(server_key="1234567890")
        json_res = []
        json_res.append({"COMMAND": "REQUESTLOBBY"})
        message = ({
            "text": json.dumps(json_res)
        })
        Channel(srv.server_channel).send(message)
    return HttpResponse(button)
Later, the remote application sends the reply back on the websocket and it's received by a Django Channels demultiplexer in routing.py :
class RemoteDemultiplexer(WebsocketDemultiplexer):
    mapping = {
        "gLOBBY": "gLOBBY.receive",
    }
    http_user = True
    slight_ordering = True

channel_routing = [
    route_class(RemoteDemultiplexer, path=r"^/server/(?P<server_key>[a-zA-Z0-9]+)$"),
    route("gLOBBY.receive", command_LOBBY),
]
And the consumer.py :
@channel_session
def command_LOBBY(message):
    skey = message.channel_session["server_key"]
    for x in range(int(message.content['LOBBY'])):
        logger.info("USERNAME: " + message.content[str(x)]["USERNAME"])
        logger.info("LOBBY_ID: " + message.content[str(x)]["LOBBY_ID"])
        logger.info("OWNER_ID: " + message.content[str(x)]["IPADDRESS"])
        logger.info("DATETIME: " + message.content[str(x)]["DATETIME"])
So I need to figure out how to get the reply data in command_LOBBY to the Javascript/Python function call in test_panel_function
Current ideas, both of which seem bad and why I think I need to ask this question for SO:
1) Use Django global variables:
Define in globals.py:
global_async_result = {}
And include in all relevant Django modules:
from test.globals import global_async_result
In order to make this work, when I originate the initial command in test_panel_function to send to the remote application (the REQUESTLOBBY), I'll include a randomized key in the JSON message which would be round-tripped back to command_LOBBY and then global_async_result dictionary would be indexed with the randomized key.
In test_panel_function , I would wait in a loop checking a flag for the results to be ready in global_async_result and then retrieve them from the randomized key and delete the entry in global_async_result.
Then the reply can be given back to the Javascript in the Django template.
That all makes sense to me, but it uses global variables (bad), and it seems it wouldn't scale as the web service is spread across servers. A rough sketch of this idea is shown below.
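Purely to illustrate idea 1 as described above (the key handling, polling interval, timeout, and JsonResponse usage are assumptions, and the scaling caveats just mentioned still apply):
# globals.py
global_async_result = {}

# views.py
import json
import time
import uuid
from test.globals import global_async_result

@login_required
def test_panel_function(request):
    request_key = uuid.uuid4().hex
    srv = Server.objects.get(server_key="1234567890")
    json_res = [{"COMMAND": "REQUESTLOBBY", "KEY": request_key}]
    Channel(srv.server_channel).send({"text": json.dumps(json_res)})
    # poll (with a timeout) until command_LOBBY stores the round-tripped reply
    for _ in range(100):
        if request_key in global_async_result:
            reply = global_async_result.pop(request_key)
            return JsonResponse(reply, safe=False)
        time.sleep(0.1)
    return HttpResponse(status=504)

# consumer.py
@channel_session
def command_LOBBY(message):
    # the remote application echoes the KEY back with its reply
    global_async_result[message.content.get("KEY")] = message.content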
2) Store replies in a Django MySQL models.py table
I could create a table in models.py to hold the replies temporarily. Since Django doesn't allow for dynamic or temporary table creation on the fly, this would have to be a pre-defined table.
Also, because the websocket replies would be in different formats for different questions, I could not know in advance all the fields ever needed, and even if I did, most fields would not be used for differing replies.
My workable idea here is to create the reply table using a field for the randomized key (which is still routed round-trip through the websocket) and another large field to just store the JSON reply entirely.
Then test_panel_function, which is blocking in a loop waiting for the results, would pull the JSON from the table, delete the row, and decode it. Then the reply can be given back to the JavaScript in the Django template. A minimal model sketch is shown below.
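A minimal sketch of such a temporary-reply model (the model and field names are assumptions for illustration):
# models.py
from django.db import models

class WebsocketReply(models.Model):
    # the randomized key that is round-tripped through the websocket
    request_key = models.CharField(max_length=64, unique=True, db_index=True)
    # the raw JSON reply, stored as text and decoded by the waiting view
    reply_json = models.TextField()
    created_at = models.DateTimeField(auto_now_add=True)
command_LOBBY would then call WebsocketReply.objects.create(request_key=key, reply_json=json.dumps(message.content)), and the waiting view would poll WebsocketReply.objects.filter(request_key=key), decode the JSON, and delete the row once read.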
3) Use Django signals
Django has a signals capability, but the response function doesn't seem to be able to be embedded (like inside test_panel_function) and there seems to be no wait() function available for an arbitrary function to just wait for the signal. If this were available, it would be very helpful

Parsing a large JSON array in Javascript

I'm supposed to parse a very large JSON array in JavaScript. It looks like:
mydata = [
{'a':5, 'b':7, ... },
{'a':2, 'b':3, ... },
.
.
.
]
Now the thing is, if I pass this entire object to my parsing function parseJSON(), then of course it works, but it blocks the tab's process for 30-40 seconds (in the case of an array with 160,000 objects).
During this entire process of requesting the JSON from a server and parsing it, I'm displaying a 'loading' gif to the user. Of course, after I call the parse function, the gif freezes too, leading to a bad user experience. I guess there's no way to get around this time; is there a way to somehow (at least) keep the loading gif from freezing?
Something like calling parseJSON() on chunks of my JSON every few milliseconds? I'm unable to implement that though, being a noob in JavaScript.
Thanks a lot, I'd really appreciate it if you could help me out here.
You might want to check this link. It's about multithreading.
Basically :
var url = 'http://bigcontentprovider.com/hugejsonfile';

var f = '(function() {' +
        'send = function(e) {' +
            'postMessage(e);' +
            'self.close();' +
        '};' +
        'importScripts("' + url + '?format=json&callback=send");' +
    '})();';

var _blob = new Blob([f], { type: 'text/javascript' });

_worker = new Worker(window.URL.createObjectURL(_blob));
_worker.onmessage = function(e) {
    // Do what you want with your JSON
};
_worker.postMessage();
Haven't tried it myself to be honest...
EDIT about portability: Sebastien D. posted a comment with a link to mdn. I just added a ref to the compatibility section id.
I have never encountered a complete page lock down of 30-40 seconds, I'm almost impressed! Restructuring your data to be much smaller or splitting it into many files on the server side is the real answer. Do you actually need every little byte of the data?
Alternatively, if you can't change the file, @Cyrill_DD's answer of a worker thread will be able to parse the data for you and send it to your primary JS. This is not a perfect fix, as you would guess. Passing data between the 2 threads requires the information to be serialised and reinterpreted, so you could find a significant slowdown when the data is passed between the threads and be back to square one again if you try to pass all the data across at once. Building a query system into your worker thread for requesting chunks of the data when you need them, and using the message callback, will prevent the slowdown from parsing on the main thread and allow you complete access to the data without loading it all into your main context.
I should add that worker threads are relatively new, main browser support is good but mobile is terrible... just a heads up!

How do I run a basic GET / synch request in Backbone?

I'm not sure I'm using the correct words, but I've looked at the localTodos app and a few other online tutorials.
I'm reading Addy's free online book here:
http://addyosmani.github.io/backbone-fundamentals/#implementation-specifics
but right now I'm getting too much theory and just need to do a basic GET from my server and populate my Collection.
Can someone provide a Hello World for a GET / sync request? All the MySQL tables are set up, and so is the code that provides a nice JSON stream of my table, neatly organized.
I shouldn't need to install a PHP framework as I can respond with the JSON stream just fine on my own.
I just need a starting point as I'm guessing it will be a few weeks before the book hits this if it does at all.
I tagged this PHP, but I don't think it should matter, as all Backbone will see is a JSON stream.
OK, the basics are:
use "fetch" to get something from the server.
use "save" to PUT or POST something to the server.
use "destroy" to delete something from the server.
To perform a fetch, you'll need code like this:
Inside your Model:
# CoffeeScript
url: "pathToYourAPi/"
getAllFromServer: ->
    @fetch()

// JavaScript
url: "pathToYourAPi/",
getAllFromServer: function() {
    return this.fetch();
}
This is the simplest way to get data from the server. But if you want to get specific data from the server, you may need to pass an id or something.
# CoffeeScript
url: "/pathToYourAPi/"
setAttributes: ->
    @set("id": 1)
getItenFromServer: ->
    @fetch()

// JavaScript
setAttributes: function() {
    return this.set({"id": 1});
},
getItenFromServer: function() {
    return this.fetch();
}
It will make a request to your API path, passing the number 1 as a "parameter" to the server.
If you want to specify the data that you want to send to the server in another way, you need to pass an object called data when you're "fetching".
Example inside the model:
# CoffeeScript
GetSomeData: ->
    @fetch({ data: { id: 1 } })

// JavaScript
GetSomeData: function() {
    return this.fetch({ data: { "id": 1 } });
}
I have a post with tips on using Backbone; unfortunately, it's only available in Portuguese.
Try using Google to translate it:
http://www.rcarvalhojs.com/dicas/de/backbone/2014/06/04/5dicas-backbone.html
Hope it helps.
