PIXI: browser sends no cookies with Texture.fromImage()

I am trying to use PIXI to create an image-based sprite, thus:
var s = new PIXI.Sprite(PIXI.Texture.fromImage("bunny.png"))
My server can only locate the correct image file if the request for "bunny.png" arrives with a session cookie. Unfortunately, no cookies are sent (which is evident from server-side debugging and clearly visible in Chrome's developer console).
If I add a simple img tag in the html, I observe (in Chrome's developer console) that cookies are sent and the image is returned without any trouble:
<img src="bunny.png">
I am using PIXI 3.0.5.
What am I failing to understand? Why would these two bunnies behave so differently?

var s = new PIXI.Sprite(PIXI.Texture.fromImage("bunny.png", false))
By default PIXI assumes you want to guard against cross-site abuse, so cookies are suppressed. This is apparently how the PIXI tutorials work (and who cares about cookies in that case?)
If you want the cookies to be sent, you must set the crossorigin parameter (the second argument to fromImage) to false.
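Roughly what happens under the hood, as far as I understand PIXI v3 (an illustrative sketch, not PIXI source): the flag controls whether the underlying Image element gets a crossOrigin attribute.
// Illustrative only: how the crossorigin flag changes the image request.
var img = new Image();
// img.crossOrigin = "anonymous";   // the default path: a CORS-mode request, which
                                    // (per the answer above) is what suppressed the cookies
img.src = "bunny.png";              // with crossorigin set to false the attribute is left
                                    // unset and the request behaves like a plain <img> tag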
I thought I had tried that already, but evidently I was mistaken! Bunnies everywhere now..

How can I cancel consecutive requests to my server? [duplicate]

What would cause a page to be canceled? I have a screenshot of the Chrome Developer Tools.
This happens often but not every time. It seems like once some other resources are cached, a page refresh will load the LeftPane.aspx. And what's really odd is this only happens in Google Chrome, not Internet Explorer 8. Any ideas why Chrome would cancel a request?
We fought a similar problem where Chrome was canceling requests to load things within frames or iframes, but only intermittently and it seemed dependent on the computer and/or the speed of the internet connection.
This information is a few months out of date, but I built Chromium from scratch, dug through the source to find all the places where requests could get cancelled, and slapped breakpoints on all of them to debug. From memory, the only places where Chrome will cancel a request are:
The DOM element that caused the request to be made was deleted (e.g. an IMG is being loaded, but before the load finished, you deleted the IMG node).
You did something that made loading the data unnecessary (e.g. you started loading an iframe, then changed the src or overwrote its contents); there is a sketch of this case below.
There are lots of requests going to the same server, and a network problem on earlier requests showed that subsequent requests weren't going to work (a DNS lookup error, an earlier identical request resulted in, e.g., an HTTP 400 error code, etc.).
In our case we finally traced it down to one frame trying to append HTML to another frame, which sometimes happened before the destination frame had even loaded. Once you touch the contents of an iframe, it can no longer load the resource into it (how would it know where to put it?), so it cancels the request.
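To make the second item in the list above concrete, here is a minimal sketch (the URLs are made up) that reliably produces a (canceled) entry in the Network tab:
// Start loading one page in an iframe, then change its src before the
// response arrives; Chrome drops the first request as (canceled).
var frame = document.createElement("iframe");
frame.src = "/slow-page-a.html";          // first request starts here
document.body.appendChild(frame);

setTimeout(function () {
    frame.src = "/slow-page-b.html";      // makes the first request unnecessary
}, 10);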
A status of (canceled) can also happen for ajax requests triggered from JavaScript events:
<button id="call_ajax">call</button>

<script>
// The button must exist before the handler is attached.
$("#call_ajax").on("click", function (event) {
    $.ajax({
        ...
    });
});
</script>
The event handler successfully sends the request, but it is then canceled (although it is still processed by the server). The reason is that a button inside a form submits the form on click, regardless of whether you also make an ajax request in the same click handler.
To prevent the request from being cancelled, event.preventDefault(); has to be called:
<script>
$("#call_ajax").on("click", function (event) {
    event.preventDefault();   // stops the implicit form submission
    $.ajax({
        ...
    });
});
</script>
NB: Make sure you don't have any wrapping form elements.
I had a similar issue where my button with onclick={} was wrapped in a form element. When clicking the button, the form was also submitted, and that messed everything up...
Another thing to look out for could be the AdBlock extension, or browser extensions in general.
But "a lot" of people have AdBlock....
To rule out extensions, open a new tab in incognito, making sure that "Allow in incognito" is off for the extension(s) you want to test.
In my case I found it was the jQuery global timeout setting: a jQuery plugin had set the global timeout to 500 ms, so whenever a request took longer than 500 ms it was aborted and Chrome showed it as canceled.
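The plugin itself isn't named, but the standard jQuery way to set such a global timeout is $.ajaxSetup, so the situation presumably looked roughly like this:
// A global 500 ms timeout makes jQuery abort any slower request, which Chrome
// then reports as (canceled) in the Network tab.
$.ajaxSetup({ timeout: 500 });

// Raising the timeout globally, or per request, avoids the cancellation:
$.ajax({ url: "/slow-endpoint", timeout: 30000 });   // hypothetical endpoint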
You might want to check the "X-Frame-Options" response header. If it is set to DENY, or to SAMEORIGIN while the framing page is on a different origin, then the iframe load will be canceled by Chrome (and other browsers) per the spec.
Also, note that some browsers support the ALLOW-FROM value but Chrome does not.
To resolve this, you will need to remove the "X-Frame-Options" header. This could leave you open to clickjacking attacks, so you will need to decide what the risks are and how to mitigate them.
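If you control the server, one option is to keep sending the header for pages that must never be framed and omit it for the page loaded in the iframe. A minimal sketch, assuming a Node/Express backend (an assumption; the header may equally be set by nginx, Apache, or a framework default):
const express = require("express");
const app = express();

app.get("/admin", function (req, res) {
    res.set("X-Frame-Options", "DENY");   // this page must never be framed
    res.send("admin page");
});

app.get("/embeddable", function (req, res) {
    // No X-Frame-Options here, so the page may be loaded in an iframe.
    // Weigh the clickjacking risk before doing this.
    res.send("frameable content");
});

app.listen(3000);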
Here's what happened to me: the server was returning a malformed "Location" header for a 302 redirect.
Chrome failed to tell me this, of course. I opened the page in firefox, and immediately discovered the problem.
Nice to have multiple tools :)
Another place we've encountered the (canceled) status is in a particular TLS certificate misconfiguration. If a site such as https://www.example.com is misconfigured such that the certificate does not include the www. but is valid for https://example.com, chrome will cancel this request and automatically redirect to the latter site. This is not the case for Firefox.
Currently valid example: https://www.pthree.org/
A cancelled request happened to me when redirecting between secure and non-secure pages on separate domains within an iframe. The redirected request showed in dev tools as a "cancelled" request.
I have a page with an iframe containing a form hosted by my payment gateway. When the form in the iframe was submitted, the payment gateway would redirect back to a URL on my server. The redirect recently stopped working and ended up as a "cancelled" request instead.
It seems that Chrome (I was using Windows 7 Chrome 30.0.1599.101) no longer allowed a redirect within the iframe to go to a non-secure page on a separate domain. To fix it, I just made sure any redirected requests in the iframe were always sent to secure URLs.
When I created a simpler test page with only an iframe, there was a warning in the console (which I had previously missed, or which maybe didn't show up before):
[Blocked] The page at https://mydomain.com/Payment/EnterDetails ran insecure content from http://mydomain.com/Payment/Success
The redirect turned into a cancelled request in Chrome on PC, Mac and Android. I don't know if it is specific to my website setup (SagePay Low Profile) or if something has changed in Chrome.
Chrome Version 33.0.1750.154 m consistently cancels image loads if I am using Mobile Emulation pointed at my localhost; specifically with User Agent spoofing on (vs. just the Screen settings).
When I turn User Agent spoofing off, image requests aren't canceled and I see the images.
I still don't understand why: in the former case, where the request is cancelled, the Request Headers (CAUTION: Provisional headers are shown) contain only
Accept
Cache-Control
Pragma
Referer
User-Agent
In the latter case, all of those plus others like:
Cookie
Connection
Host
Accept-Encoding
Accept-Language
Shrug
I got this error in Chrome when I redirected via JavaScript:
<script>
window.location.href = "devhost:88/somepage";
</script>
As you see I forgot the 'http://'. After I added it, it worked.
Here is another case of a request being canceled by Chrome, which I just encountered and which is not covered by any of the answers above.
In a nutshell
Self-signed certificate not being trusted on my android phone.
Details
We are in the development/debug phase. The URL points to a host with a self-signed certificate. The code is like:
location.href = 'https://some.host.com/some/path'
Chrome just canceled the request silently, leaving no clue for a web-development newbie like myself to fix the issue. Once I downloaded and installed the certificate on the Android phone, the issue was gone.
If you use axios, this may help: a request that exceeds the axios timeout is aborted and shown as canceled, so adjust the default timeout on your instance.
// change the timeout delay (assuming a custom axios instance):
const instance = axios.create();
instance.defaults.timeout = 2500;
https://github.com/axios/axios#config-order-of-precedence
In my case, I had an anchor with a click handler, like
<a href="" onclick="somemethod($index, hour, $event)">
Inside the click handler I made a network call, and Chrome was cancelling the request. An anchor whose href is "" reloads the page, so at the same time the network call started by the click handler gets cancelled. When I replaced the href with a void expression, like
<a href="javascript:void(0)" onclick="somemethod($index, hour, $event)">
The problem went away!
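An equivalent fix is to keep the href and stop the default navigation in the handler instead; a minimal sketch, where the selector is made up and somemethod is the handler from the snippet above:
document.querySelector("a.hour-link").addEventListener("click", function (event) {
    event.preventDefault();                // stops the reload that was cancelling the request
    somemethod(/* $index, hour, event */);
});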
If you use Observable-based HTTP requests like the ones built into Angular (2+), the HTTP request can be canceled when the observable is unsubscribed (a common situation when you use the RxJS 6 switchMap operator to combine streams). In most cases it is enough to use the mergeMap operator instead, if you want each request to complete.
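A minimal RxJS 6 sketch of the difference, with a made-up element and endpoint (in Angular you would typically call HttpClient inside the operator instead of rxjs/ajax):
import { fromEvent } from 'rxjs';
import { ajax } from 'rxjs/ajax';
import { mergeMap } from 'rxjs/operators';

const clicks$ = fromEvent(document.querySelector('#load'), 'click');

// With switchMap, a new click unsubscribes from the previous inner observable
// and its HTTP request shows up as (canceled) in DevTools.
// mergeMap lets every request run to completion instead.
clicks$.pipe(
  mergeMap(() => ajax.getJSON('/api/data'))
).subscribe(data => console.log(data));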
I faced the same issue; somewhere deep in our code we had this pseudocode:
create an iframe
onload of iframe submit a form
After 2 seconds, remove the iframe
So when the server took more than 2 seconds to respond, the iframe that the server was writing the response into had already been removed. The response was still being written, but there was no iframe to write it to, so Chrome cancelled the request. To avoid this, I made sure the iframe is removed only after the response is complete; alternatively you can change the form's target to "_blank".
Thus one of the reasons is:
if the resource (an iframe in my case) that something is being written into is removed or deleted before the write finishes, the request will be cancelled.
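A sketch of that fix, with made-up element names: remove the iframe on its load event, i.e. only after the server has finished writing the response into it, instead of after a fixed 2-second timeout.
var frame = document.querySelector('iframe[name="submit_target"]');   // hypothetical target iframe
var form = document.querySelector("#my_form");                        // hypothetical form with target="submit_target"

frame.addEventListener("load", function () {
    frame.remove();   // safe now: the response has finished loading into the iframe
});
form.submit();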
I embed several font formats (woff, woff2, ttf) when declaring a web font in a style sheet. Recently I noticed that Chrome cancels the requests for the ttf and woff files when woff2 is present. I am on Chrome 66.0.3359.181 right now, but I am not sure when Chrome started canceling the extra font formats.
We had this problem with a <button> tag inside a form that was supposed to send an ajax request from JS. But the request was canceled, because the browser automatically submits the form on any click of a button inside it.
So if you really want to use a button instead of a regular div or span on the page, and you want to submit the form through JS, you should set up a listener that calls preventDefault.
e.g.
$('button').on('click', function (e) {
    e.preventDefault();   // stop the native form submission
    // do ajax
    $.ajax({
        ...
    });
});
I had the exact same thing with two CSS files that were stored in another folder outside my main css folder. I'm using Expression Engine and found that the issue was in the rules in my htaccess file. I just added the folder to one of my conditions and it fixed it. Here's an example:
RewriteCond %{REQUEST_URI} !(images|css|js|new_folder|favicon.ico)
So it might be worth checking your htaccess file for any potential conflicts.
The same thing happened to me when loading a .js file with $.ajax and making an ajax request from it; what I did was load it normally instead.
In my case, the code that opens the e-mail client window caused Chrome to stop loading images:
document.location.href = mailToLink;
moving it to $(window).load(function () {...}) instead of $(function () {...}) helped.
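In other words, something like this (.on("load") being the non-deprecated spelling of $(window).load):
$(window).on("load", function () {
    // Navigating to the mailto: link only after everything has loaded keeps
    // Chrome from cancelling image requests that are still in flight.
    document.location.href = mailToLink;
});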
In case this helps anybody: I came across the cancelled status when I left out the return false; in the form submit handler. This caused the ajax send to be immediately followed by the default submit action, which replaced the current page. The code is shown below, with the important return false at the end.
$('form').submit(function () {
    $.validator.unobtrusive.parse($('form'));
    // serialize() returns a query string, so append the token to it
    // (setting a property on the string has no effect)
    var data = $('form').serialize();
    data += '&__RequestVerificationToken=' +
        encodeURIComponent($('input[name=__RequestVerificationToken]').val());
    if ($('form').valid()) {
        $.ajax({
            url: this.action,
            type: 'POST',
            data: data,
            success: submitSuccess,
            error: submitFailed   // $.ajax uses "error", not "fail"
        });
    }
    return false; // needed to stop the default form submit action
});
Hope that helps someone.
For anyone coming from LoopbackJS and attempting to use the custom stream method as provided in their chart example: I was getting this error using a PersistedModel; switching to a basic Model fixed my issue of the EventSource requests showing a cancelled status.
Again, this is specifically for the LoopBack API. Since this is a top answer and high up on Google, I figured I'd throw this into the mix of answers.
For me, the 'canceled' status was because the file did not exist. It is strange that Chrome does not show a 404 instead.
It was as simple as an incorrect path for me. I would suggest the first step in debugging would be to see if you can load the file independently of ajax etc.
The requests might have been blocked by a tracking protection plugin.
It happened to me when loading 300 images as background images. I'm guessing that once the first one timed out, it cancelled all the rest, or the maximum number of concurrent requests was reached. I need to implement something like 5 at a time, as sketched below.
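A minimal sketch of the "5 at a time" idea, where backgroundUrls is a hypothetical array of image URLs:
function loadWithLimit(urls, limit) {
    var index = 0;
    function next() {
        if (index >= urls.length) { return; }
        var img = new Image();
        img.onload = img.onerror = next;   // start the next image when this one finishes
        img.src = urls[index++];
    }
    for (var i = 0; i < limit && i < urls.length; i++) { next(); }
}

loadWithLimit(backgroundUrls, 5);   // never more than 5 requests in flight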
One of the reasons could be that XMLHttpRequest.abort() was called somewhere in the code; in that case the request will have the cancelled status in the Chrome Developer Tools Network tab.
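For example (with a hypothetical endpoint):
var xhr = new XMLHttpRequest();
xhr.open("GET", "/api/data");
xhr.send();
xhr.abort();   // DevTools shows this request as (canceled)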
In my case, it started appearing after the Chrome 76 update.
Due to an issue in my JS code, window.location was being updated multiple times, which resulted in the previous request being canceled.
Although the issue was present before, Chrome only started cancelling the request after the update to version 76.
I had the same issue when updating a record. Inside save() I was prepping the raw data taken from the form to match the database format (doing a lot of mapping of enum values, etc.), and this intermittently canceled the PUT request. I resolved it by taking the data prepping out of save() and creating a dedicated dataPrep() method for it. I made dataPrep() async and awaited all the memory-intensive data conversion, then returned the prepped data to save(), which uses it in the HTTP PUT call. I made sure to await dataPrep() before calling the PUT method:
const dataToUpdate = await dataPrep();   // wait for all the data conversion to finish
http.put(apiUrl, dataToUpdate);
This solved the intermittent cancelling of request.

HTML5 localStorage Showing Different Data in Different Frames (Same Domain)

I am seeing an issue that, by my reading, shouldn't be happening. In short, I have a web application with nested iframes. The frame containership is as shown below:
http://mysite.doh
    https://othersite.duh
        https://mysite.doh/Panel.html?urlparam1=x
        https://mysite.doh/Panel.html?urlparam2=y
Note: I have been careful about indicating http vs. https, since the protocol is part of what is considered for the same-origin policy. These are indeed HTTPS iframes inside an HTTP main page. When Panel.html opens, it attempts to record a couple of query parameters so that its twin panel(s) also know them. So there is code like:
var urlparam1 = $.urlParam('urlparam1');   // $.urlParam: a custom query-string helper, not jQuery core
if (urlparam1 == null || urlparam1 == '') {
    urlparam1 = localStorage.getItem("urlparam1");
} else {
    localStorage.setItem("urlparam1", urlparam1);
}
My goal is to transfer the value of urlparam1 to all versions of Panel.html. This is actually the case any time a panel opens on the browser, and each version of the Panel has a polling function to look for changed data in the localStorage.
However, localStorage seems to not be getting the job done correctly. Changes to localStorage on one panel are not reflected in its counterpart. This seems mighty odd, since it is my understanding that localStorage is to be shared by all pages with the same origin. The two panels are definitely the same origin: they're literally the same URL, just with different query parameters.
Anyone know why this might be the case? I have looked at the localStorage dictionaries, and they have completely different data inside them (e.g., one has urlparam1, the other urlparam2; they should both have the same data, with both values available). I can think of no reason why this might be the case. The origins are identical (including the protocol). The browsers I have been testing with are Firefox and Chrome (mostly Firefox).
EDIT: As an update, it seems like it might be due to the sandbox settings of the iframe built by othersite.duh. They are allowing scripts but may be disallowing allow-same-origin. (Source: https://readable-email.org/list/whatwg/topic/cross-origin-iframe-and-sandbox-allow-same-origin) This appears to be advised on many older sites, since it indicates that if both allow-scripts and allow-same-origin are enabled, the iframe can then remove its own sandbox attribute. Is that still true (the sources were from 2013)? Because that sounds like an incredibly bad design decision, if so. I can't imagine why anyone would want that to be the desired behavior. To clarify what allow-same-origin does:
If you don't have allow-same-origin, the content ends up in a unique origin, not its "real" origin.
So it's as if your frame runs on some arbitrary site, rather than the one that you actually use.

A couple of requests with user@ in URL lead to "Policy breach notice" from Google AdSense

I recently got an email from Google saying that they are going to ban my AdSense account because I'm sending Personally Identifiable Information to them with my Google AdSense tag requests. It says that around 1% of requests from my website have a referrer of:
some_user@my_website.com/some/subpage
and they consider some_user@my_website.com to be PII (even though it can be completely made up, e.g. abcd1234@my_website.com). More on this here: https://support.google.com/adsense/answer/6163366?hl=en .
I never link to this kind of URL (the only form I use is my_website.com/some/subpage), but I guess my users sometimes enter it manually (since, product-wise, my website provides an email service, it may seem reasonable by some logic).
I figured a URI of some_user@my_website.com/some/subpage is legal, since HTTP basic auth allows specifying a user like this. When I entered it manually in Firefox, the some_user@ part disappeared from the location bar, but in the Net panel of Firebug I could see all files were indeed requested from some_user@my_website.com/some/subpage, and that's how Google sees it too.
I thought that as a brute-force solution even something like:
if uri contains '@':
    redirect to my_website.com
would do.
I'm using NGINX/uWSGI/Python Paste + JS. I've tried to implement the above condition both on the server side and in JS, but my URI always says my_website.com/some/subpage even if I manually put some_user@my_website.com/some/subpage in the browser address bar.
I've also tried configuring basic_auth in NGINX to disallow providing any user but with no effect.
How do I get rid of these requests?
How do I get the FULL URI (with some_user@) in JS? I tried document.URI and window.location.href but they didn't contain the user part...
Apparently the presence of the user@ part in the URI can be detected by examining window.location.href. I hadn't noticed it before because window.location.href only contains user@ in WebKit-based browsers (e.g. Chrome, Opera, Safari) but not in Firefox!
To resolve the problem I've added a check for that in JS, plus a JS redirect to a URL without the user[:password]@ part.
Hopefully Google uses the same variable to figure out the referrer for the ad requests, so it gets PII only from WebKit browsers and fixing it for WebKit suffices. Will keep you posted.
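For reference, a sketch of that kind of check and redirect (the regex simply looks for a userinfo part before the host; location.host and the other location pieces never include it, so the rebuilt URL is clean):
// Only WebKit-based browsers expose the user[:password]@ part in location.href.
if (/^[a-z][a-z0-9+.-]*:\/\/[^\/?#]*@/i.test(window.location.href)) {
    window.location.replace(
        window.location.protocol + "//" + window.location.host +
        window.location.pathname + window.location.search + window.location.hash
    );
}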

Is it possible to disable JavaScript/jQuery from the browser inspector console?

Hi, I was wondering whether there is any way to disable the ability to change the JavaScript/jQuery from the inspector console?
Just in case you want to prevent a user from interacting with and changing things in the DOM using the console, or from submitting forms while bypassing some JavaScript checks.
Or is that impossible, and you just have to handle all the security for this kind of thing on the server side?
Thanks!
Anything on the client side is never going to be fully secure. This is because it can be manipulated not only by the browser's developer tools, but by any number of other 3rd party tools.
The server itself must be fully secured, because there is no way of guaranteeing that a request is even being made from the web site itself, let alone that the javascript validation was not tampered with.
Yes, to disable the console just run this on the client:
Object.defineProperty(console, '_commandLineAPI', {
    get: function () {
        throw "Console is disabled";
    }
});
This won't let them use the console.
Note: there isn't a 100% secure option to get around this, but at least doing this blocks console usage. Add security on your server to check which requests are legit.
Also, this will only work in Chrome, because Chrome wraps all the console code in:
with ((console && console._commandLineAPI) || {}) {
<code area>
}
Firefox wraps code from the console differently, which is why this is not 100% secure protection against console commands.

Load Wikipedia page and print locally

This is a weird one. I am attempting the following.
I have a local HTML and JavaScript file which generates a random Wikipedia page. When I get the URL for the random Wikipedia page I want to send it to the printer. However, both Chrome and Firefox seem to have a real problem with this.
In Chrome I get an error:
Unsafe JavaScript attempt to access frame with URL https://secure.wikimedia.org/wikipedia/en/w/index.php?title=Popran_National%20Park&printable=yes from frame with URL [my local file]. Domains, protocols and ports must match.
gol.js:99 Uncaught TypeError: Object [object DOMWindow] has no method 'print'
In Firefox:
Permission denied to access property 'print'
[Break On This Error]
infoWindow.print();
Do you think this could be because I am running things locally?
My code for spawning the new window is:
var printURL = "https://secure.wikimedia.org/wikipedia/en/w/index.php?";
infoWindow = window.open(printURL, 'wiki');
setTimeout(printWin, 2000);
where printWin() is:
function printWin() {
    infoWindow.print();
    infoWindow.close();
}
It's the browser's same-origin security policy that you are running into.
What you need to do is run the GET request for the Wiki page through a server, so the server acts as a proxy. The browser will allow this because, from its perspective, the content comes from the same origin as your hosting page.
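A minimal sketch of such a proxy, assuming a Node 18+ / Express server (the route name is made up): the page would open /wiki-print?title=... from its own origin, so calling print() on the popup is no longer a cross-origin access.
const express = require("express");
const app = express();

app.get("/wiki-print", async function (req, res) {
    // Fetch the printable Wikipedia page server-side and relay it to the browser.
    const url = "https://secure.wikimedia.org/wikipedia/en/w/index.php?printable=yes&title=" +
                encodeURIComponent(req.query.title || "");
    const upstream = await fetch(url);   // global fetch, available in Node 18+
    res.type("html").send(await upstream.text());
});

app.listen(8080);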
You might still get broken links. You might have to come up with a way to proxy all of those as well, or rewrite the HTML. If you do that, you are getting into copyright territory, and I'm not sure what's what when it comes to all that.
Are you allowed to proxy Wikipedia content through a server, thereby masking its origin? Maybe you are as long as you don't change the content. But if you adjust the HTML to make it look like it was meant to look, then are you being a bad boy or a good boy? I have no idea whatsoever on this.
I think I answered your technical question though.
