I'm trying to use the StockTwits widget in a web page that is (and must remain) served over SSL.
Since the widget-loader.min.js script was being loaded over http, I copied the code to our domain so it would be served over our SSL. That didn't solve the problem: Chrome 25 says my page "ran insecure content" and completely refuses to run or even display the widget.
So I dug into the .js file and found this little bit:
m=b.ssl?"https://":"http://"
Figuring the warning was coming from the widget's call to the service, I hacked this line as follows:
// add one character ↓
m=b.ssl?"https://":"https://"
Initially I thought I had succeeded, because Chrome 25 loaded the widget! But my glee was short-lived once I saw that the nice GREEN https:// in the address bar had changed to a yellow warning sign. Clicking it for info revealed a warning: "...displayed insecure content from http://assets1.stocktwits.net....". {darn}
Since the .js is now served securely, and the call to StockTwits is made over https thanks to the hack, I have to conclude that StockTwits either isn't able, or isn't configured, to reply over HTTPS.
Do you have any experience with this widget, or see something I am not doing correctly? TIA.
We've fixed the widget to use protocol-relative URLs now. Let us know if that fixes the issue for you.
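A protocol-relative URL simply omits the scheme, so the browser reuses whichever scheme the embedding page was loaded with. For example (the path here is illustrative, not necessarily the exact widget URL):
<!-- Illustrative only: a scheme-less (protocol-relative) script reference.
     An https page fetches this over https, an http page over http. -->
<script src="//stocktwits.com/widgets/widget-loader.min.js"></script>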
Currently the widget is not fully SSL compatible.
We will look into what would be involved in offering an SSL option for the widget. It seems like it could be done with minimal impact.
Apologies if this is a roundabout way of asking this question, but I am a little confused about how the web and JavaScript work.
What I want to do: execute JavaScript on every page in a list of URLs I have found (specifically, use jQuery to pull information from them).
Problem: I can't execute JavaScript against these pages because they aren't mine and don't send the Access-Control-Allow-Origin header, so I can't load them with AJAX in order to use jQuery on them.
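For example, a request along these lines is what fails for me (the URL is just a placeholder):
// Placeholder URL: the target site sends no Access-Control-Allow-Origin
// header, so the browser blocks the response and this callback never runs;
// the console shows a same-origin/CORS error instead.
$.get('http://example.com/some-page.html', function (html) {
    console.log(html.substring(0, 100));
});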
BUT Google Chrome can both load these pages and execute JavaScript on them (via its developer console). So if I wanted to, I could go to each page, open the developer console, and pull the information from there. If there's nothing stopping Chrome from accessing these, then why am I stopped? And, is there a way around this?
Thank you, and I hope my description makes sense. I've been researching this for a while but have found nothing that explains why CORS seems so inconsistent.
I could go to each page, open the developer console, and pull the information from there. If there's nothing stopping Chrome from accessing these, then why am I stopped?
You're not stopped. You, the human at the keyboard, can do exactly as you say, by visiting each page as a top-level page.
What is stopped -- happily -- is any and all scripts on the Web you happen to run having the same level of visibility that you do. Based on your cookies and your network topology, you have a unique view into the Web. You can see your home router's control interface (on 192.168.1.1 or similar). You can see any local web server you're running on 127.0.0.1. No one else can see these. If the same-origin policy were not in place, then any script that you loaded on the Web could inspect these.
And, is there a way around this?
If you have some scripts that you trust absolutely (hopefully a very small subset of all the scripts that exist on the Web) that you want to be able to bypass the same-origin policy and see your full, cross-domain view of the Web, you could load them as an extension, which can act with elevated permissions beyond the abilities of normal web pages. (See How does Same Origin Policy apply to browser extensions?)
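As a rough sketch only (the exact APIs depend on the browser and extension framework), a script running with an extension's host permissions could do what an ordinary page script cannot:
// Hypothetical extension background script: assuming the extension's manifest
// grants host permissions for the target site, this request is not subject to
// the page-level same-origin restriction.
var xhr = new XMLHttpRequest();
xhr.open('GET', 'http://example.com/some-page.html');
xhr.onload = function () {
    // Parse the fetched page and pull data out with ordinary DOM APIs.
    var doc = new DOMParser().parseFromString(xhr.responseText, 'text/html');
    console.log(doc.title);
};
xhr.send();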
I'm going to assume that you are looking to grab data from these pages that aren't yours and store it somewhere. I have done this before with curl in PHP. If you want to display these sites for users to interact with in a different way, starting from a page that is yours, you may be able to render them by grabbing the source HTML with curl and serving it as a sort of proxy.
I've used this tutorial for something similar: https://www.youtube.com/watch?v=_kQN-3aNCeI. Hopefully this gives you a start; a rough sketch of the proxy idea is below. I think you should add a little more detail to your question, though, to get more help.
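Just to illustrate the idea (this is not the code from the tutorial, and it uses Node.js rather than the PHP/curl approach I mentioned), a minimal proxy might look something like this:
// Rough illustration of the proxy idea: your page requests /​?url=...,
// the server fetches the remote page, and returns the HTML from your own
// origin, so the browser never makes a cross-origin request.
var http = require('http');
var url = require('url');

http.createServer(function (req, res) {
    var target = url.parse(req.url, true).query.url;
    if (!target) {
        res.writeHead(400);
        return res.end('missing url parameter');
    }
    http.get(target, function (remote) {
        res.writeHead(200, { 'Content-Type': 'text/html' });
        remote.pipe(res);
    });
}).listen(8080);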
It looks like AJAX is indeed unable (at least for all practical purposes) to write foreign HTML into the current page. But what if your CDN host served, say, a script that simply document.write()s everything? Then your HTML document would contain nothing but a remote script tag.
<html>
<script src="https://pastebin.com/raw.php?i=0wm5v7i6">
</script>
</html>
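The remote script itself (the paste) would then just be a series of document.write() calls, for example:
// Hypothetical contents of the remote script: it emits the entire page
// as the document is being parsed.
document.write('<h1>Hello from the remote script</h1>');
document.write('<p>The whole page can be emitted this way.</p>');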
I tried this. The funny thing is, sometimes it works and other times it throws a kind of security error.
Why doesn't this work? What if, on your own website, you simply put everything on an easy host like Google Drive?
What if, on your own website, you simply put everything on an easy host like Google Drive?
That is possible, unless:
You want control over your website and don't want to depend on the security and availability of another site, or risk somebody reporting your pastebin as abuse and getting it deleted.
You want to make proper use of security features like Content Security Policy and don't want to allow everything from pastebin.com (see the example after this list).
You want search engines to find you. Although Google, at least, does some limited interpretation of JavaScript, I doubt it will handle this content the way you would like.
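For example, a restrictive policy like the following (shown here as an HTML meta tag, purely as an illustration) would refuse to load the pastebin-hosted script at all:
<!-- Illustrative CSP: only scripts from your own origin are allowed,
     so a script hosted on pastebin.com would be blocked. -->
<meta http-equiv="Content-Security-Policy" content="script-src 'self'">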
From the looks of it, PasteBin doesn't supply content over SSL (https). You've put https in the URL to your script, but PasteBin just redirects this request to http, and the net effect is that you are trying to access a script over http when the page is accessed over https, and Chrome prevents that.
Just try going to https://pastebin.com/raw.php?i=0wm5v7i6: your browser will be redirected to http://pastebin.com/raw.php?i=0wm5v7i6.
I created a new website using a newly registered domain.
When trying to share it as a link in Facebook, it is classed as "spammy" and I'm unable to share it.
After a few weeks of research and reporting it to FB, I copied the site entirely and placed it on a new TLD.
That one instantly became blocked on Facebook as well, which made me think there is something in the structure of the site causing it to be marked as spam.
Using the object debugger on the original URL has given a number of different responses, such as:
"Error parsing input URL, no data was scraped"
Response code 206
Response code 203
I read that using Chrome can bug it out, so I used Firefox and Safari to check.
Does anyone have any idea why the response codes vary for a static site?
Are there any specific site setups which are currently causing FB to block?
I have read that certain .htaccess configurations, such as www-to-non-www redirects, can upset FB. Is this true?
The sites in question are:
Link 1 (this was intended to be the only domain)
Link 2 (this was set up only when the original domain was blocked)
These domains are new and have never been used for spamming or mail.
I have checked all the blacklists I could possibly search and have not found anything that indicates problems.
It really does seem that there is something in the configuration of the site that is causing it to be blocked. Does anyone have any ideas or experience with this?
I was able to scrape the page correctly using the Debug Lint tool:
http://developers.facebook.com/tools/debug/og/object?q=http%3A%2F%2Fwww.sophie-mcelligott.com%2F
Maybe Facebook blocks newly registered domains for a fixed time before letting you share them, presumably to stop spammers. Are you able to scrape both sites correctly?
I am using Simple Facebook Connect for WordPress.
However, I am getting some JavaScript errors.
www.connect.facebook.com/widgets/fan.php?api_key=xxxx&channel_url=http%3A%2F%2Fjquery.webspirited.com%2F%3Fxd_receiver%3D1&id=189373481094312&name=&width=285&connections=10&stream=0&logobar=1&css=
GET (same URL as above) undefined (undefined)
Unsafe JavaScript attempt to access frame with URL http://jquery.webspirited.com/ from frame with URL http://www.facebook.com/extern/login_status.php?api_key=xxxx&extern=2&channel=http%3A%2F%2Fjquery.webspirited.com%2F%3Fxd_receiver%3D1&locale=en_US. Domains, protocols and ports must match.
How can I fix these errors?
Short answer: you can't. This error happens in Safari and sometimes Chrome. The WebKit-based browsers have a somewhat tighter security model for cross-domain access under the same-origin policy. The way Facebook Connect works is that it tries one method to make things work, and if that fails, it falls back to another approach.
The fallback means the code still works, but the error comes up because that first method is tried and fails.
This is how Facebook's code works. You can't fix it. You can't work around it. If you're going to use Facebook's code, then you learn to live with it.
The last time I got an error like this, I had forgotten to set up the URL in my Facebook application.
http://www.facebook.com/developers/ > Application settings > Web Site > Site URL, Site Domain
The API key is always linked to your URL. The website where you embed the iframe must have the same URL as the one configured there.
You might like my Simple Facebook Comments For WordPress plugin, which I recently released. It makes the whole process of adding Facebook Connect comments to your WordPress site easy and fast.
http://www.davidswordpressplugins.com/simple-facebook-comments-for-wordpress/
I wrote a Google Maps API wrapper in JS, did some local tests with static HTML, and everything worked just fine. Then I loaded the files onto a local web server running at localhost:8080, tested the map panels, and once again everything worked just fine.
Then a week went by, I added a map to a page, and I couldn't get it to load. Nothing has changed (that I know of, anyway; obviously something has), and static tests continue to work just fine, but when I try to load the API from the server I get this in the debug console:
XMLHttpRequest cannot load http://maps.google.com/maps/api/js?sensor=false.
Origin http://localhost:8080 is not allowed by Access-Control-Allow-Origin.
I read a bit around, but I still can't understand the error, much less fix it. Can someone please give me a hand?
Edit: I use a simple script tag to load Google's JS. No jQuery, nothing else.
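i.e. essentially this (the same URL that appears in the error message):
<!-- An ordinary script tag load, which is not subject to the
     Access-Control-Allow-Origin check that XMLHttpRequest enforces. -->
<script src="http://maps.google.com/maps/api/js?sensor=false"></script>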
Thanks in advance!
@Santiago, hopefully I can help you now :) The error you are receiving is due to Google's servers not allowing cross-site requests. You can find the info here: Google, which also references: Wikipedia Article.
It looks like you'll need to create a proxy service for your client on your public-facing web server. Since the request comes from your public web server and the reply goes back to your web server, it will meet the same-domain requirement that Google imposes. I don't know enough Python yet to create a Pythonic CGI proxy, but I have to think there are many different solutions out there already. You will also be limited by your server as to what type of solution you can employ.
HTH!
~MWR