So I'm working on a mobile application that will present a WordPress site.
I'm building it as a web page, then distributing it to all the mobile platforms using PhoneGap (which essentially wraps a WebKit view, with all its functionality, around that small web page and packages it as an application).
The web side is written in jQuery/JavaScript/HTML/CSS.
I've run into a security problem. All the data on the original WordPress site is private, and only members can see it.
And I need to get that data in JSON format.
And there lies my problem. :)
I can install (I did install) the JSON API plugin, and I can get every bit of data I want. But the problem is that anyone can, just by typing some "get" stuff into the URL (for example: mysite.com/json/get_post/?id=1).
I need to "secure" that data and return it only if the user asking for it is logged in.
What is the best solution?
I know there are a lot of security problems with this setup, and only some encryption would really be useful. But I need a quick and easy solution that will at least make it harder than just typing a URL. :)
I found something about OAuth, but didn't really understand how to use it. Any ideas? Any WordPress plugin? Anything?
Thank you. :)
What you are describing is an OWASP A4 vulnerability - Insecure Direct Object Reference. Encryption does not solve this problem and is by no means the right tool. The problem is one of access control: the user needs to log in to the WordPress install in order to access this type of information. WordPress has a session system built in, and you'll have to read up on their API documentation.
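On the client side, the request from the PhoneGap webview then needs to carry the WordPress login cookie. A minimal sketch, assuming the JSON API plugin's query-style URLs and standard 401/403 rejections (the endpoint shape and status handling here are assumptions, not the plugin's documented behavior):

```javascript
// Build the JSON API URL for a post (hypothetical endpoint shape).
function buildPostUrl(base, id) {
  return base + '/?json=get_post&id=' + encodeURIComponent(id);
}

// Fetch a post as the logged-in user. credentials: 'include' makes the
// browser/webview send the WordPress session cookie with the request,
// so the server-side access-control check can see who is asking.
function fetchPost(id) {
  return fetch(buildPostUrl('https://mysite.com', id), {
    credentials: 'include'
  }).then(function (res) {
    if (res.status === 401 || res.status === 403) {
      throw new Error('not logged in');
    }
    return res.json();
  });
}
```

The important part is still the server: WordPress must reject the request when no valid session accompanies it; the client merely forwards the cookie.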
I'm not an experienced web developer, so I'm having some trouble connecting the dots with regard to the ClickUp API (in my case), though this can probably be generalized as well.
I've been doing a bit of research but can't seem to find exactly what I'm looking for, which makes me think I'm missing something.
In my case, the goal is to make certain information on ClickUp publicly VIEWABLE (not writable) and format the HTML/CSS ourselves. This would be viewable to strangers without requiring authentication with ClickUp in any way.
ClickUp provides public links and embed options:
My attempt (JS): postMessage(content, url) and document.getElementById(frameID)
Issue: these fail in browsers that enforce cross-origin protection (the same-origin policy). Forcing strangers to disable this security is not viable for us. I've also tried alloworigin, anyorigin, etc., with no luck.
ClickUp API 2.0 for POST, GET, etc.
Attempt: I wanted to use this unofficial JS wrapper https://www.npmjs.com/package/clickup.js but haven't written anything yet.
Issues: from my understanding, this requires the stranger to authenticate with ClickUp before being able to read information, and it would require server-side code.
Other issues: this read-only page is intended to be hosted on GitHub, which doesn't support server-side code (from my understanding), so the solution I'm looking for may not exist.
The most straightforward way I can think of to solve this is the embedded-iframe parsing method, but the cross-origin problem kills it dead in the water. At this point I'm stuck and would appreciate any additional feedback.
To formalize my question:
What architecture/pieces would I need to connect a front-end web page with private ClickUp information in a read-only fashion that allows front-end HTML/CSS modifications without user authentication?
My web development knowledge doesn't really go beyond JavaScript and maybe a tiny bit of PHP, so if there are other tools available for this, please let me know and I'll look into them.
Thanks for your time.
I have an iOS app that uses parse.com as its backend service. Now, I've hired someone to build a website interface using HTML and CSS. I want to share the same data between the iOS app and the website. I know parse.com offers a few ways to do this, including creating a JavaScript application. The problem is, my programmer doesn't have any experience in JavaScript, and neither do I.
My question is: is it possible to use what I have (Objective-C, Xcode) to retrieve data from parse.com and show it on the website? Even if I need to code something new, is it possible to use Objective-C together with HTML and CSS?
Thanks.
Parse has several APIs, one of which is REST. Your web developer should use the REST API to get data from Parse:
https://www.parse.com/docs/rest
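As a rough sketch, a read-only query from the website's JavaScript could look like this (the class name, app ID, and REST key are placeholders; the header names follow Parse's REST docs, but treat the details as assumptions):

```javascript
// Headers Parse's REST API expects on every request.
function parseHeaders(appId, restKey) {
  return {
    'X-Parse-Application-Id': appId,
    'X-Parse-REST-API-Key': restKey
  };
}

// Fetch all objects of a class, e.g. fetchClass('Posts', appId, restKey).
// The response body is JSON of the form { results: [...] }.
function fetchClass(className, appId, restKey) {
  return fetch('https://api.parse.com/1/classes/' + className, {
    headers: parseHeaders(appId, restKey)
  }).then(function (res) { return res.json(); });
}
```

No Objective-C is involved at all: the same backend data the iOS app reads is exposed over plain HTTP, which any web page can consume.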
Where there's a will there's a way, but you'd be building something very specific to your use case; it would be non-standard and immediately hard to maintain. I recommend that you hire another developer and do things properly using the technologies Parse gives you. If the cost seems high now, I can promise you it will be much higher if you go down the path you're on.
So my answer is:
Yes, everything is possible, and no, don't do it! :)
Edit: added an example of a possible way to do it, to actually answer the OP's question.
Example case:
1. Create a simple Mac application in Xcode that fetches data exactly like you do on iOS, and store the needed data in a database of your choice on your server.
2. You now have access to the data you needed from Parse, but in a local mirror. You will need some tool to serve that data, though; I recommend a simple PHP script.
Note that this requires an OS X machine to be running at all times to fetch the data. You'll also need to find a way to fetch data on demand when a user needs it, versus polling at set intervals. As I said, this will hardly scale and will be costly.
I have been searching for an answer to this problem now for several weeks. I also previously tried to research this a few years ago to no avail.
Problem Summary:
My company has developed a web-based data analytics suite for a major beverage distributor. They have recently asked for a feature that allows the user to print or download a visually pleasing version of the rendered app as a PDF. I have had no luck finding a solid, controllable, or reliable method to do this. I was hoping the Stack community might be able to point me in the right direction.
Current Tech Stack:
Plack servers
Perl, based on the Dancer framework
Standard web dev front-end: HTML5, CSS3, JavaScript, jQuery/jQuery UI
The client is using IE9/10 and Chrome.
Attempted Solutions Summary:
Obviously I started with window.print() and tried to control what printed using classes and a specialized print.css, but the output was still awful.
I looked into pdfmachine and pdfbox, and even contacted Adobe's Acrobat development team directly to see if they had an out-of-the-box solution our company could purchase. I was informed that such a product would run counter to their desired business model of putting an Acrobat subscription on each client computer rather than a single server-side application.
I have extensively searched the stack articles but did not feel that the articles I found covered what I was looking for.
At present, I am all out of ideas and am hoping somebody out there has had better luck at this than I have.
tl;dr = I need a PDF version of the rendered output of a complex reporting app.
Thanks for your time, Stack, I appreciate it.
A solution I have used in the past is PhantomJS running on a server to generate the PDF for download/email. Usually, if the content is sensitive, the server (which handles authentication) provides a single-use viewing token that is passed to a PhantomJS process, which loads the URL with the viewing token and then saves it as a PDF.
Further info on PhantomJS's screen-capture API can be found on GitHub:
https://github.com/ariya/phantomjs/wiki/Screen-Capture
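A minimal capture script in that style might look like the following. This is a sketch, not production code: it runs under the phantomjs binary (e.g. `phantomjs render.js http://example.com/report out.pdf`), not Node, and the paper settings are assumptions.

```javascript
// Paper settings for the PDF output (A4 portrait is an assumption).
function paperSettings() {
  return { format: 'A4', orientation: 'portrait', margin: '1cm' };
}

if (typeof phantom !== 'undefined') {      // only runs under PhantomJS
  var page = require('webpage').create();  // PhantomJS's built-in module
  var args = require('system').args;       // args[1] = URL, args[2] = output
  page.paperSize = paperSettings();
  page.open(args[1], function (status) {
    if (status !== 'success') {
      phantom.exit(1);                     // page failed to load
    }
    page.render(args[2]);                  // .pdf extension selects PDF output
    phantom.exit();
  });
}
```

The single-use viewing token mentioned above would simply be appended to the URL passed as the first argument.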
Is it something you could create in Perl using PDF::API2 or PDF::Create? You can load and modify an existing PDF (handy if you want standard headers and footers) and then insert the relevant content. The learning curve can be a bit steep, but simple reports should be easy enough.
See PDF::TextBlock and PDF::Table too; they are great little helpers.
Consider this service: http://pdfmyurl.com/ . I tried many Perl modules, but they didn't solve my problem.
A big and general question, though NOT a discussion
A friend and I are discussing a web application under development. Currently it uses PHP, but the PHP doesn't store anything and it's all OAuth-based. The whole thing talks to an independent API; the PHP really just mirrors a lot of the JavaScript logic for browsers without JavaScript support.
If it were decided to enforce JavaScript as a requirement (let's not go into that... whole other issue):
Are there any technical, fundamental problems with serving the app as HTML+JavaScript hosted on a CDN? That is, 100% static JavaScript and HTML with no server-side logic, since the JavaScript is just as capable of making all the API calls as the PHP is. Are any existing sites doing this?
We can't think of any show-stoppers, but it seems like a scary thought to make a "web" app 100% client-side... so we're looking for more input.
(To clarify, the question is about deploying using ONLY JavaScript and HTML, abandoning all server-side processing apart from the JSON API.)
Thanks in advance!
One issue is search engines.
Search engine crawlers index the raw HTML source code of a web page. If you use JavaScript to load new data and generate new content, crawlers won't execute it, so that content won't get indexed.
However, Google offers a solution for this - read here: http://code.google.com/web/ajaxcrawling/
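The scheme works by letting crawlers rewrite "#!" URLs into a query parameter the server can answer with a static HTML snapshot. A sketch of the rewriting (this is an illustration of the convention, not Google's actual code):

```javascript
// Map a hashbang URL to its _escaped_fragment_ equivalent, per the
// AJAX-crawling convention: a crawler requests the rewritten URL and
// the server responds with pre-rendered HTML for that state.
function toEscapedFragment(url) {
  var i = url.indexOf('#!');
  if (i === -1) return url;                 // nothing to rewrite
  var base = url.slice(0, i);
  var fragment = url.slice(i + 2);
  var sep = base.indexOf('?') === -1 ? '?' : '&';
  return base + sep + '_escaped_fragment_=' + encodeURIComponent(fragment);
}

// toEscapedFragment('http://example.com/#!/posts/1')
//   → 'http://example.com/?_escaped_fragment_=%2Fposts%2F1'
```

The catch for a 100% static site is that answering the rewritten URL requires a server that can produce snapshots, which reintroduces some server-side work.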
Other than that, I can't think of any other issues.
Amazon has been offering this service on S3 for a little while now: http://aws.typepad.com/aws/2011/02/host-your-static-website-on-amazon-s3.html . Essentially it allows you to specify a default index page and error pages. Otherwise you just load your HTML onto S3 and point the www CNAME on your domain at the Amazon S3 bucket or CloudFront CDN.
The only thing that doesn't work this way is if a user types example.com instead of www.example.com; you need to ensure your DNS correctly forwards these to www. S3 also cannot serve a naked domain (http://example.com/).
As for how good an idea it is, it sounds good to us as well, and we are currently exploring the option. So far it appears to work fine. What we have done is set up beta.example.com to point to a CDN-hosted site (S3) and are testing to see whether it gives us everything we need. Performance is great, though!
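As a sketch of the pattern under discussion: the static page ships only HTML and JavaScript, and everything dynamic comes straight from the JSON API. The endpoint and markup below are placeholders, and the API would need to allow cross-origin reads (CORS or JSONP):

```javascript
// Turn API data into markup; all "templating" happens client-side.
function renderItems(items) {
  return items.map(function (item) {
    return '<li>' + item.title + '</li>';
  }).join('');
}

// In the browser: the only non-static dependency is the JSON API itself.
function loadItems(listElement) {
  return fetch('https://api.example.com/items')
    .then(function (res) { return res.json(); })
    .then(function (items) { listElement.innerHTML = renderItems(items); });
}
```

Everything above can be served byte-for-byte identical to every visitor, which is exactly what S3/CDN hosting is good at.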
Does anyone know how Disqus works?
It manages comments on a blog, but the comments are all held on a third-party site. It seems like a neat use of cross-site communication.
The general pattern used is JSONP.
It's actually implemented in a fairly sophisticated way (at least on the jQuery site): they defer loading the disqus.js and thread.js files until the user scrolls to the comment section.
The thread.js file contains JSON content for the comments, which is rendered into the page after it loads.
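A bare-bones version of the JSONP pattern looks like this. It is browser-only code, and the `callback` parameter name is the common convention, not Disqus's actual endpoint:

```javascript
// Append the callback name to the endpoint URL (pure helper).
function buildJsonpUrl(endpoint, callbackName) {
  var sep = endpoint.indexOf('?') === -1 ? '?' : '&';
  return endpoint + sep + 'callback=' + encodeURIComponent(callbackName);
}

// JSONP works because <script> tags are exempt from the same-origin
// policy: the third-party server replies with JS that calls our callback
// with the data as its argument.
function jsonp(endpoint, callback) {
  var name = 'jsonp_cb_' + Date.now();
  window[name] = function (data) {
    delete window[name];   // one-shot callback
    callback(data);
  };
  var script = document.createElement('script');
  script.src = buildJsonpUrl(endpoint, name);
  document.head.appendChild(script);
}
```

The deferred-loading trick mentioned above just means the script element isn't created until the comment section scrolls into view.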
You have three options when adding Disqus commenting to a site:
Use one of the many integrated solutions (WordPress, Blogger, Tumblr, etc. are supported)
Use the universal JavaScript code
Write your own code to communicate with the Disqus API
The main advantage of the integrated solutions is that they're easy to set up. In the case of WordPress, for example, it's as easy as activating a plug-in.
Having the ability to communicate with the API directly is very useful and offers two advantages over the other options. First, it gives you, as the developer, complete control over the markup. Second, you're able to process comments server-side, which may be preferable.
It looks like it uses the easyXDM library, which picks the best mechanism available in the current browser to communicate with the other site.
Quoting Anton Kovalyov's (a former engineer at Disqus) answer to the same question on a different site, which was really helpful to me:
Disqus is a third-party JavaScript application that runs in your browser and injects itself on publishers' websites. These publishers need to install a small snippet of JavaScript code that makes the first request to our servers and loads initial JavaScript loader. This loader then creates all necessary iframe elements, gets the data from our servers, renders templates and injects the result into some element on the page.
As you can probably guess there are quite a few different technologies supporting what seems like a simple operation. On the back-end you have to run and scale a gigantic web application that serves millions of requests (mostly read). We use Python, Django, PostgreSQL and Redis (for our realtime service).
On the front-end you have to minimize your payload, make sure your app is super fast and that it doesn't break in extremely hostile environments (you will be surprised how screwed up publisher websites can be). Cross-domain communication—ability to send messages from hosting website to your servers—can be tricky as well.
Unfortunately, it is impossible to explain how everything works in a comment on Quora, or even in an article. So if you're interested in the back-end side of Disqus just learn how to write, run and operate highly-scalable websites and you'll be golden. And if you're interested in the front-end side, Ben Vinegar and myself (both front-end engineers at Disqus) wrote a book on the topic called Third-party JavaScript (http://thirdpartyjs.com/).
I'm planning to read the book he mentioned; I imagine it will be quite helpful.
Here's also a link to the official answer to this question on the Disqus site.
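For reference, the "small snippet of JavaScript code" the quote describes boils down to something like this sketch (the shortname is a placeholder, and Disqus's real embed snippet differs in its details):

```javascript
// URL of the Disqus loader for a given publisher shortname.
function disqusLoaderUrl(shortname) {
  return 'https://' + shortname + '.disqus.com/embed.js';
}

// Inject the loader asynchronously; the loader itself then creates the
// iframes, fetches the comment data, and renders it into the page.
function embedDisqus(shortname) {
  var s = document.createElement('script');
  s.async = true;
  s.src = disqusLoaderUrl(shortname);
  document.head.appendChild(s);
}
```

Everything heavier, such as the templates, the realtime updates, and the cross-domain messaging, lives in the code that this loader pulls in.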
Short answer? AJAX. You get your own URL, e.g. "site.com/?comments=ID", included via JavaScript... but for real-time updates like that, you would need a polling server.
I think they keep the content on their site, and your site only sends and receives the data to/from Disqus. Now I wonder what happens if you decide you want to bring your commenting in-house without losing all the existing comments! How easily could you get to your data, I wonder? They claim the data belongs to you, but they have control over it, and there isn't much explanation on their site about this.
I always leave comments on the Disqus platform. Sometimes a comment seems to be removed once you refresh, and sometimes it's not. I think the ones that are removed are being held for moderation without it saying so.