Connecting to an HTTPS service with SproutCore - javascript

I'm building a web app that requires me to connect to an HTTPS service to get some data. The web app is to be built using SproutCore. Now, I'm super new to SproutCore and haven't been able to connect to the service. Some folks on the IRC channel were super helpful and told me that to connect to an HTTPS service, I need to add a line to my Buildfile that says:
proxy '/path', :to => "https://myWebService.com", :secure => true
I did that. However, if I try to navigate to the url using:
SC.Request.getUrl('/path').notify(this, 'notifyMe').send();
I get a 404:
GET http://localhost:4020/path 404 (Not Found).
Any idea as to how I can connect to my HTTPS service? Additionally, I would like to state that once I connect, I will need to authenticate with it using basic auth (username and password).
Thanks all!
EDIT: Just wanted to mention that this request is cross-domain. I was under the impression that GET worked just fine cross-domain.

Change:
proxy '/path', :to => "https://myWebService.com", :secure => true
to
proxy '/', :to => "myWebService.com", :secure => true
then
SC.Request.getUrl('/path').notify(this, 'notifyMe').send();
should work
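On the basic-auth part of the question: one approach is to build the Authorization header value yourself and attach it per request through SC.Request's chainable header() API. A minimal sketch; the user and password shown are placeholders:

```javascript
// Helper to build a Basic auth header value (RFC 7617); btoa is
// available in browsers, where SproutCore apps run.
function basicAuthHeader(user, pass) {
  return 'Basic ' + btoa(user + ':' + pass);
}

// Hypothetical usage with the proxied path from the question:
// SC.Request.getUrl('/path')
//   .header('Authorization', basicAuthHeader('myUser', 'myPass'))
//   .notify(this, 'notifyMe')
//   .send();
```

Since the request goes through the Buildfile proxy, the header travels with it to the end server.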

Related

How to use proxy and ignore specific request ssl errors

Good day, I am trying to connect to a third-party REST API which is quite old and has an expired certificate. I am using Node.js and axios to send requests, which at the moment looks something like this:
agent = new HttpsProxyAgent({ host, port, auth: `${uname}:${pass}` });
this.client = axios.create({
  baseURL: this.baseUrl,
  headers: this.headers,
  httpsAgent: agent,
});
The agent above creates a proxy agent so my requests tunnel through that host; I later mount it on axios. I am using the package https-proxy-agent. Now the proxy works fine and all, but it throws a "certificate has expired" error. I have tried to use rejectUnauthorized: false, but that didn't work at all. I found one more solution, which is to use NODE_TLS_REJECT_UNAUTHORIZED=0, but that is widely described as dangerous to use on a server, and I also wonder whether using this environment variable would make client requests visible when intercepting.
What I need here is to connect through the proxy and ignore certificate errors only for this axios client. If there are solutions for other HTTP clients, feel free to share; thank you very much.
Managed to solve the issue with a different package.
import { getProxyHttpAgent } from 'proxy-http-agent';

const proxyUrl = `http://${host}:${port}`;
agent = getProxyHttpAgent({
  proxy: proxyUrl,
  endServerProtocol: 'https:',
  rejectUnauthorized: false,
});
Setting the agent with this package works great, except that for some reason I have to use a proxy without authentication; otherwise it throws a weird SSL error which looks like this:
tunneling socket could not be established, cause=write EPROTO 20920:error:1408F10B:SSL routines:ssl3_get_record:wrong version number:c:\ws\deps\openssl\openssl\ssl\record\ssl3_record.c:332:.

Cookie set by Flask app sent but not stored

I have a backend Flask app running on localhost:3000 and a React front-end app running on localhost:5000. In my backend app I am using Flask's 'Response.set_cookie' to set a cookie:
resp = make_response({}, 200)
resp.set_cookie('my_cookie_name', 'my_val', max_age=604800, domain='127.0.0.1', samesite='Lax', secure=None, httponly=None)
I am also allowing cross-origin for all responses in my flask app as follows:
# Child class of Flask to override some features
class TailoredFlask(Flask):
    # Override make_response
    def make_response(self, rv):
        # Call default version from parent
        resp = super().make_response(rv)
        # Add CORS headers to every response
        resp.headers["Access-Control-Allow-Origin"] = "*"
        resp.headers["Access-Control-Allow-Methods"] = "GET,POST,OPTIONS,HEAD"
        resp.headers["Access-Control-Allow-Headers"] = "Origin, X-Requested-With, Content-Type, Accept, Authorization"
        return resp
My client accesses my flask cookie endpoint with a call to fetch.
In the Chrome dev tools I can see that the cookie is sent with the HTTP response from my backend. It is visible on the Network -> Cookies tab when I select the request to my backend. However, if I go to the Application tab in the dev tools, my cookie is not there.
It seems like Chrome is silently discarding my cookie. I have seen several similar issues here on SO, but none of them seem to explain what is going on or provide a solution to my issue.
I'm also confused about the cookie options. There is a 'domain' option which I've read allows cross-domain operation for the cookie. However, everything is running on localhost, so I feel that I shouldn't need this unless the port is causing issues. I have also read that the port should not be included in the cookie 'domain' field.
If anyone can help to explain this to me I would greatly appreciate it because I'm just going round in circles with this stuff.
One more thing to note: I am pointing the browser at 'localhost', but the API call to my backend and the cookie domain both use '127.0.0.1', since I've read elsewhere that the 'domain' field must have at least two dots in it. (I don't have a choice in the browser URL since I am using the AWS Cognito login UI to redirect to my app after login. Cognito allows http for 'localhost' but only allows https for '127.0.0.1', so I have to use 'localhost' for development.) Could the mismatch between the browser URL and the cookie domain be causing this issue? Or is there something else that I'm missing?
Ok, so I think I now understand what's going on here, although I don't think there's a fix for my specific problem. As described in this thread, browsers (including Chrome) will not allow a domain of 'localhost' within a cookie. (I just wish there was a message in the console or something to indicate why the cookie is not being saved, rather than a silent fail!)
There are various suggestions for workarounds, such as using '.app.localhost' to access the application. Unfortunately this is not an option for me as I am redirecting to my front-end app from AWS Cognito, and the only domain that is supported with HTTP (rather than HTTPS) is 'localhost'. Variants such as '.app.localhost' or '127.0.0.1' are not allowed.
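One more thing worth checking in this setup: a cross-origin cookie is only stored if the request itself opts into credentials, and a credentialed request may not be answered with Access-Control-Allow-Origin: * (as the Flask subclass above does); the server must echo the exact origin and send Access-Control-Allow-Credentials: true. A sketch of the client side; the URL and endpoint are placeholders:

```javascript
// Options for a credentialed cross-origin fetch; without
// credentials: 'include' the browser neither stores nor sends
// cookies for this request.
const credentialedOptions = {
  method: 'POST',
  credentials: 'include',
};

// Hypothetical usage against the backend from the question:
// fetch('http://127.0.0.1:3000/login', credentialedOptions);
```

With this in place, a silently discarded cookie usually points at the cookie attributes (domain, SameSite, Secure) rather than CORS.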

Access-Control-Allow-Origin is not * but Chrome insists it is (after upgrading apollo-server)

There were some features I wanted from apollo-server, so I spent some time refactoring my code to it (I was previously using express-graphql). The only problem now is a CORS issue between my web app (using apollo-client) and the server for authenticated requests. I recall having this problem back in version 1.x and spent a lot of time wrestling with it, but my previous solution does not work in 2.x, and I would greatly appreciate help from anyone who has managed to get this to work.
my webapp is hosted on localhost:8081
my server is hosted on localhost:4000
Following their code verbatim from here, I have the following:
Client code
const client = new ApolloClient({
  link: createHttpLink({
    uri: 'http://localhost:4000/graphql',
    credentials: 'include'
  })
})
Server code
// Initializing express app code
var app = ....
// Using cors
app.use(cors({
  credentials: true,
  origin: 'http://localhost:8081',
}));
// Apollo Server create and apply middleware
var apolloServer = new ApolloServer({ ... });
apolloServer.applyMiddleware({ app })
When the client queries the backend via apollo-client, I get the following error in Chrome:
Yet in my server code I am explicitly setting the origin. When I inspect the network request in the console, I see a contradictory story as well: Access-Control-Allow-Origin is explicitly set to http://localhost:8081.
I don't encounter this problem when using Insomnia to query my backend server and get the expected results. Does anyone have any experience setting up apollo on client and server on localhost and successfully authenticating?
I'm actually just dumb. I was looking at the wrong response; sideshowbarker was onto it. apollo-server v2 automatically overrides CORS and sets the Access-Control-Allow-Origin header to * by default. Even if your express app specifies CORS and you apply it as middleware to the apollo server, this setting will be overridden for GraphQL routes. If anyone faces this problem, hopefully this'll save some time:
https://github.com/apollographql/apollo-server/issues/1142
You can now specify CORS explicitly for apollo-server in its applyMiddleware method:
https://www.apollographql.com/docs/apollo-server/api/apollo-server.html#Parameters-2
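A sketch of what that looks like; the option names follow the apollo-server v2 docs, and the values mirror the Express cors config from the question:

```javascript
// CORS options passed directly to applyMiddleware so apollo-server v2
// uses them for the GraphQL route instead of its default '*'.
const corsOptions = {
  origin: 'http://localhost:8081', // the web app's origin
  credentials: true,               // allow cookies / Authorization headers
};

// Hypothetical usage with the server instance from the question:
// apolloServer.applyMiddleware({ app, cors: corsOptions });
```

With credentials: true, the browser requires the exact origin to be echoed rather than the wildcard, which is why the default * breaks authenticated requests.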

Slack oauth.access API returns bad_redirect_uri without requesting a redirect_uri

Trying to set up OAuth with Slack for a custom app, and Slack's API is returning {"ok":false,"error":"bad_redirect_uri"}. I'm not setting a redirect_uri for oauth.access; even if I do set one, I still get the same error.
The app is configured to allow localhost and a public domain.
This is the request I am doing
const res = await request({
  method: 'post',
  uri: 'https://slack.com/api/oauth.access',
  auth: {
    user: process.env.SLACK_CLIENT_ID,
    pass: process.env.SLACK_CLIENT_SECRET,
  },
  form: {
    code,
    // redirect_uri: 'http://localhost:3100',
  },
});
To kick off the auth flow I am calling this URL from the browser:
https://slack.com/oauth/authorize?client_id=XXX&scope=commands,im:read,im:write&redirect_uri=http://localhost:3100/integrations/slack.request&state=ID
You cannot use localhost as the redirect_uri.
If you want to use OAuth from your local PC, I would suggest installing a tunneling tool like ngrok, which allows you to expose your local PC to the Internet in a relatively safe way. ngrok is also mentioned in one of the official Slack tutorials as an example of how to set up a local development environment with Slack.
Also make sure you have a web server running on your PC.

NodeJS: Send HTTPS request but get HTTP

I am building a website using Node.js, and I deploy it to Heroku. But when I open the website, something goes wrong. Here is the problem:
Code:
In the main source file of my web:
app.get('/', (req, res) => {
  var data = {
    rootURL: `${req.protocol}://${req.get('Host')}`,
  };
  res.render('home.html', data);
});
Then, in home.html, I include the following script:
<script type="text/javascript">
  $.getJSON('{{rootURL}}' + '/about', {}, function(data){
    // Code here is deleted for now.
  }).fail(function(evt) {
    // Code here is deleted for now.
  });
</script>
Here I use the hbs template engine, so {{rootURL}} is equal to the 'rootURL' property of the 'data' object rendered along with the 'home.html' page.
The '/about' endpoint is one of the APIs I designed for my web app. It basically sends back some information about the website itself, wrapped in JSON.
Then here comes the problem. The code works fine locally, and works well when I send HTTP requests instead of HTTPS to Heroku. But if I send HTTPS requests to Heroku, I get 'Mixed Content' errors:
Errors I get in Chrome Console.
I then switched to the 'Elements' tab in the developer tools, and I saw this:
The scheme is HTTP, not HTTPS!
I'm very confused here. I just grab the 'protocol' property from the 'req' object and fill in the template with it. So I assumed that if I enter '[my-website-name].herokuapp.com' with the 'https' scheme in my Chrome browser, my Node.js app deployed on Heroku would see 'https' for req.protocol. But apparently that's not the case. What is wrong here?
I assume you don't actually have an SSL certificate? Heroku will be providing the HTTPS, but it will then translate it to normal HTTP internally when it hits your express endpoint, which is why it sees req.protocol as HTTP.
Is there any point in even providing the URL to getJSON? Why not just send it $.getJSON('/about', callback) and let the browser handle that?
Also, you haven't hidden your URL in that first image you uploaded, if that's what you were intending.
The Heroku router is doing SSL termination, so no matter whether you connect via http or https, you get http on your side. The original protocol is, however, set in the X-Forwarded-Proto header. You need to use this value.
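A minimal sketch of reading that header (the header name is the de-facto standard set by terminating proxies such as the Heroku router; with Express you could instead enable app.set('trust proxy', 1) so req.protocol resolves it for you):

```javascript
// Returns the protocol the client actually used, preferring the
// X-Forwarded-Proto header set by a terminating proxy; falls back
// to the connection's own protocol when the header is absent.
function clientProtocol(req) {
  const forwarded = req.headers['x-forwarded-proto'];
  // The header can list several hops, e.g. "https,http"; the first
  // entry is the protocol the original client used.
  return forwarded ? forwarded.split(',')[0].trim() : req.protocol;
}
```

Using this in place of req.protocol when building rootURL would make the rendered URLs follow the scheme the browser actually requested.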