CCTV webserver javascript

Bit of a strange one here: I have a CCTV system and contacted the manufacturers to ask if there was an API. The answer was no.
I've been trying to understand how I can take the live JPEG picture and use it in my own app (C#).
Here is a link to the live-view page that displays the live feeds: http://pastebin.com/jCp4jZRh
The line I'm interested in is:
img_buf[0].src = "ivop.get?action=live&piccnt=0&THREAD_ID=" + thd_id;
Now piccnt seems to exist to stop browsers caching the data, so this number keeps changing, and thd_id seems to be the channel number. When trying to access this URL I get the following message:
Authentication Error:Access Denied, authentication error
Even if I log in first and then try the above URL in my own context, I still get the access denied message.
Here's the source of the login page: http://pastebin.com/q7nLJ4tk
Here's the source of the md5.js file: http://pastebin.com/du1ggaQB
I'm just a little stuck on how to authenticate and then display the feed. Does anyone have any pointers?
Thanks.

I answered a similar question a while back, and the solution ended up being that you had to set the referrer.
In any case, to find your solution, download a copy of Fiddler.
Once it's running, hit your camera page and you will see several requests. When you find one of the requests for ivop.get, drag it into the Request Builder and execute it a second time.
If it still works after executing it a second time (check using the inspectors), then start changing the headers, removing bits one by one until you find the key element. I suspect there will be either a cookie or a referrer that is required.
Once you have figured out those elements, it should be easy to make the appropriate request from your application.
If you can post a live URL, I can help you with this.
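If you'd rather script that trial-and-error once you've captured a working request, here is a rough sketch in Python with requests. All the header names and values below are placeholders for whatever Fiddler actually captured (the liveview.htm referrer is an assumption):

import requests

# Headers copied from the working request captured in Fiddler (placeholder values).
captured = {
    "Referer": "http://<IPADDRESS>/liveview.htm",
    "Cookie": "<COOKIE FROM FIDDLER>",
    "User-Agent": "Mozilla/5.0",
}
url = "http://<IPADDRESS>/ivop.get?action=live&piccnt=0&THREAD_ID=<THREAD_ID>"

# Drop one header at a time; whichever removal breaks the request is a key element.
for name in list(captured):
    trimmed = {k: v for k, v in captured.items() if k != name}
    r = requests.get(url, headers=trimmed)
    ok = r.status_code == 200 and b"Authentication Error" not in r.content
    print(name, "optional" if ok else "required")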

Best guess given the limited info available: they're checking the referrer. You can inspect the details of the requests using Fiddler (you can even replay the request with a slightly different referrer to confirm whether that's what's happening). If this is it, you can set the referrer on HttpWebRequest: http://msdn.microsoft.com/en-us/library/system.net.httpwebrequest.referer.aspx

There are many possibilities, and without having access to the source code of the CCTV server, it's hard to say which one it might be.
I'd suggest popping open an HTTP header sniffing utility (such as https://addons.mozilla.org/en-US/firefox/addon/live-http-headers/ for Firefox) and watching the headers for the successful IMG request. Then replay that request using netcat or curl. Once you've got that working, try removing HTTP headers one at a time (you're probably sending some kind of session ID, HTTP referrer, etc.; these may all be important to the CCTV server).
In any case, it's almost certainly going to be important that you at least authenticate with mlogin.get and pass along the resulting session ID in subsequent requests.
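A minimal sketch of that flow with Python's requests, assuming the DVR's session is cookie-based (the endpoint names come from the page source above; the account, password hash, and thread ID are placeholders):

import requests

# Reusing one Session means any session cookie set by mlogin.get is
# automatically sent along with the later image request.
s = requests.Session()
s.get("http://<IPADDRESS>/mlogin.get",
      params={"account": "<USERNAME>", "passwd": "<HEX_HMAC_MD5>", "Submit": "Login"})
img = s.get("http://<IPADDRESS>/ivop.get",
            params={"action": "live", "piccnt": "0", "THREAD_ID": "<THREAD_ID>"})
with open("snapshot.jpg", "wb") as f:
    f.write(img.content)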

Whilst this may be old, I had the same problem. The DVR requires an authenticated login, with a key sent in the URL during the first redirect to the login page and the password in hex_hmac_md5. Below is a Python function to log in, retrieve two channel images, and then log out:
import datetime
import hashlib
import hmac
import re

import requests
from requests.exceptions import HTTPError

def getcamimg():
    baseurl = 'http://<IPADDRESS>/'
    # The login page embeds a per-session key in its HTML; extract it.
    content = str(getUrl(baseurl))
    x = re.search(r"key=(\w+)", content)
    keystr = x.group(1)
    key = bytes(keystr, 'utf-8')
    password = bytes('<YOURPASS>', 'utf-8')
    # The DVR expects the password as hex HMAC-MD5, keyed with the session key.
    hmacobj = hmac.new(key, password, hashlib.md5)
    hmacpass = hmacobj.hexdigest()
    #-----------------------------------------------------
    loginurl = baseurl + 'mlogin.get?account=<USERNAME>&passwd=' + hmacpass + '&key=' + keystr + '&Submit=Login'
    lcontent = str(getUrl(loginurl))
    if "another administrator" in lcontent:
        print("another admin online")
        return
    y = re.search(r'href="([\w\d\.\?=&_-]+)"', lcontent)
    finalurl = baseurl + y.group(1)
    z = re.search(r'id=(\w+)', lcontent)
    thid = z.group(1)
    #----------------------------------------------------- grab channel 1
    imgurl = baseurl + "ivop.get?action=live&piccnt=1&THREAD_ID=" + thid
    imgcontent = getUrl(imgurl)
    ctime = datetime.datetime.today().strftime("%Y%m%d%H%M%S")
    with open("chan1_" + ctime + ".jpg", "wb") as file0:
        file0.write(imgcontent)
    #----------------------------------------------------- switch to channel 3
    chanset = "showch.set?channel=3&THREAD_ID=" + thid
    getUrl(baseurl + chanset)
    #-----------------------------------------------------
    icontent1 = getUrl(imgurl)
    with open("chan3_" + ctime + ".jpg", "wb") as file1:
        file1.write(icontent1)
    #----------------------------------------------------- log out so the session is freed
    logout = "Forcekick.set?ITSELF=1&Logout=Logout&THREAD_ID=" + thid
    getUrl(baseurl + logout)

def getUrl(url):
    try:
        response = requests.get(url)
        response.raise_for_status()
    except HTTPError as http_err:
        print('HTTP error occurred: ' + str(http_err))
    except Exception as err:
        print('Other error occurred: ' + str(err))
    else:
        return response.content

Related

Instagram ?__a=1 url not working anymore & problems with graphql/query to get data

Update 19 April
After a few days of using the ig_pr cookie, it was blocked two days ago. It looks like the only way to get the data now is to use a sessionid with a specific value.
Original
I was using the Instagram ?__a=1 URL to read all the posts of Instagram users.
A few hours ago there was a change in the response, and it no longer allows me to use max_id to paginate.
Before, I would send a request to
https://www.instagram.com/{{username}}/?__a=1
and, using graphql.edge_owner_to_timeline_media.page_info.end_cursor from the response, I would call the same page with a new max_id:
https://www.instagram.com/{{username}}/?__a=1&max_id={{end_cursor}}
Now the end_cursor changes on each call, and max_id is not working.
Please help :)
The query_hash does not change, at least not in the past few days. It indicates what TYPE of query it is.
Below are the 4 query types I know of; hope these help.
Load more media under https://www.instagram.com/someone/?__a=1
https://www.instagram.com/graphql/query/?query_hash=472f257a40c653c64c666ce877d59d2b&variables={"id":"93024","first":12,"after":"XXXXXXXX"}
(Instagram has blocked the above access since 2018-04-12. You have to remove the __a=1 and extract the JSON inside a script block: look for "window._sharedData" in the HTML.)
Load more media under https://www.instagram.com/explore/tags/iphone/?__a=1
https://www.instagram.com/graphql/query/?query_hash=298b92c8d7cad703f7565aa892ede943&variables={"tag_name":"iphone","first":12,"after":"XXXXXXXX"}
Load more media under https://www.instagram.com/explore/locations/703629436462521/?__a=1
https://www.instagram.com/graphql/query/?query_hash=ac38b90f0f3981c42092016a37c59bf7&variables={"id":"703629436462521","first":12,"after":"XXXXXXXX"}
Load more comments for https://www.instagram.com/p/Bf-I2P6grhd/
https://www.instagram.com/graphql/query/?query_hash=33ba35852cb50da46f5b5e889df7d159&variables={"shortcode":"Bf-I2P6grhd","first":20,"after":"XXXXXXXX"}
where XXXXXXXX is the end_cursor from the original request
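As a rough illustration (note the edit below reporting that this later stopped working), the first query type could be scripted in Python like so; the id is the example value used above, and the response field paths are the ones mentioned elsewhere in this thread:

import json
import requests

# query_hash for "load more profile media", as listed above.
QUERY_HASH = "472f257a40c653c64c666ce877d59d2b"

def fetch_page(user_id, after=None, first=12):
    variables = {"id": user_id, "first": first}
    if after:
        variables["after"] = after  # end_cursor from the previous page
    r = requests.get(
        "https://www.instagram.com/graphql/query/",
        params={"query_hash": QUERY_HASH, "variables": json.dumps(variables)},
    )
    r.raise_for_status()
    return r.json()

page = fetch_page("93024")  # "93024" is the example id used above
info = page["data"]["user"]["edge_owner_to_timeline_media"]["page_info"]
if info["has_next_page"]:
    next_page = fetch_page("93024", after=info["end_cursor"])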
Edit 15/03
NOT WORKING ANYMORE
It seems Instagram changed their API again; now it gives a CORS error.
As of 2 February 2021, I have found a solution.
Instead of using
https://www.instagram.com/username/?__a=1
which asks for a login, just adding /channel seems to make it work, like so:
https://www.instagram.com/username/channel/?__a=1
I fixed it by adding &__d=dis, for example like this: https://www.instagram.com/p/Ch656GRoyuO/?__a=1&__d=dis
I just came across the same issue.
It looks like they just changed their private API by removing max_id.
Their website seems to have replaced the old API with the graphql API:
https://www.instagram.com/graphql/query/?query_hash=472f257a40c653c64c666ce877d59d2b&variables={"id":"111","first":12,"after":"xxx"}
id: user ID,
first: amount of nodes to get,
after: the 'end_cursor' you can get from data['user']['edge_owner_to_timeline_media']['page_info']['end_cursor']
use either query_hash or query_id
query_hash: 472f257a40c653c64c666ce877d59d2b
query_id: 17888483320059182
I have no idea how long that query_id/query_hash will work; it's up to Instagram. They will eventually change it.
Updated 4/8/2018 - Before, FB didn't check any cookies, but it looks like they added a quick validation. Try adding ig_pr=2 to the request cookie when sending your API call. This quick fix works for me. Who knows when FB will change it again.
As long as FB doesn't enforce login for these basic APIs, there will always be an easy workaround.
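For instance, with Python's requests that cookie tweak might look like the sketch below; whether it still works is entirely up to Instagram:

import requests

# The ig_pr=2 cookie is the quick validation workaround described above.
r = requests.get(
    "https://www.instagram.com/instagram/?__a=1",
    cookies={"ig_pr": "2"},
)
print(r.status_code)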
Translated some of the folks' code to PHP:
<?php
function getPublicInfo($username) {
    $url = sprintf("https://www.instagram.com/%s", $username);
    $content = file_get_contents($url);
    // Extract the JSON assigned to window._sharedData in the page HTML.
    $content = explode("window._sharedData = ", $content)[1];
    $content = explode(";</script>", $content)[0];
    $data = json_decode($content, true);
    return $data['entry_data']['ProfilePage'][0];
}
Not sure for how long it's gonna work. For my small project it does the job for now. The result is very similar (if not equal) to the one at the URL: instagram.com/{user}/?__a=1
The main problem with using graphql/query is that I only have the username; to extract the userId and the latest posts like we used to do with ?__a=1, we have to fetch the Instagram user page and extract _sharedData.
JavaScript
let url = "https://www.instagram.com/" + username;
$.ajax({
    type: 'GET',
    url: url,
    error: function () {
        //..
    },
    success: function (data) {
        data = JSON.parse(data.split("window._sharedData = ")[1].split(";</script>")[0]).entry_data.ProfilePage[0].graphql;
        console.log(data);
    }
});
After getting all this data, we can call graphql/query (not on the client side).
As of 21 May 2021, using /channel will make it work, but only if you send a browser User-Agent header with your request, for example with curl:
curl -H "User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36" https://www.instagram.com/{username}/channel/?__a=1
For pagination you can now use ?__a=1&page=2
This answer does not directly address the question, but I'm posting it because someone might benefit from it. As of the current date, 12 April 2018, the load-more APIs will not work without a Cookie header set.
Below is some code for fetching the Instagram public APIs.
let url = "https://www.instagram.com/explore/";
if (payload.type == 'location') {
url = url + "locations/" + payload.location_id + "/" + payload.location_name + "/?__a=1";
} else if (payload.type == 'hashtag') {
url = url + "tags/" + payload.hashtag + "/?__a=1";
} else { //profile
url = "https://www.instagram.com/" + payload.user_name + "/?__a=1";
}
request(url, function (error, response, body) {
body = JSON.parse(body);
//below are params which are required for load more pagination payload
paginationData = {
has_next_page: body.data.user.edge_owner_to_timeline_media.page_info.has_next_page,
end_cursor: body.data.user.edge_owner_to_timeline_media.page_info.end_cursor
};
//user.edge_owner_to_timeline_media for profile posts,
//hashtag.edge_hashtag_to_media for hashtag posts
//location.edge_location_to_media for location posts
});
and for loading more items, I am using:
let url = "https://www.instagram.com/graphql/query/";
if (payload.type == 'location') {
    let variables = encodeURIComponent('{"id":"' + payload.pagination.id + '","first":50,"after":"' + payload.pagination.end_cursor + '"}');
    url = url + "?query_hash=ac38b90f0f3981c42092016a37c59bf7&query_id=17865274345132052&variables=" + variables;
} else if (payload.type == 'hashtag') {
    let variables = encodeURIComponent('{"tag_name":"' + payload.pagination.tag_name + '","first":50,"after":"' + payload.pagination.end_cursor + '"}');
    url = url + "?query_hash=298b92c8d7cad703f7565aa892ede943&query_id=17875800862117404&variables=" + variables;
} else { //profile
    let variables = encodeURIComponent('{"id":"' + payload.pagination.owner_id + '","first":50,"after":"' + payload.pagination.end_cursor + '"}');
    url = url + "?query_hash=472f257a40c653c64c666ce877d59d2b&query_id=17888483320059182&variables=" + variables;
}
let options = {
    url: url,
    headers: {
        Cookie: "Cookie value which I copied from my logged-in Instagram browser window"
    }
};
request(options, function (error, response, body) { });
It seems query_id is no longer required and query_hash is sufficient now. I'm not certain, though; it also seems to work without either of them for me.
Now it returns this response and doesn't work:
for (;;);{"__ar":1,"error":1357004,"errorSummary":"Sorry, something went wrong","errorDescription":"Please try closing and re-opening your browser window.","payload":null,"hsrp":{"hblp":{"consistency":{"rev":1006107013}}},"lid":"7136695846284496053"}
URL: https://www.instagram.com/instagram/?__a=1
URL: https://www.instagram.com/instagram/channel/?__a=1
As of the current date 12 April 2018, 4:00PM (GMT+1), API queries work without any cookie. I have no idea what they're doing...
Just try this link in private navigation.
I faced a similar problem: I was unable to parse the JSON using "?__a=1" and ended up with JSONDecodeError: Expecting value. I searched in many places and finally found the catch: setting a header solved the problem. Try this; it worked for me:
import requests

link = 'http://instagram.com/instagram/?__a=1'
headers = {'User-Agent': 'Mozilla'}
r = requests.get(link, headers=headers)
data = r.json()
100% working as of now
It can be circumvented using the session ID.
It also works from an IP that has never logged in.
I sent 10K queries and it didn't give any errors.
Instagram API curl request
Actually, the position and tag changed. If you look closely, we don't require /channel or any URL change at all: the data is present under the video_versions attribute, with many video qualities.
But sometimes ?__a=1 works normally, i.e. you can see the shortcode at the start.
Currently https://www.instagram.com/username/channel/?__a=1 seems to be working. However, after a few minutes of trying this URL, it may still ask for a login. In that case, changing the word channel to reels fixes the problem.
For example:
https://www.instagram.com/username/reels/?__a=1
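Pulling the URL variants from this thread together, a small helper could try each form in turn with a browser User-Agent. This is only a sketch; which variants still work (if any) is entirely up to Instagram:

import requests

HEADERS = {"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) "
                         "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36"}

def try_profile_json(username):
    # URL variants mentioned in this thread, in rough chronological order.
    variants = [
        f"https://www.instagram.com/{username}/?__a=1",
        f"https://www.instagram.com/{username}/?__a=1&__d=dis",
        f"https://www.instagram.com/{username}/channel/?__a=1",
        f"https://www.instagram.com/{username}/reels/?__a=1",
    ]
    for url in variants:
        r = requests.get(url, headers=HEADERS)
        if r.ok:
            try:
                return r.json()  # fails if Instagram served a login page instead
            except ValueError:
                continue
    return None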
It still works if you use residential proxies, for example via the https://webscraping.ai/ API (note that the url parameter should be URL-encoded, and the whole URL quoted so the shell doesn't treat the & as a command separator):
$ curl "https://api.webscraping.ai/html?proxy=residential&api_key=test-api-key&url=https%3A%2F%2Fwww.instagram.com%2Fapple%2F%3F__a%3D1"
{"seo_category_infos":[["Beauty","beauty"],["Dance & Performance","dance_and_performance"],["Fitness","fitness"],["Food & Drink","food_and_drink"],["Home & Garden","home_and_garden"],["Music","music"],["Visual Arts","visual_arts"]],"logging_page_id":"profilePage_5821462185","show_suggested_profiles":false,"graphql":{"user":{"biography":"Everyone has a story to tell. \nTag #ShotoniPhone to take part.","blocked_by_viewer":false,...

No login required but still I see Session Expired Response in JMeter

I am automating load tests using JMeter. I have a very simple page where I'm putting data into input fields and clicking buttons.
I see the following response data:
function handleBack() { var programGroup = document.getElementById('programGroupName').value; if(programGroup == 'SE') { var url = $('#fundraiserPageURL').val();; var index = url.lastIndexOf('/'); if (index > 0) { newurl = url.substring(0, index); } var action = newurl + "/decodeCheckOutDetails.action"; encodeCheckoutFRHttpSession(action, 'backFormIdFR'); } }
Sorry! the session expired, please try again.
Make sure to upgrade to the latest JMeter version (JMeter 3.3 as of now); judging by, for example, the HTTP Cookie Manager GUI, it appears you're sitting on an outdated version. Or at least make sure you use the following HTTP Cookie Manager settings in order to comply with RFC 6265:
Implementation: HC4CookieHandler
Policy: standard
along with the HttpClient4 implementation in the HTTP Request Defaults.
Also double-check that you have performed correlation of all dynamic values using suitable JMeter PostProcessors.

How to create a cross domain HTTP request

I have a website, and I need a way to get HTML data from a different website via an HTTP request. I've looked around for ways to implement it, and most suggest an AJAX call instead.
An AJAX call is blocked by LinkedIn, so I want to try a plain cross-domain HTTP request and hope it's not blocked one way or another.
If you have a server running and are able to run code on it, you can make the HTTP call server side. Keep in mind, though, that most sites only allow so many calls per IP address, so you can't serve a lot of users this way.
This is a simple HttpListener that downloads a website's content when the query string contains ?site=http://linkedin.com:
using System.IO;
using System.Net;

// set up a listener
using(var listener = new HttpListener())
{
    // on port 8080
    listener.Prefixes.Add("http://+:8080/");
    listener.Start();
    while(true)
    {
        // wait for a connection
        var ctx = listener.GetContext();
        var req = ctx.Request;
        var resp = ctx.Response;
        // default page with a link that calls back here with ?site=...
        var cnt = "<html><body><a href=\"?site=http://linkedin.com\">click me</a></body></html>";
        foreach(var key in req.QueryString.Keys)
        {
            if (key != null)
            {
                // if the url contains ?site=<some url to a site>
                switch(key.ToString())
                {
                    case "site":
                        // let's download it
                        using(var wc = new WebClient())
                        {
                            // store the html in cnt
                            cnt = wc.DownloadString(req.QueryString[key.ToString()]);
                        }
                        // when needed you can do caching or processing here
                        // of the results, depending on your needs
                        break;
                    default:
                        break;
                }
            }
        }
        // write whatever is in cnt back to the calling browser
        using(var sw = new StreamWriter(resp.OutputStream))
        {
            sw.Write(cnt);
        }
    }
}
To make the above code work you might have to set permissions for the URL; if you're on your development box, do:
netsh http add urlacl url=http://+:8080/ user=Everyone listen=yes
In production, use sane values for the user.
Once that is set, run the above code and point your browser to
http://localhost:8080/
(notice the / at the end)
You'll get a simple page with a link on it:
click me
Clicking that link will send a new request to the HttpListener, but this time with the query string site=http://linkedin.com. The server-side code will fetch the HTTP content at the given URL, in this case from LinkedIn.com. The result is sent back one-to-one to the browser, but you can do post-processing/caching etc., depending on your requirements.
Legal notice/disclaimer
Most sites don't like being scraped this way, and their Terms of Service might actually forbid it. Make sure you don't do illegal things that either harm site reliability or lead to legal action against you.

Weird (caching) issue with Express/Node

I've built an Angular/Express/Node app that runs in Google Cloud and currently uses a JSON file as the data source for my application. For some reason (and this only happens in the cloud), when saving data through an AJAX call and writing it to the JSON file, everything seems to work fine. However, when refreshing the page, the server (sometimes!) sends me the version from before the edit. I can't tell whether this is an Express, Node, or even Angular problem, but what I know for sure is that I'm checking the JSON that comes in the response from the server, and it really is sometimes the modified version and sometimes not, so it most probably isn't Angular cache related.
The GET:
router.get('/concerts', function (request, response) {
    delete require.cache[require.resolve('../database/data.json')];
    var db = require('../database/data.json');
    response.send(db.concerts);
});
The POST:
router.post('/concerts/save', function (request, response) {
    delete require.cache[require.resolve('../database/data.json')];
    var db = require('../database/data.json');
    var concert = request.body;
    console.log('Received concert id ' + concert.id + ' for saving.');
    if (concert.id != 0) {
        var indexOfItemToSave = db.concerts.map(function (e) {
            return e.id;
        }).indexOf(concert.id);
        if (indexOfItemToSave == -1) {
            console.log('Couldn\'t find concert with id ' + concert.id + ' in database!');
            response.sendStatus(404);
            return;
        }
        db.concerts[indexOfItemToSave] = concert;
    }
    else if (concert.id == 0) {
        concert.id = db.concerts[db.concerts.length - 1].id + 1;
        console.log('Concert id was 0, adding it with id ' + concert.id + '.');
        db.concerts.push(concert);
    }
    console.log("Added stuff to temporary db");
    var error = commit(db);
    if (error)
        response.send(error);
    else
        response.status(200).send(concert.id + '');
});
This probably doesn't say much, so if someone is interested in helping, you can see the issue live here. If you click on modify for the first concert, change the programme to something like asd, and then save, everything looks fine. But if you try refreshing the page a few times (usually up to 6-7 tries are needed), the old, unchanged programme is shown. Any clue or advice greatly appreciated. Thanks.
To solve: do not use local files to store data in the cloud! This is what databases are for!
What was actually the problem?
The problem was caused by the fact that App Engine had 2 VM instances running for my application. The POST request was sent to one instance, which did its job, saved the data by modifying its local JSON file, and returned a 200. However, after a few refreshes, load balancing caused the GET to arrive at the other machine, which has its own copy of the source code, including the initial, unmodified JSON. I am now using a MongoDB instance, and everything seems to be solved. Hopefully this discourages people who attempt to do the same thing I did.

Is it possible to control Firefox's DNS requests in an addon?

I was wondering if it was possible to intercept and control/redirect DNS requests made by Firefox?
The intention is to set an independent DNS server in Firefox (not the system's DNS server).
No, not really. The DNS resolver is made available via the nsIDNSService interface. That interface is not fully scriptable, so you cannot just replace the built-in implementation with your own Javascript implementation.
But could you perhaps just override the DNS server?
The built-in implementation goes from nsDNSService to nsHostResolver to PR_GetAddrByName (NSPR) and ends up in getaddrinfo/gethostbyname. And that uses whatever the system (or the library implementing it) has configured.
Any other alternatives?
Not really. You could install a proxy and let it resolve domain names (this requires some kind of proxy server, of course). But that is very much a hack and nothing I'd recommend (and what if the user already has a real, non-resolving proxy configured? You'd need to handle that as well).
You can detect the "problem loading page" and then probably use the redirectTo method on it.
Basically they all load an about:neterror URL with a bunch of info after it, e.g.:
about:neterror?e=dnsNotFound&u=http%3A//www.cu.reporterror%28%27afew/&c=UTF-8&d=Firefox%20can%27t%20find%20the%20server%20at%20www.cu.reporterror%28%27afew.
about:neterror?e=malformedURI&u=about%3Abalk&c=&d=The%20URL%20is%20not%20valid%20and%20cannot%
But this info is held in the docuri, so you have to work with that. Here's example code that will detect problem-loading pages:
var listenToPageLoad_IfProblemLoadingPage = function(event) {
    var win = event.originalTarget.defaultView;
    // Note: this is bad practice, as it returns the documentURI of the currently
    // focused tab. You should instead get the linkedBrowser for the tab via the
    // event, something like: event.originalTarget.linkedBrowser.webNavigation.document.documentURI
    // (untested, but it's got to be something like that).
    var docuri = window.gBrowser.webNavigation.document.documentURI;
    var location = win.location + ''; // append '' to coerce to a string so we can use string functions like location.indexOf
    if (win.frameElement) {
        // A frame within a tab was loaded. win should be the top window of
        // the frameset. If you don't want to do anything when frames/iframes
        // are loaded in this web page, uncomment the following line:
        // return;
        // Find the root document:
        //win = win.top;
        if (docuri.indexOf('about:neterror') == 0) {
            Components.utils.reportError('IN FRAME - PROBLEM LOADING PAGE LOADED docuri = "' + docuri + '"');
        }
    } else {
        if (docuri.indexOf('about:neterror') == 0) {
            Components.utils.reportError('IN TAB - PROBLEM LOADING PAGE LOADED docuri = "' + docuri + '"');
        }
    }
}
window.gBrowser.addEventListener('DOMContentLoaded', listenToPageLoad_IfProblemLoadingPage, true);
