I am automating load tests using JMeter. I have a very simple page where I enter data into input fields and click buttons.
I see the following response data:
function handleBack() {
    var programGroup = document.getElementById('programGroupName').value;
    if (programGroup == 'SE') {
        var url = $('#fundraiserPageURL').val();;
        var index = url.lastIndexOf('/');
        if (index > 0) {
            newurl = url.substring(0, index);
        }
        var action = newurl + "/decodeCheckOutDetails.action";
        encodeCheckoutFRHttpSession(action, 'backFormIdFR');
    }
}
Sorry! the session expired, please try again.
Make sure to upgrade to the latest JMeter version (JMeter 3.3 as of now); judging by the HTTP Cookie Manager GUI, it looks like you're on an outdated version. Or at least make sure you use the following HTTP Cookie Manager settings in order to comply with RFC 6265:
Implementation: HC4CookieHandler
Policy: standard
along with the HttpClient4 implementation in the HTTP Request Defaults.
Also double-check that you have correlated all dynamic values using suitable JMeter Post-Processors.
I have a website, and I need a way to get HTML data from a different website via an HTTP request. I've looked around for ways to implement it, and most suggest an AJAX call instead.
An AJAX call is blocked by LinkedIn, so I want to try a plain cross-domain HTTP request and hope it isn't blocked one way or another.
If you have a server running and are able to run code on it, you can make the HTTP call server-side. Keep in mind, though, that most sites only allow so many calls per IP address, so you can't serve a lot of users this way.
This is a simple HttpListener that downloads a website's content when the query string contains ?site=http://linkedin.com:
using System.IO;
using System.Net;

// set up a listener
using (var listener = new HttpListener())
{
    // on port 8080
    listener.Prefixes.Add("http://+:8080/");
    listener.Start();
    while (true)
    {
        // wait for a connection
        var ctx = listener.GetContext();
        var req = ctx.Request;
        var resp = ctx.Response;
        // default page: a link that calls us back with ?site=...
        var cnt = "<html><body><a href=\"?site=http://linkedin.com\">click me</a></body></html>";
        foreach (var key in req.QueryString.Keys)
        {
            if (key != null)
            {
                // if the url contains ?site=<some url to a site>
                switch (key.ToString())
                {
                    case "site":
                        // let's download it
                        var wc = new WebClient();
                        // store the html in cnt
                        cnt = wc.DownloadString(req.QueryString[key.ToString()]);
                        // when needed you can do caching or processing here
                        // of the results, depending on your needs
                        break;
                    default:
                        break;
                }
            }
        }
        // send whatever is in cnt back to the calling browser
        using (var sw = new StreamWriter(resp.OutputStream))
        {
            sw.Write(cnt);
        }
    }
}
To make the above code work you might have to set permissions for the URL. If you're on your development box, run:
netsh http add urlacl url=http://+:8080/ user=Everyone listen=yes
In production, use a sane value for the user.
Once that is set, run the above code and point your browser to
http://localhost:8080/
(notice the / at the end)
You'll get a simple page with a link on it:
click me
Clicking that link will send a new request to the HttpListener, but this time with the query string site=http://linkedin.com. The server-side code will fetch the HTTP content at the given URL, in this case from LinkedIn.com. The result is sent back one-to-one to the browser, but you can do post-processing/caching etc., depending on your requirements.
Legal notice/disclaimer
Most sites don't like being scraped this way, and their Terms of Service might actually forbid it. Make sure you don't do anything illegal that either harms site reliability or leads to legal action against you.
I was wondering if it is possible to intercept and control/redirect DNS requests made by Firefox.
The intention is to set an independent DNS server in Firefox (not the system's DNS server).
No, not really. The DNS resolver is made available via the nsIDNSService interface. That interface is not fully scriptable, so you cannot just replace the built-in implementation with your own JavaScript implementation.
But could you perhaps just override the DNS server?
The built-in implementation goes from nsDNSService to nsHostResolver to PR_GetAddrByName (nspr) and ends up in getaddrinfo/gethostbyname. And that uses whatever the system (or the library implementing it) has configured.
Any other alternatives?
Not really. You could install a proxy and let it resolve domain names (this requires some kind of proxy server, of course). But that is very much a hack and nothing I'd recommend (and what if the user already has a real, non-resolving proxy configured? You would need to handle that as well).
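For what it's worth, here is a minimal sketch of that proxy workaround in a legacy add-on context. It assumes you run your own SOCKS proxy (the host and port below are placeholders) and simply flips the relevant preferences so name resolution happens on the proxy instead of the system resolver:
Components.utils.import("resource://gre/modules/Services.jsm");
// Manual proxy configuration pointing at a hypothetical local SOCKS proxy.
Services.prefs.setIntPref("network.proxy.type", 1);
Services.prefs.setCharPref("network.proxy.socks", "127.0.0.1");
Services.prefs.setIntPref("network.proxy.socks_port", 9050);
// Let the proxy resolve hostnames instead of the local resolver.
Services.prefs.setBoolPref("network.proxy.socks_remote_dns", true);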
You can detect the "problem loading page" and then probably use the redirectTo method on it.
Basically, they all load an about:neterror URL with a bunch of info after it, e.g.:
about:neterror?e=dnsNotFound&u=http%3A//www.cu.reporterror%28%27afew/&c=UTF-8&d=Firefox%20can%27t%20find%20the%20server%20at%20www.cu.reporterror%28%27afew.
about:neterror?e=malformedURI&u=about%3Abalk&c=&d=The%20URL%20is%20not%20valid%20and%20cannot%
But this info is held in the docuri, so you have to read it from there. Here's example code that will detect problem-loading pages:
var listenToPageLoad_IfProblemLoadingPage = function(event) {
    var win = event.originalTarget.defaultView;
    // Note: this reads the documentURI of the currently focused tab, which is bad practice;
    // ideally you would get the linkedBrowser for the tab from the event, e.g.
    // event.originalTarget.linkedBrowser.webNavigation.document.documentURI (untested).
    var docuri = window.gBrowser.webNavigation.document.documentURI;
    // Append '' to coerce location into a string so string functions like indexOf can be used.
    var location = win.location + '';
    if (win.frameElement) {
        // A frame within a tab was loaded. win should be the top window of
        // the frameset. If you don't want to do anything when frames/iframes
        // are loaded in this web page, uncomment the following line:
        // return;
        // Find the root document:
        //win = win.top;
        if (docuri.indexOf('about:neterror') == 0) {
            Components.utils.reportError('IN FRAME - PROBLEM LOADING PAGE LOADED docuri = "' + docuri + '"');
        }
    } else {
        if (docuri.indexOf('about:neterror') == 0) {
            Components.utils.reportError('IN TAB - PROBLEM LOADING PAGE LOADED docuri = "' + docuri + '"');
        }
    }
}

window.gBrowser.addEventListener('DOMContentLoaded', listenToPageLoad_IfProblemLoadingPage, true);
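As a follow-up (not part of the original snippet), a small sketch for pulling the individual fields out of the about:neterror docuri once it has been detected; the parameter names match the examples above:
function parseNetErrorUri(docuri) {
    // everything after the '?' is an ordinary query string: e=..., u=..., c=..., d=...
    var query = docuri.substring(docuri.indexOf('?') + 1);
    var params = {};
    query.split('&').forEach(function(pair) {
        var idx = pair.indexOf('=');
        params[pair.substring(0, idx)] = decodeURIComponent(pair.substring(idx + 1));
    });
    return params; // e.g. { e: 'dnsNotFound', u: 'http://...', c: 'UTF-8', d: '...' }
}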
I'm developing a Chrome extension. I need the ability to identify each client as a unique client.
I can't store a GUID in a cookie since cookies can be deleted. I need something read from the system itself which is unique.
Now, I know that JS doesn't have access to local client resources, but here is my question:
Question
Do Chrome extension JS APIs provide a way to get unique client information (I don't care what data, as long as it is unique)?
Edit :
Just to clarify :
The user will be shown a unique key (a hash of data from his computer). This code will be sent to me, and I will provide a matching result which will be sent back to the user (via email); only then will he be able to use the extension.
(No, not all countries support extension payment via Wallet; I'm in one of those countries.)
To uniquely identify a user, I would suggest generating a random token and storing it in your extension's storage (chrome.storage). The user ID has to be generated only once, when the token does not yet exist in storage.
For example:
function getRandomToken() {
    // E.g. 8 * 32 = 256 bits token
    var randomPool = new Uint8Array(32);
    crypto.getRandomValues(randomPool);
    var hex = '';
    for (var i = 0; i < randomPool.length; ++i) {
        hex += randomPool[i].toString(16);
    }
    // E.g. db18458e2782b2b77e36769c569e263a53885a9944dd0a861e5064eac16f1a
    return hex;
}

chrome.storage.sync.get('userid', function(items) {
    var userid = items.userid;
    if (userid) {
        useToken(userid);
    } else {
        userid = getRandomToken();
        chrome.storage.sync.set({userid: userid}, function() {
            useToken(userid);
        });
    }
    function useToken(userid) {
        // TODO: Use user id for authentication or whatever you want.
    }
});
This mechanism relies on chrome.storage.sync, which is quite reliable. This stored ID will only be lost in the following scenarios:
The user re-installs the extension. Local storage will be cleared when uninstalling the extension.
One of the storage quotas has been exceeded (read the documentation).
This is not going to happen because the only write operation occurs at the first run of your extension.
Chrome's storage gets corrupted and fails to save the data.
Even if the user does not have Chrome Sync enabled, data will still be saved locally. There have been bugs in Chrome's internals that resulted in data loss, but these are isolated incidents.
The user has opened the developer tools for your extension page and ran chrome.storage.sync.clear() or something similar.
You cannot protect against users who possess the knowledge to mess with the internals of Chrome extensions.
The previous method is sufficient if you want to uniquely identify a user. If you really want a hardware-based ID, use chrome.system.cpu and chrome.system.memory as well. I don't see any benefit in using these additional sources, though, because they can change if the user replaces hardware, and they are not unique either (two identical laptops would report the same values, for instance).
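If you do want to look at those hardware sources anyway, a rough sketch (assuming the "system.cpu" and "system.memory" permissions are declared in the manifest) could look like this; note the resulting values are neither secret nor guaranteed unique:
chrome.system.cpu.getInfo(function(cpu) {
    chrome.system.memory.getInfo(function(mem) {
        // Combine a few hardware properties into a coarse fingerprint seed.
        var seed = cpu.modelName + '|' + cpu.numOfProcessors + '|' + cpu.archName + '|' + mem.capacity;
        console.log(seed);
    });
});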
As Xan suggested, the chrome.identity API is probably your best choice. You can get the user's e-mail address and use that as a seed to generate a code of your choosing. The user info also includes an "id" field which I believe is unique, but I haven't seen any documentation that substantiates that. You can then use the chrome.storage.sync API to store the generated key in the user's online data storage for your app. This way the user will be able to access their private key whenever and wherever they log in on any device.
Please note that you will have to enable the OAuth2 APIs in the developers console for your application and include the application key and proper scopes in your app manifest.
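As a rough illustration (the client ID and key are placeholders you obtain from the developers console), the relevant manifest.json entries might look something like this:
{
    "manifest_version": 2,
    "name": "My extension",
    "version": "1.0",
    "key": "<YOUR_APPLICATION_KEY>",
    "permissions": ["identity", "storage"],
    "oauth2": {
        "client_id": "<YOUR_CLIENT_ID>.apps.googleusercontent.com",
        "scopes": ["https://www.googleapis.com/auth/userinfo.email"]
    }
}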
Here is a crude example:
function getUserInfo(interactive, callback) {
    var xmlhttp = new XMLHttpRequest();
    var retry = true;
    var access_token;

    getToken();

    /**
     * Request the auth token
     */
    function getToken() {
        chrome.identity.getAuthToken({ 'interactive': interactive }, function (token) {
            if (chrome.runtime.lastError) {
                console.log("ERROR! " + chrome.runtime.lastError.message);
                return;
            }
            if (typeof token != 'undefined') {
                access_token = token;
                sendRequest();
            }
            else
                callback();
        });
    }

    function sendRequest() {
        xmlhttp.open('GET', 'https://www.googleapis.com/userinfo/v2/me');
        xmlhttp.setRequestHeader('Authorization', 'Bearer ' + access_token);
        xmlhttp.onload = requestComplete;
        xmlhttp.send();
    }

    function requestComplete() {
        if (this.status == 401 && retry) {
            retry = false; // only retry once
            console.log("Request failed, retrying... " + this.response);
            // the cached token is probably stale; drop it and request a fresh one
            chrome.identity.removeCachedAuthToken({ token: access_token }, getToken);
        }
        else {
            console.log("Request completed. User Info: " + this.response);
            callback(null, this.status, this.response);
            var userInfo = JSON.parse(this.response);
            storeUniqueKey(userInfo);
        }
    }
}
function storeUniqueKey(info) {
    var key;
    // TODO: Generate some key using the user info: info.loginName
    // The user info here contains several fields you might find useful.
    // There is a user "id" field which is numeric, and I believe that
    // is a unique identifier that could come in handy rather than generating your
    // own key.
    // ...
    chrome.storage.sync.set({ user_key: key });
}
To add to Rob W's answer: in his method, the saved string would propagate to every Chrome instance signed in with the same Google user account, with a lot of big and small ifs.
If you need to uniquely identify a local user profile, and not all Chrome profiles with the same Google user, you want to employ chrome.storage.local in the same manner. This will NOT be a unique identifier for a Chrome install, though; only for a profile within that install.
It also needs to be noted that none of this data is tied to anything in any way or form; it just has a good probability of being unique. Absolutely nothing stops the user from reading and cloning this data as he sees fit. You cannot, in this scenario, secure the client side.
I'm thinking that a more secure way would be to use the chrome.identity API to request and maintain an offline (and therefore non-expiring) token as proof of license. The user cannot easily clone this token storage.
I'm not versed in OAuth yet, so if anyone can point out what's wrong with this idea - they are welcome to.
We can also use Crypto.randomUUID() to generate a UUID and then save it to web storage. Refer to MDN for details on this API.
let uuid = self.crypto.randomUUID();
console.log(uuid); // for example "36b8f84d-df4e-4d49-b662-bcde71a8764f"
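A minimal sketch of the save-and-reuse step (the storage key name is just an example):
let uuid = localStorage.getItem('client-uuid');
if (uuid === null) {
    // first run: generate the UUID once and persist it
    uuid = self.crypto.randomUUID();
    localStorage.setItem('client-uuid', uuid);
}
console.log(uuid); // same value on every subsequent run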
I'm trying to implement SMS functionality in Dynamics CRM 2011. I've created a custom activity for this and added a button to the SMS form. When the button is clicked, an SMS should be sent.
I need to make an HTTP request for this and pass a few parameters. Here's the code that is triggered:
function send() {
    var mygetrequest = new ajaxRequest();
    mygetrequest.onreadystatechange = function () {
        if (mygetrequest.readyState == 4) {
            if (mygetrequest.status == 200 || window.location.href.indexOf("http") == -1) {
                //document.getElementById("result").innerHTML = mygetrequest.responseText
                alert(mygetrequest.responseText);
            }
            else {
                alert("An error has occurred making the request");
            }
        }
    };
    var nichandle = "MT-1234";
    var hash = "md5";
    var passphrase = "[encryptedpassphrase]";
    var number = "32497123456";
    var content = "testing sms service";
    mygetrequest.open("GET", "http://api.smsaction.be/push/?nichandle=" + nichandle + "&hash=" + hash + "&passphrase=" + passphrase + "&number=" + number + "&content=" + content, true);
    mygetrequest.send(null);
}
function ajaxRequest() {
    var activexmodes = ["Msxml2.XMLHTTP", "Microsoft.XMLHTTP"]; // ActiveX versions to check for in IE
    if (window.ActiveXObject) { // test for ActiveXObject support in IE first (as XMLHttpRequest in IE7 is broken)
        for (var i = 0; i < activexmodes.length; i++) {
            try {
                return new ActiveXObject(activexmodes[i]);
            }
            catch (e) {
                // suppress error
            }
        }
    }
    else if (window.XMLHttpRequest) // Mozilla, Safari, etc.
        return new XMLHttpRequest();
    else
        return false;
}
I get the "access is denied" error on this line:
mygetrequest.open("GET", "http://api.smsaction.be/push/?nichandle=" ......
Any help is appreciated.
The remote site has to approve cross-domain AJAX requests. Usually, this is not the case.
You should contact smsaction.be or check their FAQ to see if they have anything in place for this.
Usually JSONP is used for cross-domain requests, and this has to be implemented on both ends.
A good way to overcome this is to use your own site as a proxy: do the AJAX request to a script on your side, and let it make the call. In PHP, for example, you can use cURL.
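For illustration, the browser side of that proxy approach could look like the sketch below; /sms-proxy.php is a hypothetical script on your own domain that performs the cURL call and echoes the result back:
var number = "32497123456";
var content = "testing sms service";
var xhr = new XMLHttpRequest();
// Same-origin request to your own server, which forwards it to the SMS API.
xhr.open("GET", "/sms-proxy.php?number=" + encodeURIComponent(number) +
    "&content=" + encodeURIComponent(content), true);
xhr.onreadystatechange = function () {
    if (xhr.readyState == 4 && xhr.status == 200) {
        alert(xhr.responseText);
    }
};
xhr.send(null);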
I suppose the SMS service is on a different domain. If so, you cannot make an AJAX call to it, because that violates the same-origin policy. Basically you have two choices:
Do the SMS-sending on server-side
Use JSONP
Also, is it really the case that the passphrase and other secrets are visible in the HTML? What prevents people from stealing them and using them for their own purposes?
Your AJAX requests will fail by default because of the Same Origin Policy.
http://en.wikipedia.org/wiki/Same_origin_policy
Modern techniques allow CORS (see the article by Nicholas): http://www.nczonline.net/blog/2010/05/25/cross-domain-ajax-with-cross-origin-resource-sharing/
jQuery's Ajax allows CORS.
Another way to do it is to get the contents by dynamically generating a script element and doing an insertBefore on head.firstChild (refer to jQuery 1.6.4 source, line no: 7833).
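A rough sketch of that script-injection technique (the endpoint and callback name are hypothetical; the remote server must return executable JavaScript, i.e. JSONP, for this to work):
function handleData(data) {
    console.log(data);
}
var script = document.createElement('script');
script.src = 'http://other-domain.example/data?callback=handleData';
var head = document.getElementsByTagName('head')[0];
head.insertBefore(script, head.firstChild);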
Google Analytics code does something similar as well; you might want to take a look at that too.
Cheers..
Sree
For your example, when requesting from a different domain, the error is:
XMLHttpRequest cannot load http://api.smsaction.be/push/?nichandle=??????&hash=?????&passphrase=[???????????]&number=????????????&content=???????????????. Origin http://server is not allowed by Access-Control-Allow-Origin.
For cross-domain XMLHttpRequests, the destination server must send the Access-Control-Allow-Origin response header.
MDN: https://developer.mozilla.org/en/http_access_control
Bit of a strange one here: I have a CCTV system and contacted the manufacturers to ask if there was an API. The answer was no.
I've been trying to understand how I can take the live JPEG picture and use it in my own app (C#).
Here is a link to the live-view page that displays the live feeds: http://pastebin.com/jCp4jZRh
The line I'm interested in is:
img_buf[0].src = "ivop.get?action=live&piccnt=0&THREAD_ID=" + thd_id;
Now, piccnt seems to be there to stop browsers caching the data, so this number keeps changing, and thd_id seems to be the channel number. When trying to access this I get the following message:
Authentication Error:Access Denied, authentication error
Even if I log in first, then try the above URL with my own context, I still get the access denied message.
Here's the source of the login page: http://pastebin.com/q7nLJ4tk
Here's the source of the md5.js file: http://pastebin.com/du1ggaQB
I'm just a little stuck on how to authenticate and then display the feed. Does anyone have any pointers?
Thanks
I answered a similar question a while back, and the solution ended up being that you had to set the referrer.
In any case, to find your solution, download a copy of Fiddler.
Once it's running, hit your camera page and you will see several requests. When you find one of the requests for ivop.get, drag it into the Request Builder and execute it a second time.
If it still works after executing it a second time (check using the Inspectors), then start changing the headers, removing bits one by one until you find the key element. I suspect there will be either a cookie or a referrer that is required.
Once you have figured out those elements, it should be easy to make the appropriate request in your application.
If you can post a live URL, I can help you with this.
Best guess given the limited info available: they're checking the referrer. You can check the details of the requests using Fiddler (you can even replay the request with a slightly different referrer to confirm whether that's what's happening). If it is, you can set the referrer on HttpWebRequest: http://msdn.microsoft.com/en-us/library/system.net.httpwebrequest.referer.aspx
There are many possibilities, and without having access to the source code of the CCTV server, it's hard to say which one it might be.
I'd suggest popping open an HTTP header-sniffing utility (such as https://addons.mozilla.org/en-US/firefox/addon/live-http-headers/ for Firefox) and watching the headers for the successful IMG request. Then replay that request using netcat or curl. Once you've got that working, try removing HTTP headers one at a time (you're probably sending some kind of session ID, HTTP referrer, etc.; these may all be important to the CCTV server).
In any case, it's almost certainly going to be important that you at least authenticate with mlogin.get and pass along the resulting session ID in subsequent requests.
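For example, a replay along these lines might be a starting point; every bracketed value is a placeholder you would copy from the sniffed requests, and the exact headers the DVR checks will vary:
curl -v 'http://<camera-ip>/ivop.get?action=live&piccnt=0&THREAD_ID=<thread-id>' \
  -H 'Referer: http://<camera-ip>/<liveview-page>' \
  -H 'Cookie: <session cookie captured from the sniffer>' \
  -o frame.jpg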
Whilst this may be old, I had the same problem. The DVR requires an authenticated login, with a key sent in the URL during the first redirect to the login page and the password in hex_hmac_md5. Below is a Python function to log in, retrieve two channel images, and then log out:
import datetime
import hashlib
import hmac
import re

import requests
from requests.exceptions import HTTPError


def getcamimg():
    baseurl = 'http://<IPADDRESS>/'
    content = str(getUrl(baseurl))
    # the login page embeds a key that is used to HMAC the password
    x = re.search(r"key=(\w+)", content)
    keystr = x.group(1)
    key = bytes(keystr, 'utf-8')
    password = bytes('<YOURPASS>', 'utf-8')
    hmacobj = hmac.new(key, password, digestmod=hashlib.md5)
    hmacpass = hmacobj.hexdigest()
    # -----------------------------------------------------
    loginurl = (baseurl + 'mlogin.get?account=<USERNAME>&passwd=' + str(hmacpass)
                + '&key=' + keystr + '&Submit=Login')
    lcontent = str(getUrl(loginurl))
    if "another administrator" in lcontent:
        print("another admin online")
        exit()
    y = re.search(r'href="([\w\d\.\?=&_-]+)"', lcontent)
    finalurl = baseurl + y.group(1)
    z = re.search(r'id=(\w+)', lcontent)
    thid = z.group(1)
    # -----------------------------------------------------
    imgurl = baseurl + "ivop.get?action=live&piccnt=1&THREAD_ID=" + thid
    imgcontent = getUrl(imgurl)
    ctime = datetime.datetime.today().strftime("%Y%m%d%H%M%S")
    with open("chan1_" + ctime + ".jpg", "wb") as file0:
        file0.write(imgcontent)
    # -----------------------------------------------------
    # switch to channel 3 before grabbing the next image
    chanset = "showch.set?channel=3&THREAD_ID=" + thid
    getUrl(baseurl + chanset)
    # -----------------------------------------------------
    icontent1 = getUrl(imgurl)
    with open("chan3_" + ctime + ".jpg", "wb") as file1:
        file1.write(icontent1)
    # -----------------------------------------------------
    # log out so the DVR does not keep the session open
    logout = "Forcekick.set?ITSELF=1&Logout=Logout&THREAD_ID=" + thid
    getUrl(baseurl + logout)


def getUrl(url):
    try:
        response = requests.get(url)
        response.raise_for_status()
    except HTTPError as http_err:
        print('HTTP error occurred: ' + str(http_err))
    except Exception as err:
        print('Other error occurred: ' + str(err))
    else:
        return response.content