Simulate delayed response for specific address - javascript

I would like to test how loading external JavaScript files affects the page when the remote servers are slow to respond.
I looked for tools that can slow down the connection for specific sites, but I could only find tools that slow down the whole network or that don't exist for Mac.
Are there any tools like that?

Using the Detours App for Mac, you can redirect certain hosts to your own local web server. From your server, you can then fetch the resource (via curl, etc.), sleep for a certain amount of time, and then return the response.

It's not the easy way out, but you could use iptables (the Unix IP router) in conjunction with tc (traffic control).
This is quite involved if you don't know your way around bash scripting, but you will definitely need a terminal for a proper solution.
If this does not work for you, try a simpler method: http://lartc.org/howto/lartc.ratelimit.single.html
Store the following in, for instance, your home folder and call it bwm.sh:
#!/bin/bash
# through this interface
IF=$1
# on this HOST
HOST=$2
# get the IP from HOST
HOSTIP="`nslookup $HOST|grep Address|grep -v "#"|cut -d " " -f2`"
# with this rate
your_rate=$3
# defaults /sbin/tc
TC="`whereis tc | sed 's/[^\ ]*.\([^\ ]*\).*/\1/'`"
# defaults /sbin/iptables
IPTABLES="`whereis iptables | sed 's/[^\ ]*.\([^\ ]*\).*/\1/'`"
#some number
PRIO="123"
# you create a new rule in the mangle table
IPT="$IPTABLES -t mangle"
echo "Program locations found: iptables: $IPTABLES and tc: $TC"
echo -e "down-rating bandwidth\n on $HOST\n to $your_rate whilst marking packets that originate\n from $HOSTIP\n with $PRIO on interface\n named $IF"
echo -n "starting setup.."
# apply custom filter
$IPT -N myfilter
# add it to the POSTROUTING chain
$IPT -A POSTROUTING -j myfilter
# if conntrack is used - restore a mark and allow the packets, which already have been marked, through - no need to check again
$IPT -A myfilter -p tcp -j CONNMARK --restore-mark
$IPT -A myfilter -m mark --mark $PRIO -j ACCEPT
# add to it your matching rule
$IPT -A myfilter -p tcp -s $HOSTIP -j MARK --set-mark $PRIO
# conntrack it optionally, so not every packet has to be rematched
$IPT -A myfilter -j CONNMARK --save-mark
# use that mark in a tc filter rule
echo qdisc add
$TC qdisc add dev $IF root handle 1: htb default 30
echo class add
$TC class add dev $IF parent 1: classid 1:1 htb rate $your_rate
echo sfq add
# add an SFQ qdisc to the end - to which you then attach the actual filter
$TC qdisc add dev $IF parent 1:1 sfq perturb 10
echo filter add
$TC filter add dev $IF parent 1: protocol ip prio 1 handle $PRIO fw flowid 1:1
echo "done"
Now open a Terminal window (Finder > Applications > Utilities > Terminal) and get root permissions; we will go to the user's home directory and become the superuser:
cd; su
Enter the root password, then start the script with the interface, host name and rate as parameters:
sh bwm.sh IF HOST RATE


Get real architecture of M1 Mac regardless of Rosetta

I need to retrieve the real architecture of a Mac regardless of if the process is running through Rosetta or not.
Right now in Node.js, process.arch returns x64 and in shell, uname -m returns x86_64.
Thanks to @Ouroborus, this note describes how to figure out if your app is translated.
If it's translated:
$ sysctl sysctl.proc_translated
sysctl.proc_translated: 1
If not:
$ sysctl sysctl.proc_translated
sysctl.proc_translated: 0
On non-ARM Macs:
$ sysctl sysctl.proc_translated
sysctl: unknown oid 'sysctl.proc_translated'
As @Elmo's answer indicates, the command line sysctl -n sysctl.proc_translated or the equivalent native sysctlbyname() call will indicate whether you are running under Rosetta.
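As a sketch, the three cases above can be folded into one Node.js check (the helper name and the exec wrapper are my own, not part of the answer):

```javascript
// classifyProcTranslated: interpret `sysctl -n sysctl.proc_translated`.
// Pass failed=true when sysctl exits with "unknown oid" (non-ARM Mac).
function classifyProcTranslated(output, failed) {
  if (failed) return 'intel';               // unknown oid: Rosetta cannot apply
  return output.trim() === '1' ? 'rosetta'  // x86_64 process translated by Rosetta
                               : 'native-arm';
}

// Usage on macOS (shelling out to sysctl via child_process):
// const { execSync } = require('child_process');
// let kind;
// try {
//   kind = classifyProcTranslated(
//     execSync('sysctl -n sysctl.proc_translated').toString(), false);
// } catch (e) {
//   kind = classifyProcTranslated('', true);
// }
```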
Two other sysctl values are relevant. On M1 hardware without Rosetta, these values are returned:
hw.cputype: 16777228
hw.cpufamily: 458787763
hw.cputype is 0x0100000C (CPU_TYPE_ARM64) and hw.cpufamily is 0x1b588bb3 (CPUFAMILY_ARM_FIRESTORM_ICESTORM).
However, when executed under Rosetta, the low-level machine code that collects CPUID takes precedence, and the following two values are returned, both via sysctlbyname() and on the command line:
hw.cputype: 7
hw.cpufamily: 1463508716
These correspond to 0x7 (CPU_TYPE_X86) and 0x573b5eec (INTEL_WESTMERE).
It appears Rosetta reports an x86-compatible Westmere chip; this choice seems consistent everywhere I've seen it. This "virtual architecture" may be useful information for some programs.
Another possibility presents itself in the IO Registry. While the default IOService plane collects data in real-time, the IODeviceTree plane is stored at boot, and includes these entries in the tree (command line ioreg -p IODeviceTree or ioreg -c IOPlatformDevice):
cpu0#0 <class IOPlatformDevice, id 0x10000010f, registered, matched, active, busy 0 (180 ms), retain 8>
| | | {
...
| | | "compatible" = <"apple,icestorm","ARM,v8">
(for CPUs 0-3)
and
cpu4#100 <class IOPlatformDevice, id 0x100000113, registered, matched, active, busy 0 (186 ms), retain 8>
| | | {
...
| | | "compatible" = <"apple,firestorm","ARM,v8">
(for CPUs 4-7)
This clearly indicates the ARMv8 Firestorm + Icestorm M1 chip.
The same approach should work for the M1 Pro and M1 Max.
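A hedged Node.js sketch of scanning that ioreg output (the regex covers only the firestorm/icestorm "compatible" strings quoted above):

```javascript
// hasAppleArmCore: true if any CPU entry in the ioreg dump advertises one
// of the Apple ARMv8 core names shown in the IODeviceTree output above.
function hasAppleArmCore(ioregOutput) {
  return /"compatible" = <"apple,(firestorm|icestorm)"/.test(ioregOutput);
}

// Usage on macOS:
// const { execSync } = require('child_process');
// hasAppleArmCore(execSync('ioreg -c IOPlatformDevice').toString());
```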

Can't control BeagleBone Green's LEDs on Debian 8.6

I'm trying to use the usr LEDs on a BeagleBone Green Wireless running Debian 8.6 (the default image from http://beagleboard.org/green with kernel 4.4.26-ti-r59) for my own debugging purposes, but some script makes it impossible. It is most likely caused by a JS script (after uninstalling Node.js the LEDs stopped blinking, but JS is crucial for me).
I've already tried to switch the LED trigger mode to none:
# cd /sys/class/leds/beaglebone\:green\:usr0
# echo none > trigger
# cat trigger
[none] rc-feedback kbd-scrollock ...
and to control brightness:
# echo 0 > brightness
# cat brightness
0
# echo 255 > brightness
# cat brightness
0
# cat brightness
1
As you can see, the value of brightness is simply being overwritten by another program (script) running in a loop. Does anyone have an idea which script may be causing this?

Snort rule for Javascript detection

I have a few Snort rules that should raise an alert when a JS file is downloaded from a web page; however, none of them are triggering. I'm not quite sure of the details of writing Snort rules, so these were guesses pooled from various readings.
Not sure if having gzip-encoded JS files makes a huge difference, but I did check my snort.conf file, and it does contain the following options under the HTTP inspect preprocessor:
preprocessor http_inspect_server: server default \
.....
extended_response_inspection \
inspect_gzip \
normalize_utf \
unlimited_decompress \
normalize_javascript \
None of the three alerts below trigger, even though the JS files contain the word "function" and the HTML file contains JS with the words "snort team!":
alert tcp $HOME_NET $HTTP_PORTS -> $HOME_NET any (msg:"JS-Detect1"; file_data; content:"function"; sid:1000000)
alert tcp $HOME_NET $HTTP_PORTS -> $HOME_NET any (file_data; content:"snort team!"; nocase; msg:"JS-Detect2"; sid:1000001)
alert tcp $HOME_NET $HTTP_PORTS -> $HOME_NET any (file_data;content:"<script>"; nocase; msg:"JS-Detect3"; sid:1000002)
The following alert, contained in the same rule file, does trigger:
alert tcp $HOME_NET $HTTP_PORTS -> $HOME_NET any (msg:"JS-Detect4"; sid:1000003)
Any advice on writing a Snort rule that will trigger an alert when a JS file is downloaded, or on necessary modifications to snort.conf, would be of great help! Thanks.

What's the best practice for controlling a Linux process from a web page (Rails)?

Currently I am facing the problem that we need to control Linux scripts running in the background from a web page, e.g. we should be able to start/stop a script via a button on the web page.
I am an expert in web development and very familiar with JavaScript (setTimeout to refresh the progress) and Ruby on Rails.
Thanks a lot.
If it is a Linux system, then you could use
system("path_to_script.sh")
or
`path_to_script.sh`
in your controller.
Thanks to @xvidun, I found the solution: use god.
The key problem is how to make a non-daemon process start as a daemon. "nohup ... &" didn't work for me, so I tried god, and it is great!
In my case, the process I want to run is:
$ cd /sg552/workspace/m-video-fetcher
$ nohup ruby script/start_fetch.rb &
Here is how I did the job:
Step 1: create a god config file:
# file name: fetcher.god
RAILS_ROOT = '/sg552/workspace/m-video-fetcher'
God.watch do |w|
  w.name = 'fetcher'
  w.dir = RAILS_ROOT
  w.start = "ruby script/start_fetch.rb"
  w.log = "#{RAILS_ROOT}/fetcher_stdout.log"
  w.keepalive
end
Step 2:
$ god start -c fetcher.god
Step 3:
# in the view, give users an interface to restart this job:
<a href='/settings/restart_fetch_for_all_plans'>restart</a>
# in the controller:
def restart_fetch_for_all_plans
  result = `god stop fetcher && god start fetcher`
  redirect_to :back, :notice => "fetcher restarted, #{result}"
end

How to inspect result of `node --prof` on x64 MacOS X 10.7?

I'm trying to profile my Node.js script from the CLI.
As described at https://code.google.com/p/v8/wiki/V8Profiler and http://blog.arc90.com/2012/03/05/profiling-node-programs-on-mac-os-x/, I do:
$ node --prof my_script.js
All is OK; I get a file named v8.log with a bunch of lines.
But then everything goes wrong in the inspection tools.
$ tools/mac-tick-processor v8.log
shows me
Statistical profiling result from v8.log, (298 ticks, 237 unaccounted, 0 excluded).
and an empty JavaScript section:
[JavaScript]:
ticks total nonlib name
I also tried https://github.com/bnoordhuis/node-profiler, but got similar results.
How can I work with the --prof results?
$ node -v
v0.8.18
$ sw_vers
ProductName: Mac OS X
ProductVersion: 10.7.4
BuildVersion: 11E53
As suggested by @Dogbert, you can use github.com/sidorares/node-tick.
Feel free to create a pull request if you're missing any functionality. I haven't updated it in quite a while, and it still seems to work.
After Node 4.4.0:
node --prof-process isolate-0xnnnnnnnnnnnn-v8.log > processed.txt
From: https://nodejs.org/en/docs/guides/simple-profiling/
