How to view the assembly code generated from my JavaScript (in Chrome)? - javascript

Is it somehow possible to view the final machine code (x86 instructions) that a browser generates from my JavaScript? E.g.:
--- Raw source ---
function add(a, b){
return a + b;
}
...
--- Code ---
source_position = 0
kind = FUNCTION
Instructions (size = 456)
0x36953100 0 8b4c2404 mov ecx,[esp+0x4]
0x36953104 4 81f991806049 cmp ecx,0x49608091 ;; object: 0x49608091 <undefined>
0x3695310a 10 750a jnz 22 (0x36953116)
0x3695310c 12 8b4e13 mov ecx,[esi+0x13]
0x3695310f 15 8b4917 mov ecx,[ecx+0x17]
0x36953112 18 894c2404 mov [esp+0x4],ecx
0x36953116 22 55 push ebp
Thanks!

Your script isn't translated to machine code directly. JavaScript runs on the V8 virtual machine (this is true for Chrome and classic Node.js), and you can dump the VM bytecode with:
node --print-bytecode script.js
V8 then executes and optimizes that bytecode, calling into external C libraries and OS APIs (system calls) or Web APIs. The final machine code can vary even for the same JavaScript code (for example, before and after optimization).
You can also start Chrome from the command line with
--js-flags="--print-bytecode"
UPD:
As @PeterCordes noted, Node.js also lets you see the TurboFan-generated machine code using
node --print-opt-code script.js
Chrome:
--js-flags="--print-opt-code"
You can also use an HTML visualizer such as Turbolizer: https://github.com/v8/v8/tree/master/tools/turbolizer
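For example, to make sure the add function from the question actually gets optimized before its machine code is printed, warm it up with a hot loop first (a minimal sketch; the iteration count is arbitrary):
// add.js - call add() until TurboFan considers it hot and optimizes it,
// so that --print-opt-code has optimized machine code to print for it.
function add(a, b) {
  return a + b;
}

let sum = 0;
for (let i = 0; i < 1e6; i++) {
  sum = add(sum, i);
}
console.log(sum);
Then run node --print-opt-code add.js and search the (very large) output for add.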

Related

Get real architecture of M1 Mac regardless of Rosetta

I need to retrieve the real architecture of a Mac regardless of if the process is running through Rosetta or not.
Right now in Node.js, process.arch returns x64 and in shell, uname -m returns x86_64.
Thanks to @Ouroborus, this note describes how to figure out if your app is translated.
If it's translated:
$ sysctl sysctl.proc_translated
sysctl.proc_translated: 1
If not:
$ sysctl sysctl.proc_translated
sysctl.proc_translated: 0
On non-ARM Macs:
$ sysctl sysctl.proc_translated
sysctl: unknown oid 'sysctl.proc_translated'
As @Elmo's answer indicates, the command line sysctl -n sysctl.proc_translated or the native equivalent sysctlbyname() call will tell you whether you are running under Rosetta.
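Since the question asks about Node.js, here is a minimal sketch that combines that sysctl with process.arch to recover the real architecture (assumes macOS; sysctl's -i flag ignores the unknown OID on Intel Macs):
// realArch.js - report the hardware architecture even when Node itself
// runs as an x64 binary under Rosetta.
const { execSync } = require('child_process');

function isTranslated() {
  try {
    // '-i' ignores the unknown OID on Intel Macs, '-n' prints only the value
    return execSync('sysctl -in sysctl.proc_translated', { encoding: 'utf8' }).trim() === '1';
  } catch (e) {
    return false; // treat any sysctl failure as "not translated"
  }
}

const realArch = isTranslated() ? 'arm64' : process.arch;
console.log(realArch);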
Two other sysctl values are relevant. On M1 hardware without Rosetta, these values are returned:
hw.cputype: 16777228
hw.cpufamily: 458787763
hw.cputype is 0x0100000C (CPU_TYPE_ARM64) and hw.cpufamily is 0x1b588bb3 (CPUFAMILY_ARM_FIRESTORM_ICESTORM).
However, when executed under Rosetta, the low-level machine code that collects CPUID takes precedence, and the following two values are returned, both via sysctlbyname() and on the command line:
hw.cputype: 7
hw.cpufamily: 1463508716
These correspond to 0x7 (CPU_TYPE_X86) and 0x573b5eec (INTEL_WESTMERE).
It appears Rosetta reports an x86-compatible Westmere chip, and this choice seems consistent everywhere I've seen. This "virtual architecture" may be useful information for some programs.
Another possibility presents itself in the IO Registry. While the default IOService plane collects data in real-time, the IODeviceTree plane is stored at boot, and includes these entries in the tree (command line ioreg -p IODeviceTree or ioreg -c IOPlatformDevice):
cpu0@0 <class IOPlatformDevice, id 0x10000010f, registered, matched, active, busy 0 (180 ms), retain 8>
| | | {
...
| | | "compatible" = <"apple,icestorm","ARM,v8">
(for CPUs 0-3)
and
cpu4@100 <class IOPlatformDevice, id 0x100000113, registered, matched, active, busy 0 (186 ms), retain 8>
| | | {
...
| | | "compatible" = <"apple,firestorm","ARM,v8">
(for CPUs 4-7)
This clearly indicates the ARMv8 Firestorm + Icestorm M1 chip.
The same approach should work for the M1 Pro and M1 Max.

Install OpenSSL for MSVC2017 on 64-bit Windows 10

.pro
LIBS += -LC:\Qt\Tools\OpenSSL\Win_x86\lib -llibssl
LIBS += -LC:\Qt\Tools\OpenSSL\Win_x86\lib -llibcrypto
INCLUDEPATH += C:\Qt\Tools\OpenSSL\Win_x86\include
main.qml
import QtQuick 2.12
import QtQuick.Window 2.12
import QtQuick.Controls 2.0
Window {
    visible: true
    width: 640
    height: 480

    Component.onCompleted: getPage(logResults)

    function logResults(results) {
        console.log("RESULTS: " + results)
    }

    function getPage(callback) {
        var xhttp = new XMLHttpRequest();
        var url = "https://www.google.com/"
        xhttp.onreadystatechange = function() {
            if (xhttp.readyState === 4 && xhttp.status === 200) {
                console.log("calling callback")
                callback(xhttp.responseText)
            }
        };
        xhttp.open("GET", url);
        xhttp.send();
    }
}
expected output
qml: calling callback
qml: RESULTS: <html>
actual output
qt.network.ssl: QSslSocket: cannot resolve SSL_CTX_set_ciphersuites
qt.network.ssl: QSslSocket: cannot resolve SSL_set_psk_use_session_callback
qt.network.ssl: QSslSocket: cannot call unresolved function SSL_set_psk_use_session_callback
qml: calling callback
Windows 10 64-bit OS, running MSVC2017 QML project
I ran C:\Qt\MaintenanceTool.exe to install Developer and Designer Tools > OpenSSL 1.1.1d Toolkit
I've tried following a previous tutorial and another one for MSVC2017, but had no luck resolving the errors or getting xhttp.responseText. I found out the code works on Ubuntu 19.04, so something funky must be going on with OpenSSL on my Windows machine. I couldn't find any resolution by googling the error messages. I've read that accidentally installing OpenSSL to "the Windows directory" can cause errors, but I haven't been able to locate "the Windows directory" in question to check whether I did.
edit
From C:\Qt\Tools\OpenSSL\Win_x64\bin I copied libcrypto-1_1-x64.dll and libssl-1_1-x64.dll to my project's \debug and \release folders. This removed the qt.network.ssl errors; however, I am still not getting the expected output of qml: RESULTS: <html>
You ran C:\Qt\MaintenanceTool.exe to install Developer and Designer Tools > OpenSSL 1.1.1d Toolkit
That is also my recommendation. The alternative is to compile OpenSSL yourself, or to download a binary package from a third-party provider. I have one of those packages installed at "C:\Program Files\OpenSSL-Win64\bin", and programs using Qt += network are able to locate and load the libraries when their path is included in the PATH environment variable. The downside is that you will need to take care of updates yourself, whereas the Qt packages are updated automatically by the MaintenanceTool along with Qt and Qt Creator. So take your pick.
Anyway, even if you have another set of OpenSSL DLLs in your PATH, the libraries you copy to the output directory of your executable will be loaded instead. Two questions need to be answered here: 1) how do you copy the DLLs automatically each time they are needed, and 2) how do you verify which DLLs are actually loaded when you run your program?
1) You may add the following to your project .pro:
win32 {
    CONFIG += file_copies
    CONFIG(debug, debug|release) {
        openssllibs.path = $$OUT_PWD/debug
    } else {
        openssllibs.path = $$OUT_PWD/release
    }
    contains(QMAKE_TARGET.arch, x86_64) {
        openssllibs.files = C:/Qt/Tools/OpenSSL/Win_x64/bin/libcrypto-1_1-x64.dll \
                            C:/Qt/Tools/OpenSSL/Win_x64/bin/libssl-1_1-x64.dll
    } else {
        openssllibs.files = C:/Qt/Tools/OpenSSL/Win_x86/bin/libcrypto-1_1.dll \
                            C:/Qt/Tools/OpenSSL/Win_x86/bin/libssl-1_1.dll
    }
    COPIES += openssllibs
}
That's it. Now your program will always have the latest libraries from Qt/Tools copied to the output directory of your project, regardless of whether you compile in debug or release mode, for 32 or 64 bits, or with another Qt Kit.
2) Run your program while inspecting the loaded DLLs with Process Explorer, by Mark Russinovich. In Process Explorer, select View->Show Lower Pane, then select your running program in the upper pane; the lower pane lists all the loaded DLLs and their origins. There are other similar utilities out there, such as the open-source Process Hacker.
Even after understanding all of the above and following the recipe exactly, your program still will not print the desired output. Please change the logResults() function in your QML like this:
function logResults(results) {
    console.log("RESULTS Length=", results.length);
    console.log("results.substr=", results.substr(0, 20));
}
You will get the following output:
qml: calling callback
qml: RESULTS Length= 47932
qml: results.substr= <!doctype html><html
Explanation: it looks like console.log() has a limitation of roughly 32K per message on Windows (it doesn't on Linux). The document retrieved from the remote host is much larger than that, which breaks the logging function. This is probably a bug in Qt (it should not fail silently like that).
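If you really need the whole document in the log, a simple workaround is to print it in chunks (a sketch; the chunk size is arbitrary, it just has to stay below the limit):
function logResults(results) {
    // Log in chunks to stay under the ~32K per-message truncation on Windows.
    var chunkSize = 16384;
    for (var i = 0; i < results.length; i += chunkSize) {
        console.log(results.substr(i, chunkSize));
    }
}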
One more piece of advice for anybody coming here in the future: it is not strictly needed, but you may want to verify in your main() function that SSL is available, with something like this:
#include <QDebug>
#include <QGuiApplication>
#include <QSslSocket>

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);

    if (!QSslSocket::supportsSsl()) {
        qDebug() << "Sorry, OpenSSL is not supported";
        return -1;
    }
    qDebug() << "build-time OpenSSL version:" << QSslSocket::sslLibraryBuildVersionString();
    qDebug() << "run-time OpenSSL version:" << QSslSocket::sslLibraryVersionString();
    [...]
}

Push window to front with hidden powershell

I have a scheduled task that starts a hidden-window PowerShell script. The script does some stuff, starts an HTA and waits for the HTA to close:
Start-Process $HTApath -Wait
and then the PoSh script does some other stuff. Everything works great on many machines (Win7 & 10), but on other (fairly stock) systems the HTA often doesn't come to the front and is not seen for hours or days. That's a problem.
I have included JavaScript:
<script type="text/javascript">
[...]
window.focus();
AND VBScript:
<script language='vbscript'>
Sub window_toFront
    Set Shell = CreateObject("WScript.Shell")
    Shell.AppActivate("UAlbany Security Updates Policy Reboot Notice")
End Sub
window_toFront
in the HTA, and I've put in timed loops to repeat the attempts (see the sketch below). On the machines where the window comes to the front initially, the loops do a very nice job of forcing the window back to the front, but on the machines that ignore the directive, repeating the attempt does nothing. Those machines are running the other VBS/JS scripts in the HTA; it is ONLY the push-to-front commands that are not working.
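For reference, the JavaScript retry loop is essentially this (a sketch; the interval is arbitrary):
<script type="text/javascript">
    // Keep requesting focus every few seconds while the HTA is open.
    // window.focus() is only a request; Windows may still refuse to bring
    // a background process's window to the foreground.
    window.setInterval(function () {
        try {
            window.focus();
        } catch (e) {
            // ignore - focus can fail while the window is closing
        }
    }, 5000);
</script>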
I also tried this "evil" method. It also failed to work on some machines (and crashed a VM).
I've tried testing PoSh options that could be used externally to the HTA (i.e. replacing the Start-Process call in the scheduled script):
# P/Invoke wrapper for the Win32 SetForegroundWindow API
Add-Type @"
using System;
using System.Runtime.InteropServices;
public class Tricks {
    [DllImport("user32.dll")]
    [return: MarshalAs(UnmanagedType.Bool)]
    public static extern bool SetForegroundWindow(IntPtr hWnd);
}
"@

# $PID is a read-only automatic variable in PowerShell, so use another name
$htaPid = [Diagnostics.Process]::Start("HTApath").Id
Start-Sleep -Seconds 1
$Handle = (Get-Process -Id $htaPid).MainWindowHandle
While ((Get-Process -Id $htaPid -ErrorAction SilentlyContinue) -ne $null) {
    Start-Sleep -Seconds 1
    [void] [Tricks]::SetForegroundWindow($Handle)
}
but it only seems to work if the PoSh window is made active (which won't happen for this process).
Does anyone have any suggestions for getting this to work? I'm particularly interested in understanding why the internal-HTA methods are so hit-or-miss; is there a Windows/IE setting that could cause this?
Thanks!

PhantomJS performance

My question is: what is the most efficient configuration for PhantomJS tests?
Currently I have one instance of PhantomJS running, and every instance can have two tabs opened (dev env).
Is it better to have more PhantomJS instances or more opened tabs, and if tabs, what is the upper limit per PhantomJS instance?
CPU:
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 58
model name : Intel(R) Core(TM) i5-3210M CPU @ 2.50GHz
stepping : 9
microcode : 0x19
cpu MHz : 2494.316
cache size : 3072 KB
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx rdtscp lm constant_tsc up rep_good nopl xtopology nonstop_tsc pni pclmulqdq monitor ssse3 cx16 sse4_1 sse4_2 popcnt aes xsave avx rdrand hypervisor lahf_lm
bogomips : 4988.63
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
Memory:
total used free shared buffers cached
Mem: 2515896 1155828 1360068 0 171648 622668
More information:
I would like to handle multiple tests at once with as little memory as possible. Right now I am running two pages per PhantomJS instance, and I am already having issues with network requests: I have a timeout of 20 s, and if a specific network request does not finish in that time, the test fails.
The tests succeed if I only run one page in one PhantomJS instance, but that is not optimal, because we will be running more than 1000 tests, and I would like to arrange the tests in multiple pages across multiple PhantomJS instances.
Example:
10 phantomjs instances
every phantomjs instance can run 30 pages
So why, when running multiple pages in one instance (as sketched below), do the network requests lag so much?
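For reference, the per-instance setup is roughly this (a sketch; the URLs and the timeout are placeholders for the real test harness):
// One PhantomJS instance driving several pages ("tabs") at once.
var webpage = require('webpage');
var urls = ['http://example.com/test1', 'http://example.com/test2'];
var remaining = urls.length;

urls.forEach(function (url) {
    var page = webpage.create();
    page.settings.resourceTimeout = 20000; // ms, mirrors the 20 s test timeout

    page.onResourceTimeout = function (request) {
        console.log('TIMEOUT: ' + request.url);
    };

    page.open(url, function (status) {
        console.log(url + ' -> ' + status);
        page.close();
        if (--remaining === 0) {
            phantom.exit();
        }
    });
});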

How to stream audio from browser to WebRTC native C++ application

I have so far managed to run the following sample:
WebRTC native c++ to browser video streaming example
The sample shows how to stream video from a native C++ application (peerconnection_client.exe) to the browser (I am using Chrome). This works fine and I can see myself in the browser.
What I would like to do is to stream audio from the browser to the native application but I am not sure how. Can anyone give me some pointers please?
I'm trying to find a way to stream both video and audio from the browser to my native program. Here is my approach so far.
To stream video from the browser to your native program without a GUI, just follow the example here: https://chromium.googlesource.com/external/webrtc/+/refs/heads/master/examples/peerconnection/client/
Use AddOrUpdateSink to add your own VideoSinkInterface, and you will receive the frame data in the callback void OnFrame(const cricket::VideoFrame& frame). Instead of rendering the frame to a GUI as the example does, you can do whatever you want with it.
To stream audio from the browser to your native program without a real audio device, you can use a fake audio device:
Set the variable rtc_use_dummy_audio_file_devices to true in the file https://chromium.googlesource.com/external/webrtc/+/master/webrtc/build/webrtc.gni
Invoke the global static function to specify the file name: webrtc::FileAudioDeviceFactory::SetFilenamesToUse("", "file_to_save_audio");
Patch file_audio_device.cc with the diff below (as I write this answer, FileAudioDevice has some issues which may already be fixed).
Recompile your program, touch file_to_save_audio, and you will see PCM data in file_to_save_audio after the WebRTC connection is established.
Patch:
diff --git a/webrtc/modules/audio_device/dummy/file_audio_device.cc b/webrtc/modules/audio_device/dummy/file_audio_device.cc
index 8b3fa5e..2717cda 100644
--- a/webrtc/modules/audio_device/dummy/file_audio_device.cc
+++ b/webrtc/modules/audio_device/dummy/file_audio_device.cc
@@ -35,6 +35,7 @@ FileAudioDevice::FileAudioDevice(const int32_t id,
_recordingBufferSizeIn10MS(0),
_recordingFramesIn10MS(0),
_playoutFramesIn10MS(0),
+ _initialized(false),
_playing(false),
_recording(false),
_lastCallPlayoutMillis(0),
@@ -135,12 +136,13 @@ int32_t FileAudioDevice::InitPlayout() {
// Update webrtc audio buffer with the selected parameters
_ptrAudioBuffer->SetPlayoutSampleRate(kPlayoutFixedSampleRate);
_ptrAudioBuffer->SetPlayoutChannels(kPlayoutNumChannels);
+ _initialized = true;
}
return 0;
}
bool FileAudioDevice::PlayoutIsInitialized() const {
- return true;
+ return _initialized;
}
int32_t FileAudioDevice::RecordingIsAvailable(bool& available) {
@@ -236,7 +238,7 @@ int32_t FileAudioDevice::StopPlayout() {
}
bool FileAudioDevice::Playing() const {
- return true;
+ return _playing;
}
int32_t FileAudioDevice::StartRecording() {
diff --git a/webrtc/modules/audio_device/dummy/file_audio_device.h b/webrtc/modules/audio_device/dummy/file_audio_device.h
index a69b47e..3f3c841 100644
--- a/webrtc/modules/audio_device/dummy/file_audio_device.h
+++ b/webrtc/modules/audio_device/dummy/file_audio_device.h
@@ -185,6 +185,7 @@ class FileAudioDevice : public AudioDeviceGeneric {
std::unique_ptr<rtc::PlatformThread> _ptrThreadRec;
std::unique_ptr<rtc::PlatformThread> _ptrThreadPlay;
+ bool _initialized;
bool _playing;
bool _recording;
uint64_t _lastCallPlayoutMillis;
I know this is an old question, but I struggled to find a solution myself recently, so I thought sharing would be appreciated.
There is a more or less simple way to get an example running that streams from the browser to native code. You need the WebRTC source: http://www.webrtc.org/native-code/development
The two tools you need are the peerconnection server and client. Both can be found in the folder talk/example/peerconnection
To get it working you need to patch the peerconnection client to enable DTLS. Apply the patch provided at https://code.google.com/p/webrtc/issues/detail?id=3872 and rebuild the client. Now you are set up on the native side!
For the browser I recommend the peer2peer example from https://github.com/GoogleChrome/webrtc. After starting the peerconnection_server and connecting the peerconnection_client, try to connect with the peer2peer example.
Maybe a connection constraint is necessary:
{
"DtlsSrtpKeyAgreement": true
}
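If you build the browser side yourself instead of using the peer2peer page, that constraint goes into the (legacy) optional-constraints argument of the peer connection constructor, roughly like this (a sketch using the prefixed API of that era; the STUN server is a placeholder):
// Enable DTLS-SRTP via the legacy optional constraints when creating the
// peer connection (modern browsers have DTLS on by default).
var config = { iceServers: [{ url: 'stun:stun.l.google.com:19302' }] };
var constraints = { optional: [{ DtlsSrtpKeyAgreement: true }] };
var pc = new webkitRTCPeerConnection(config, constraints);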
You could use the following example, which implements a desktop client for appRTC:
https://github.com/TemasysCommunications/appRTCDesk
This completes an interop with the web client, Android client and iOS client provided by the open-source implementation at webrtc.org, giving you a full suite of clients to work with their free server. peerconnection_{client|server} is an old example from the libjingle days (pre-WebRTC) and does not interop with anything else.
