InDesign CS6 Socket returns empty - javascript

I have this code (InDesign CS6), and it's not working as expected. I'm using Mac OS, and I need the code to be compatible with both Windows and Mac.
I'm trying to get text/JSON over my localhost, but the socket returns an empty string:
function getData(host, path) {
    var reply = '';
    var conn = new Socket();
    // HTTP headers must end with a blank line using CRLF ("\r\n\r\n");
    // the original "\r\n" + "\n" terminator is malformed, and without
    // "Connection: close" an HTTP/1.1 server may keep the socket open,
    // so read() can come back empty.
    var request = "GET " + path + " HTTP/1.1\r\n" +
                  "Host: " + host + "\r\n" +
                  "Connection: close\r\n" +
                  "\r\n";
    if (conn.open(host)) {
        conn.write(request);
        reply = conn.read(999999);
        conn.close();
    }
    return reply;
}
var host = 'localhost:80';
var path = '/test/test/json.php';
var test = getData(host, path);
alert(typeof(test) + ' Length:' + test.length);
Edit: I finally managed to find out what was causing the problem. I created a VMware virtual machine and ran the script there, and it works. I'm not sure why it doesn't work on my machine. I downloaded Wireshark and saw InDesign send the request, but something blocks the request from reaching the server. I will update if I'm able to detect what is causing the block.

When it comes to Socket, I guess the simplest is to take advantage of that script written by Rorohiko:
https://rorohiko.blogspot.fr/2013/01/geturlsjsx.html
Or have a try with the IdExtenso library:
https://github.com/indiscripts/IdExtenso
I find those convenient as they deal with the inner socket mechanisms for you.

You do not need to use a socket just to get JSON from your server.
Instead, refer to the XMLHttpRequest documentation, or use a library such as jQuery, which greatly simplifies making Ajax calls for JSON.
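For what it's worth, a minimal sketch of such a request, assuming an environment that actually provides XMLHttpRequest (a browser or an HTML-based extension panel; plain ExtendScript does not ship it); the URL mirrors the one from the question:
var xhr = new XMLHttpRequest();
xhr.open('GET', 'http://localhost/test/test/json.php', true);
xhr.onreadystatechange = function () {
    if (xhr.readyState === XMLHttpRequest.DONE && xhr.status === 200) {
        // Parse the JSON body once the full response has arrived
        var data = JSON.parse(xhr.responseText);
        alert('Length: ' + xhr.responseText.length);
    }
};
xhr.send();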


XHR POST to Philips Hue Bridge

I'm working on a bridge-like solution to communicate from an HbbTV application with some Philips Hue lights (more precisely, with the gateway hardware).
As the process moved forward and the system was working, I'm now at the point where I use a Firefox plugin that simulates a TV with HbbTV. To do so, I start an Apache server via XAMPP, which hosts the files that are loaded into Firefox.
Since I did that, I can't send any POST requests to the Philips gateway, which is expected due to the same-origin policy. I have no access to the settings on the Philips Hue, so my workaround has to be client-side only.
My current attempt looks like this:
// Note: "that" must be a reference to "this" captured outside the
// callback; it was not defined in the original snippet.
var that = this;
var stringState = "http://" + this.Ip + "/api/" + this.UserId + "/lights/" + this.LightId;
var httpxml = new XMLHttpRequest();
var valueRequest;
console.log("in GetState:" + this.LightId);
httpxml.onreadystatechange = function() {
    if (httpxml.readyState == XMLHttpRequest.DONE) {
        valueRequest = JSON.parse(httpxml.responseText);
        console.log(valueRequest);
        console.log(valueRequest.state.on);
        that.switchState(valueRequest.state.on);
    }
};
httpxml.open('GET', stringState, true);
httpxml.withCredentials = true;
httpxml.setRequestHeader("Content-Type", "application/json");
httpxml.send();
I'm pretty new to developing in JavaScript and the web. I hope someone can lead me down the right road with some advice and maybe a clear example.
Best regards
Adrian
One of the P-s in XAMPP stands for PHP. So a workaround you can use is to host a PHP page right next to your HTML one (then there will not be any issues with CORS) and let it do the job.
Something like
<?php
// Read the target parameters from the query string
$ip = $_REQUEST['ip'];
$user = $_REQUEST['user'];
$light = $_REQUEST['light'];
// Forward the request to the Hue gateway; curl_exec() echoes the response
// body directly because CURLOPT_RETURNTRANSFER is not set
$ch = curl_init("http://" . $ip . "/api/" . $user . "/lights/" . $light);
curl_exec($ch);
curl_close($ch);
This is far from anything safe and nice, but it may help you get started. Some trivial clues: variables start with $, and . is the string-concatenation operator. $_REQUEST is an array which receives the URL parameters, which you should supply in your modified request (where xy.php is the filename of the PHP page):
var stringState = "xy.php?ip=" + this.Ip + "&user=" + this.UserId + "&light=" + this.LightId;
curl is a utility for issuing web requests (you can find it in your XAMPP folders, xampp\apache\curl.exe), and it has bindings for PHP: https://curl.haxx.se/. By default it returns whatever your controller provides, so the JSON should pass through. Content-Type may or may not be an issue; if it does not work, you can try putting a header("Content-Type: application/json"); right before the curl_exec line.
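Putting the pieces together, the client-side call then targets your own origin; a sketch, with xy.php being the hypothetical proxy page from above:
var stringState = "xy.php?ip=" + this.Ip + "&user=" + this.UserId + "&light=" + this.LightId;
var httpxml = new XMLHttpRequest();
httpxml.onreadystatechange = function () {
    if (httpxml.readyState === XMLHttpRequest.DONE) {
        // The PHP page passes the gateway's JSON straight through
        var valueRequest = JSON.parse(httpxml.responseText);
        console.log(valueRequest.state.on);
    }
};
// Same-origin now, so no CORS preflight and no custom headers needed
httpxml.open('GET', stringState, true);
httpxml.send();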

Running an Executable file from an ASP.NET web application

I am trying to create a web application that can read certain files (logs) provided by the users, and then use Microsoft's LogParser 2.2 exe to parse the logs and provide the requested output.
The idea I have is to run the local LogParser.exe present on the user's system and then use the generated output to build my output.
I don't know if this approach is correct. However, I am trying to do just that, and somewhere my code is not being executed correctly, and I am not able to find any output or error.
My code segment is as follows:
protected void Button2_Click(object sender, EventArgs e)
{
    try
    {
        // Note: the verbatim string prefix in C# is @, not #
        string fileName = @"C:\Program Files (x86)\Log Parser 2.2\LOGPARSER.exe";
        string filename = "LogParser";
        string input = " -i:IISW3C ";
        // "cs-ur-stem" was a typo; the IISW3C field is "cs-uri-stem"
        string query = " Select top 10 cs-uri-stem, count(cs-uri-stem) from " + TextBox1.Text + " group by cs-uri-stem order by count(cs-uri-stem)";
        string output = " -o:DATAGRID ";
        string argument = filename + input + query + output;
        ProcessStartInfo PSI = new ProcessStartInfo(fileName)
        {
            UseShellExecute = false,
            Arguments = argument,
            RedirectStandardInput = true,
            RedirectStandardOutput = true,
            CreateNoWindow = false
        };
        // Process.Start(PSI) already launches the process; the original
        // extra LogParser.Start() call was redundant
        Process LogParser = Process.Start(PSI);
    }
    catch (Exception Prc)
    {
        MessageBox.Show(Prc.Message);
    }
}
I might be doing something wrong, but can someone point me in the right direction? Could a JavaScript ActiveX control be the way forward?
All help is appreciated.
(I am making this as an internal application for my organisation, and it is assumed that LogParser will be present on the computer where this web application is used.)
Thanks
Ravi
Add a reference to Interop.MSUtil in your project and then use the COM API as described in the help file. The following using statements should allow you to interact with LogParser through your code:
using LogQuery = Interop.MSUtil.LogQueryClass;
using FileLogInputFormat = Interop.MSUtil.COMTextLineInputContextClass;
Then you can do something like:
var inputFormat = new FileLogInputFormat();
// Instantiate the LogQuery object
LogQuery oLogQuery = new LogQuery();
var results = oLogQuery.Execute(yourQuery, inputFormat);
You have access to a bunch of predefined input formats and output formats (like IIS and W3C), so you can pick the one that best suits your needs. Also, you will need to run regsvr32 on LogParser.dll on the machine you are executing on if you have not installed LogParser. The doc is actually pretty good to get you started.
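As an aside, since the question mentioned JavaScript ActiveX: the same COM API is scriptable through the MSUtil ProgIDs. A hedged JScript (WSH) sketch, with a placeholder log path and an assumed IISW3C input format:
// Minimal JScript sketch using the LogParser 2.2 COM objects
var logQuery = new ActiveXObject("MSUtil.LogQuery");
var inputFormat = new ActiveXObject("MSUtil.LogQuery.IISW3CInputFormat");
var query = "SELECT TOP 10 cs-uri-stem, COUNT(*) AS hits " +
            "FROM C:\\inetpub\\logs\\LogFiles\\W3SVC1\\*.log " +
            "GROUP BY cs-uri-stem ORDER BY hits DESC";
var recordSet = logQuery.Execute(query, inputFormat);
while (!recordSet.atEnd()) {
    var record = recordSet.getRecord();
    WScript.Echo(record.getValue("cs-uri-stem") + " : " + record.getValue("hits"));
    recordSet.moveNext();
}
recordSet.close();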

How to make these links to download files when clicked in node.js?

I'm new to Node and I'm really stuck.
I'm trying to make my web server look up the files in the current directory and display them in the browser as links, so I could download them.
However, the list of files just gets appended on every request and displayed over and over. I can't use any frameworks or external components, and I have been stuck on this for two days. I really have done a lot of research and tried a lot of things, and I still can't get it working.
I'm going to add my last attempt's code below, and if anyone could help me with even a little bit of information, it would be most appreciated. Thanks!
var http = require("http");
var fs = require("fs");
var currentServerDir = ".";
var port = 8080;
var content = "";
var server = http.createServer(handlerRequest).listen(8080);
function handlerRequest(request, response) {
fs.readdir(currentServerDir, function getFiles(error, items) {
items.forEach(function getItems(item) {
content += "<br>" + item + "<br>";
});
});
response.writeHead(200, {
'Content-Type': 'text/html'
});
response.write(content);
response.end();
}
EDIT:
I followed Node.js Generate html and borrowed some code from there. Now I can click on a file, but instead of downloading or viewing it, the page just says "undefined".
var http = require('http');
var content;
function getFiles()
{
    fs.readdir(currentServerDir, function getFiles(error, items)
    {
        items.forEach(function getItems(item)
        {
            content += "<br>" + item + "<br>";
        });
    });
}
http.createServer(function (req, res) {
    var html = buildHtml(req);
    res.writeHead(200, {
        'Content-Type': 'text/html',
        'Content-Length': html.length,
        'Expires': new Date().toUTCString()
    });
    res.end(html);
}).listen(8080);
function buildHtml(req) {
    var header = '';
    var body = content;
    return '<!DOCTYPE html>'
        + '<html><header>' + header + '</header><body>' + body + '</body></html>';
}
There are a few issues in your code.
You never call getFiles(), so content will never be filled.
Even if you did call getFiles(), content would still be empty in the end, because fs.readdir() is an async function and will not have filled content before it is used to build your HTML page.
You handle every request to your server the same way, so you will not be able to download any files, because you will always just display your page.
You can fix the first two easily by using the gist AlexS posted as a comment on your question. The third one will require a bit more setup, but can be made easy if you use Express; a rough sketch of all three fixes follows below.
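Since the question rules out external frameworks, here is a minimal dependency-free sketch of all three fixes (assumptions: file names are URL-safe, and there is no error handling or path sanitizing):
var http = require('http');
var fs = require('fs');
var path = require('path');
var currentServerDir = '.';

http.createServer(function (req, res) {
    if (req.url === '/') {
        fs.readdir(currentServerDir, function (error, items) {
            // Build the listing inside the callback, so the response
            // is only written once the file names are actually known
            var body = items.map(function (item) {
                return '<a href="/' + item + '" download>' + item + '</a><br>';
            }).join('');
            res.writeHead(200, { 'Content-Type': 'text/html' });
            res.end(body);
        });
    } else {
        // Every other URL is treated as a download of that file
        var file = path.join(currentServerDir, req.url.slice(1));
        res.writeHead(200, { 'Content-Type': 'application/octet-stream' });
        fs.createReadStream(file).pipe(res);
    }
}).listen(8080);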
You could add the download attribute to your HTML link:
items.forEach(function getItems(item)
{
    content += '<br><a href="' + item + '" download>' + item + '</a><br>';
});
MDN reference
If you don't care about browser support, you can use the download attribute on the a tag.
It looks like this (the href is a placeholder; the original example link was lost in extraction):
<a href="somefile.txt" download>DL me!</a>
It is supported by Firefox 14+ and Chrome 20+.
But if you want a general solution, you can zip your files before sending them; as far as I know, .zip files are always downloaded when clicked.
You can use node-zip or archiver to zip your files.
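For instance, a minimal sketch using archiver (assumptions: archiver is installed via npm, and the file name is a placeholder):
var http = require('http');
var archiver = require('archiver');

http.createServer(function (req, res) {
    // Send the zip as an attachment so the browser downloads it
    res.writeHead(200, {
        'Content-Type': 'application/zip',
        'Content-Disposition': 'attachment; filename="files.zip"'
    });
    var archive = archiver('zip');
    archive.pipe(res); // stream the zip straight into the response
    archive.file('somefile.txt', { name: 'somefile.txt' });
    archive.finalize();
}).listen(8080);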

How to get all of the cookies dropped in a session as a txt file?

I'm working on a digital art project that involves gathering cookies from a set of websites that I visit. I'm dabbling in writing some code to help me with this, but overall I'm just looking for the easiest/fastest way to gather all of the contents of the cookies dropped in a single visit into a text file for re-use later.
Right now I'm using this script in a JavaScript bookmarklet, which replaces the page I'm on with the contents of the cookies in an array (I'm later putting this array into a Python script I wrote).
The contents of the bookmarklet are below, but the problem right now is that it only returns the contents of the cookies from a single domain.
So, for example, if I run this script on the NYTimes.com homepage, I get approximately 48 cookies dropped by the domain. But if I look in Chrome, I see that all of the third-party tracking scripts have hundreds of cookies. How do I gather them all, not just the NYTimes.com ones?
This is the JavaScript code I'm running via a bookmarklet right now:
function get_cookies_array() {
    var cookies = {};
    if (document.cookie && document.cookie != '') {
        var split = document.cookie.split(';');
        for (var i = 0; i < split.length; i++) {
            // Note: values that themselves contain "=" get truncated here
            var name_value = split[i].split("=");
            name_value[0] = name_value[0].replace(/^ /, '');
            cookies[decodeURIComponent(name_value[0])] = decodeURIComponent(name_value[1]);
        }
    }
    return cookies;
}
function quotationsanitize(cookie) {
    if (cookie.indexOf('"') === -1) {
        return cookie;
    } else {
        alert("found a quotation!");
        return encodeURIComponent(cookie);
    }
}
function sanitize(cookie) {
    if (cookie.indexOf(',') === -1) {
        return quotationsanitize(cookie);
    } else {
        alert("found a comma!");
        return quotationsanitize(encodeURIComponent(cookie));
    }
}
function appendCookies() {
    $("body").empty();
    var cookies = get_cookies_array();
    $("body").append("[");
    for (var name in cookies) {
        //$("body").append(name + " : " + cookies[name] + "<br />");
        var cookieinfo = sanitize(cookies[name]);
        $("body").append('"' + cookieinfo + '",<br />');
    }
    $("body").append("]");
}
// Load jQuery, then give it 500 ms to initialize before using it
var js = document.createElement('script');
js.src = "https://ajax.googleapis.com/ajax/libs/jquery/2.1.3/jquery.min.js";
document.head.appendChild(js);
var jqueryTimeout = window.setTimeout(appendCookies, 500);
I'm removing " and , from the output because I'm putting this data into an array in Python by copying and pasting it. I admit that it's a hack. If anyone has any better ideas I'm all ears!
I'd write a simple little HTTP proxy, then set your browser to use the proxy, and have it record all the cookies as they pass through.
There's a question about writing a simple proxy here: seriously simple python HTTP proxy?
That might get you started.
You'd need to extend it to read the headers and extract the cookies, but that's relatively easy, and if you're happy in Python, you'll find libraries that do most of what you want already. You would want to record the Referer header too, so you know which cookies came from which page request, but you could then record an entire browsing session quite simply.
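The linked question is Python-centric, but the same idea fits in a few lines of Node as well; a minimal sketch of a plain-HTTP forward proxy that logs Cookie and Set-Cookie headers (assumption: HTTP only; HTTPS would additionally require CONNECT handling, which this skips):
var http = require('http');
var url = require('url');

http.createServer(function (clientReq, clientRes) {
    // In forward-proxy mode the browser sends the absolute URL
    var target = url.parse(clientReq.url);

    // Log cookies the browser sends out
    if (clientReq.headers.cookie) {
        console.log('Cookie ->', target.host, clientReq.headers.cookie);
    }

    var proxyReq = http.request({
        hostname: target.hostname,
        port: target.port || 80,
        path: target.path,
        method: clientReq.method,
        headers: clientReq.headers
    }, function (proxyRes) {
        // Log cookies the server drops
        if (proxyRes.headers['set-cookie']) {
            console.log('Set-Cookie <-', target.host, proxyRes.headers['set-cookie']);
        }
        clientRes.writeHead(proxyRes.statusCode, proxyRes.headers);
        proxyRes.pipe(clientRes);
    });
    clientReq.pipe(proxyReq);
}).listen(8888);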

WIX: Where and how should my CustomAction create and read a temporary file?

I have a script CustomAction. (Yes, I know all about the opinions that say don't use script CustomActions. I have a different opinion.)
I'd like to run a command and capture its output. I can do this using the WScript.Shell COM object, then invoking shell.Exec(). But this flashes a visible console window for the executed command.
To avoid that, I understand I can use the shell.Run() call and specify "hidden" for the window appearance. But .Run() doesn't give me access to the stdout of the executed process, so that means I'd need to create a temporary file, redirect the exe output to the temp file, and then read that temp file in script.
Some questions:
Is this going to work?
How do I generate a name for the temporary file? In .NET I could use a static method in the System.IO namespace, but I am using script here. I need to ensure that the user has read/write access, and also that no anti-virus program is going to puke on this.
Better ideas? I am trying very hard to avoid C/C++.
I could avoid all this if there were a way to query websites in IIS7 from script, without resorting to the IIS6 compatibility pack, without using .NET (Microsoft.Web.Administration.ServerManager), and without exec'ing a process (appcmd list sites).
I already asked a separate question on that topic; any suggestions on that would also be appreciated.
Answering my own question...
Yes, this is going to work.
Use the Scripting.FileSystemObject from JavaScript. There's a GetTempName() method that produces a file name suitable for temporary use, and a GetSpecialFolder() method that gets the location of the temp folder. There's even a BuildPath() method to combine them.
So far I don't have any better ideas.
Here's the code I used:
function GetWebSites_IIS7_B()
{
    var ParseOneLine = function(oneLine) {
        ...regex parsing of output...
    };

    LogMessage("GetWebSites_IIS7_B() ENTER");

    var shell = new ActiveXObject("WScript.Shell");
    var fso = new ActiveXObject("Scripting.FileSystemObject");
    // FSO constants: SpecialFolders.TemporaryFolder == 2, SpecialFolders.WindowsFolder == 0
    var tmpdir = fso.GetSpecialFolder(SpecialFolders.TemporaryFolder);
    var tmpFileName = fso.BuildPath(tmpdir, fso.GetTempName());
    var windir = fso.GetSpecialFolder(SpecialFolders.WindowsFolder);
    var appcmd = fso.BuildPath(windir, "system32\\inetsrv\\appcmd.exe") + " list sites";

    // use cmd.exe to redirect the output
    // WindowStyle.Hidden == 0; the final "true" waits for the command to finish
    var rc = shell.Run("%comspec% /c " + appcmd + " > " + tmpFileName, WindowStyle.Hidden, true);

    // OpenMode.ForReading == 1
    var ts = fso.OpenTextFile(tmpFileName, OpenMode.ForReading);
    var sites = [];
    // Read from the file and parse the results.
    while (!ts.AtEndOfStream) {
        var oneLine = ts.ReadLine();
        var line = ParseOneLine(oneLine);
        LogMessage(" site: " + line.name);
        sites.push(line);
    }
    ts.Close();
    fso.DeleteFile(tmpFileName);
    return sites;
}
