WebTorrent: creating a server by adding multiple torrent magnets - javascript

I am trying to create a Node.js server that loads multiple torrent magnets and then serves each .mp4 as a static endpoint, similar to what the demo does for a single torrent.
const WebTorrent = require('webtorrent')
const util = require('util')

var client = new WebTorrent()
var torrentId = '#'

client.add(torrentId, function (torrent) {
  // Create HTTP server for this torrent
  var server = torrent.createServer()
  // console.log(util.inspect(torrent.createServer(), {showHidden: true, depth: null})) // To see what's going on
  // Visit http://localhost:<port>/ to see a list of files
  // Access individual files at http://localhost:<port>/<index> where index is the index
  // in the torrent.files array
  server.listen(8000) // start the server listening on a port
})
My end goal is eventually to have a database of magnet URLs and have my server create a direct endpoint to each .mp4 file. The demo above works as the most basic reproducible example for a single magnet, but I would like to load multiple magnets and serve the endpoints like:
client.add(magnet.forEach(), function (torrent) {
  // Create server after multiple torrents loaded
})
I guess what I really need to know is how torrent.createServer() is able to build the static directory, or whether there is a way to load multiple magnets?
Here is what it creates for a single magnet URL.
I know torrent.createServer() makes a simple HTTP server; I just do not understand how it indexes and serves the .mp4s directly without downloading them before the server starts.
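One way to get there (a sketch, not an official WebTorrent API; `registerTorrent` and `resolveFile` are my own names): add every magnet to the same client, keep a registry keyed by info hash, and route requests in a single `http` server that pipes `file.createReadStream()` into the response. The routing part is plain string/Map logic:

```javascript
// Registry of loaded torrents, keyed by info hash.
const torrents = new Map();

// Called from WebTorrent's ready callback, e.g.:
//   magnetUris.forEach(uri => client.add(uri, t => registerTorrent(torrents, t)))
function registerTorrent(registry, torrent) {
  registry.set(torrent.infoHash, torrent);
}

// Parse "/<infoHash>/<fileIndex>" into its parts; returns null if malformed.
// WebTorrent info hashes are 40 lowercase hex characters.
function parseRoute(urlPath) {
  const m = /^\/([0-9a-f]{40})\/(\d+)$/.exec(urlPath);
  if (!m) return null;
  return { infoHash: m[1], fileIndex: Number(m[2]) };
}

// Look up the torrent file a request addresses; returns null when unknown.
function resolveFile(registry, urlPath) {
  const route = parseRoute(urlPath);
  if (!route) return null;
  const torrent = registry.get(route.infoHash);
  if (!torrent || !torrent.files[route.fileIndex]) return null;
  // In the real server you would then do: file.createReadStream().pipe(res)
  return torrent.files[route.fileIndex];
}
```

With this in place, a single `http.createServer` can call `resolveFile` per request and pipe `file.createReadStream()` into the response. WebTorrent fetches pieces on demand, which is also why `torrent.createServer()` can serve a file before the download completes.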

Related

How to list directory contents with a sas token for directory

In a React project I am trying to list all the files in blob storage using a SAS token created for the given directory. My understanding is I need to create a DataLakeFileSystemClient, but I only have a URL for the directory and a DataLakeDirectoryClient, and somehow need to create the DataLakeFileSystemClient from those.
The url passed is something along the lines of: https://myaccount.dfs.core.windows.net/mycontainer/mydirectory{sastoken}
I have found a way to do this, although I don't know if it's the best way.
To get from the directory client to a FileSystem client, I wrote a helper method:
const getFileSystemClient = (directoryClient: DataLakeDirectoryClient) => {
  const url = new URL(directoryClient.url);
  url.pathname = directoryClient.fileSystemName;
  return new DataLakeFileSystemClient(url.toString());
}
To list directories I use the following code:
const fsClient = getFileSystemClient(directoryClient);
for await (const path of fsClient.listPaths({ path: directoryClient.name })) {
  console.log(path.name);
}

Bulletproof way to prevent user specified file names from using relative path elements in a Node.JS app?

I am creating a Node.JS app that allows users to edit various documents. A sub-directory is created on the server for each user, using their user ID for the sub-directory name. I don't want to use a database at this time because I am creating a prototype on a tight deadline, so I'm using a file-based system for now to get things done quickly.
Note, users access the system from a web browser from one of my web pages.
When a user creates a document they specify the document name. Right now, on the server side, the document name is sanitized of any characters that are not supported by the operating system (Linux), but that's it. My concern is that a user might try to access a directory that doesn't belong to them by inserting relative path components into the document name, in an attempt to "walk up and down" the directory tree and break out of the sub-directory reserved for them.
I have read several horror stories of users figuring out clever ways to do this via exotic UTF-8 codes, etc., so I'm looking for a Node.JS code sample or library that has a function that eliminates all relative path elements from a file name in a thorough and robust way.
How can I, on the server side, make absolutely sure that a user created document name submitted in a POST from the browser is a primary file name and nothing else?
I'm using Express. I'm not sure if this is bulletproof, but it seems to do what I need (making sure that a file exists before processing the request):
const fs = require('fs');
const path = require('path');
function checkUserSuppliedFilenameExists(req, res) {
  if (!req.body.fileName) {
    return res.status(400).json({ message: "missing required parameters" });
  }
  /**
   * strip any potentially harmful stuff from the start of the string
   * will return the following:
   * "foo.txt" => "foo.txt"
   * "foo" => "foo"
   * "../../foo.txt" => "foo.txt"
   * "/etc/foo.txt" => "foo.txt"
   * undefined => TypeError
   * null => TypeError
   */
  let suppliedFilename = path.basename(req.body.fileName);
  // build the path to the file in the expected directory
  // using __dirname to build relatively from the script currently executing
  let filePath = path.resolve(__dirname, `../my-specified-directory/${suppliedFilename}`);
  // check if it exists
  if (!fs.existsSync(filePath)) {
    return res.status(400).json({ message: "file doesn't exist" });
  }
}

store and access image file paths when templating (from cloudinary or other service)

I’m using gulp and nunjucks to automate some basic email templating tasks.
I have a chain of tasks which can be triggered when an image is added to the images folder e.g.:
images compressed
new image name and dimensions logged to json file
json image data then used to populate template when template task is run
So far so good.
I want to be able to define a generic image file path for each template which will then concatenate to each image name (as stored in the json file). So something like:
<img src="{{data.path}}{{data.src}}" >
If I want to nominate a distinct folder to contain the images for each template generated, then Cloudinary requires a mandatory unique version component in the file path. So the image path can never be consistent throughout a template.
if your public ID includes folders (elements divided by '/'), the version component is mandatory (but you can make it shorter).
For example:
http://res.cloudinary.com/demo/image/upload/v1312461204/sample_email/hero_image.jpg
http://res.cloudinary.com/demo/image/upload/v1312461207/sample_email/footer_image.jpg
Same folder. Different path.
So it seems I would now need a script/task that logs and stores each distinct file path (with its unique ID generated by Cloudinary) every time an image is uploaded or updated, and then reruns the templating process to publish them.
This just seems like quite a convoluted process, so if there's an easier approach I'd love to know.
Otherwise, if that really is the required route, it would be great if someone could point me to an example of the kind of script that achieves something similar.
Presumably some hosting services will not have the mandatory unique key, which makes life easier. I have spent some time getting to know Cloudinary, and it's a free service with a lot of scope, so I'm reluctant to abandon ship, but I'm open to all suggestions.
Thanks
Note that the version component (e.g., v1312461204) isn't mandatory anymore for most use-cases. The URL can indeed work without it, e.g.:
http://res.cloudinary.com/demo/image/upload/sample_email/hero_image.jpg
Having said that, it is highly recommended to include the version component in the URL in cases where you'd like to update the image with a new one while keeping the exact same public ID. Otherwise, if you access the exact same URL, you might get a CDN-cached version of the image, which may be the old one.
Therefore, when you upload, you can take the version value from Cloudinary's upload response, store it in your DB, and the next time you update the image, also update the URL with the new version value.
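For illustration, given the fields Cloudinary's upload response returns (`version`, `public_id`, `format`), the versioned delivery URL can be assembled like this (a sketch; `versionedUrl` is my own helper and the `demo` cloud name is a placeholder):

```javascript
// Build a versioned delivery URL from a Cloudinary upload response, so a
// later re-upload with the same public_id busts the CDN cache immediately.
function versionedUrl(cloudName, uploadResponse) {
  return 'http://res.cloudinary.com/' + cloudName +
    '/image/upload/v' + uploadResponse.version +
    '/' + uploadResponse.public_id + '.' + uploadResponse.format;
}
```

Storing just `version` and `public_id` per image is enough to rebuild the URL at template time.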
Alternatively, you can also ask Cloudinary to invalidate the image while uploading. Note that while including the version component "busts" the cache immediately, invalidation may take a while to propagate through the CDN. For more information:
http://cloudinary.com/documentation/image_transformations#image_versions
This is the solution I came up with. It's based on adapting the generic script I use to upload images from a folder to Cloudinary; it now stores the updated file paths from Cloudinary and generates a JSON data file to publish the hosted src details to a template.
I'm sure it could be semantically better, so I welcome any revisions if someone stumbles on this, but it seems to do the job:
// points to the config file where we are defining file paths
var path = require('./gulp.path')();
// IMAGE HOSTING
var fs = require('fs'); // core Node module, needed for fs.writeFile below
var cloudinary = require('cloudinary').v2;
var uploads = {};
var dotenv = require('dotenv');
dotenv.load();
// Finds the images in a specific folder and returns an array
var read = require('fs-readdir-recursive');
// Set location of images
var imagesInFolder = read(path.images);
// The array that will be populated with image src data
var imgData = [];
(function uploadImages(){
  // Loop through all images in folder and upload
  for (var i = 0; i < imagesInFolder.length; i++) {
    cloudinary.uploader.upload(path.images + imagesInFolder[i], {folder: path.hosted_folder, use_filename: true, unique_filename: false, tags: 'basic_sample'}, function (err, image) {
      console.log();
      console.log("** Public Id");
      if (err) { console.warn(err); }
      console.log("* Same image, uploaded with a custom public_id");
      console.log("* " + image.public_id);
      // Generate the category title for each image. The category is defined within
      // the image name: it's the first part, i.e. anything prior to a hyphen.
      var title = image.public_id.substr(image.public_id.lastIndexOf('/') + 1).replace(/\.[^/.]+$/, "").replace(/-.*$/, "");
      console.log("* " + title);
      console.log("* " + image.url);
      // Add the updated src for each image to the output array
      imgData.push({
        [title]: {"src": image.url}
      });
      // Stringify data with no spacing so the .replace regex can easily remove the unwanted curly braces
      var imgDataJson = JSON.stringify(imgData, null, null);
      // Remove the unwanted [] that wraps the json imgData array
      imgDataJson = imgDataJson.substring(1, imgDataJson.length - 1);
      // Replace the unwanted "},{" braces with "," otherwise the output is not valid json
      imgDataJson = imgDataJson.replace(/(},{)/g, ',');
      var outputFilename = "images2-hosted.json";
      // output the hosted image path data to a json file
      // (A separate gulp task is then run to merge and update the new 'src' data into an existing image data json file)
      fs.writeFile(path.image_data_src + outputFilename, imgDataJson, function (err) {
        if (err) {
          console.log(err);
        } else {
          console.log("JSON saved to " + outputFilename);
        }
      });
    });
  }
})();
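As an aside, the stringify-then-strip-braces post-processing above can be avoided by accumulating a plain object keyed by title instead of an array of one-key objects; a sketch (`buildImgData` is my own name) reusing the same title-extraction logic:

```javascript
// Accumulate { title: { src: url } } entries directly in one object, so
// JSON.stringify produces valid JSON with no substring/replace surgery.
function buildImgData(images) {
  const out = {};
  for (const image of images) {
    // Title = file name without folder, extension, or anything after a hyphen.
    const title = image.public_id
      .substr(image.public_id.lastIndexOf('/') + 1)
      .replace(/\.[^/.]+$/, '')
      .replace(/-.*$/, '');
    out[title] = { src: image.url };
  }
  return out;
}
```

`JSON.stringify(buildImgData(uploadedImages))` then yields the same shape the string manipulation was producing, but is valid JSON by construction.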
A gulp task is then used to merge the newly generated JSON, overriding the existing JSON data file:
// COMPILE live image hosting data
var merge = require('gulp-merge-json');
gulp.task('imageData:comp', function () {
  // return the stream so gulp knows when the task has completed
  return gulp
    .src('src/data/images/*.json')
    .pipe(merge('src/data/images.json'))
    .pipe(gulp.dest('./'))
    .pipe(notify({ message: 'imageData:comp task complete' }));
});

How to create a file (.apk) from a URL in Jaggery?

I have an application store, and applications have their own URLs. I want to download APKs from those URLs to my Jaggery server. Although the code below (my first solution) creates myApp.apk successfully, it does not work properly.
First I tried the code below:
var url = "http://img.xxx.com/006/someApp.apk";
var data = get(url, {});
var file = new File("myApp.apk");
file.open("w");
file.write(data.data);
file.close();
When I print the data.data value, it looks like this:
I also tried:
var file = new File("http://img.xxx.com/006/someApp.apk");
file.saveAs("myApp.txt");
Can anyone help me?
.apk files are Android application files, and they are expected to start with PK, because they are actually zip archives!
They're not meant to be unzipped, although you can do it to see some of the application resources (but there are better ways for reverse engineering .apk files such as Apktool, if that's what you're looking for).
According to the Jaggery documentation, file.write writes the String representation of the object to the file. That's why you are getting an apk file which cannot be installed.
However, you can make it work using copyURLToFile from the Apache commons-io Java library, as follows, since Jaggery supports Java itself and all WSO2 products have commons-io on their classpath.
<%
var JFileUtils = Packages.org.apache.commons.io.FileUtils;
var JUrl = Packages.java.net.URL;
var JFile = Packages.java.io.File;
var url = new JUrl("http://img.xxx.com/006/someApp.apk");
JFileUtils.copyURLToFile(url, new JFile("myApp.apk"));
print("done");
%>
Your file will be stored in the $CARBON_HOME directory by default, unless you specify a relative or absolute path to the file.

Deconstruct/decode Websocket frames like Google Chrome does

I'm connecting to a website via a websocket connection (client to server). I know how to encode the data and write it to the server (using the net module in node.js), but when I read the data back I get odd characters in front of the important data. For example, where I expect:
// Data needed on the left and data I'm receiving from websocket on the right
'inited\r\n' -> '�inited\r\n'
'n:2\r\n' -> '�n:2\r\n'
This is how I am getting the data from the server:
Klass.prototype.connect = function () {
  // this.port is equal to 8080 and the exact server varies, but it's not that
  // important anyway since the problem is decoding the data properly.
  var that = this;
  var buffer = "";
  this.socket = net.createConnection(this.port, this.server);
  this.socket
    .on("connect", function () {
      that.sendHandshake(); // just sends a standard client-to-server handshake
    })
    .on("data", function (recv) {
      // .split('\r\n\r\n').join('\r\n') needed to separate the server handshake
      // from the data I am trying to parse
      buffer += recv.toString('utf-8').split('\r\n\r\n').join('\r\n');
      while (buffer) {
        var offset = buffer.indexOf('\r\n');
        if (offset < 0)
          return;
        var msg = buffer.slice(0, offset);
        // parseMsg(msg)
        buffer = buffer.slice(offset + 2); // skip past the '\r\n' delimiter
      }
    });
};
I am probably doing a lot of things improperly in the code above, but I'm not quite sure how to do it exactly, so that is the best I have for now.
The problem is I don't know how to remove the mystery/special characters. Sometimes there is only one, but other times there are several depending on the data; they are never after the important data I need to check.
When I use Google Chrome and view the data through Tools -> JavaScript Console -> Network tab and find the websocket stream I'm looking for, Chrome parses it correctly. I know it's possible since Chrome shows the correct frames; how do I deconstruct/decode the data so I can view the correct frames in the terminal?
I don't really need it in a particular language; as long as it works I should be able to port it, but I would prefer examples/answers in node.js since that is the language I am using to connect to the server.
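The "mystery" bytes are the WebSocket frame header defined by RFC 6455 (a FIN/opcode byte, then a mask bit plus payload length), which Chrome strips before displaying the payload. A minimal sketch of decoding one complete, unmasked server-to-client frame from a Buffer (`decodeFrame` is my own helper; a real implementation must also handle partial frames, fragmentation, and control frames):

```javascript
// Decode one complete WebSocket frame from a Buffer (RFC 6455 base framing).
// Assumes the buffer starts at a frame boundary and contains the whole frame.
function decodeFrame(buf) {
  const fin = (buf[0] & 0x80) !== 0;     // final-fragment flag
  const opcode = buf[0] & 0x0f;          // 1 = text, 2 = binary, 8 = close, ...
  const masked = (buf[1] & 0x80) !== 0;  // client-to-server frames are masked
  let len = buf[1] & 0x7f;
  let offset = 2;
  if (len === 126) {                     // 16-bit extended payload length
    len = buf.readUInt16BE(2);
    offset = 4;
  } else if (len === 127) {              // 64-bit extended payload length
    len = Number(buf.readBigUInt64BE(2));
    offset = 10;
  }
  let maskKey = null;
  if (masked) {
    maskKey = buf.subarray(offset, offset + 4);
    offset += 4;
  }
  const payload = Buffer.from(buf.subarray(offset, offset + len));
  if (masked) {
    // Unmask by XORing each payload byte with the 4-byte masking key.
    for (let i = 0; i < payload.length; i++) payload[i] ^= maskKey[i % 4];
  }
  return { fin, opcode, payload, totalLength: offset + len };
}
```

Instead of splitting the raw stream on '\r\n', accumulate received bytes in a Buffer and repeatedly call `decodeFrame`, consuming `totalLength` bytes per frame; the payload of each text frame (opcode 1) is then the clean 'inited\r\n' string without the leading header bytes.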
