Downloading a Torrent with Node.js

I was wondering if anyone had an example of how to download a torrent using Node.js. Essentially, I have an RSS feed of torrents that I iterate through to grab each torrent file's URL, and I'd then like to initiate a download of that torrent on the server.
I've parsed and looped through the RSS just fine; however, the few npm packages I've tried have either crashed or were simply unstable. If anyone has any suggestions, examples, anything at all, I would greatly appreciate it. Thanks.
router.get('/', function(req, res) {
  var options = {};
  parser.parseURL('rss feed here', options, function(err, articles) {
    var i = 0;
    var torrent;
    for (var title in articles.items) {
      console.log(articles.items[i]['url']);
      // download torrent here
      i++;
    }
  });
});

You can use node-torrent for this. First, install it:
npm install node-torrent
Then, to download a torrent:
var fs = require('fs');
var Client = require('node-torrent');

var client = new Client({logLevel: 'DEBUG'});
var torrent = client.addTorrent('a.torrent');

// when the torrent completes, move its files to another area
torrent.on('complete', function() {
  console.log('complete!');
  torrent.files.forEach(function(file) {
    var newPath = '/new/path/' + file.path;
    fs.rename(file.path, newPath, function(err) {
      if (err) { return console.log(err); }
    });
    // while still seeding, make sure file.path points to the right place
    file.path = newPath;
  });
});
Alternatively, for more control, you can run transmission-daemon and drive it over its RPC protocol. There's a Node module called transmission that does the job. Example:
var Transmission = require('transmission');

var transmission = new Transmission({
  port: 9091,
  host: '127.0.0.1'
});

transmission.addUrl('my.torrent', {
  "download-dir": "/home/torrents"
}, function(err, result) {
  if (err) {
    return console.log(err);
  }
  var id = result.id;
  console.log('Just added a new torrent.');
  console.log('Torrent ID: ' + id);
  // getTorrent is defined further along in the module's README example
  getTorrent(id);
});

If you are working with video torrents, you may be interested in Torrent Stream Server. It's a server that downloads and streams video at the same time, so you can watch the video without fully downloading it. It's based on the torrent-stream library.
Another interesting project is webtorrent. It's a nice torrent library that works both in Node.js and in the browser, and it has streaming support. In my experience its browser support isn't great yet, but it should fully work in Node.js.
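For reference, a minimal WebTorrent download sketch might look like this (the torrent id and download path are placeholders; a .torrent URL from your RSS feed works just as well as a magnet URI):
var WebTorrent = require('webtorrent');

var client = new WebTorrent();

// placeholder id: a magnet URI, an http(s) .torrent URL, or a local .torrent path
var torrentId = 'magnet:?xt=urn:btih:...';

client.add(torrentId, { path: '/tmp/downloads' }, function(torrent) {
  // 'done' fires once all pieces are downloaded and verified
  torrent.on('done', function() {
    console.log('download finished: ' + torrent.name);
  });
});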

Related

NodeJS sending an image (which may be modified) to the client with middleware

I want to mention that the image file changes continuously.
I'm using this Node.js middleware:
app.use("/image.jpg", express.static(__dirname + "/image.jpg"));
The problem is that Node serves image.jpg without ever indicating that the file has been modified.
When a button is pressed, this part runs:
var image = new Image();
image.onload = function() { /* rendering to canvas */ };
image.src = "/image.jpg";
Somehow there is a problem: the server's picture file gets modified, and the server then tells the client to draw the image, but the client renders the first image it loaded all over again, even though the image at that URL has changed.
I think the client is caching the image and, believing it to be unmodified, keeps reusing it.
Is there a way to draw the current image at the URL? Are there even better methods?
You can use the fetch() API to implement cache-busting without cluttering your client's browser cache with a bunch of /image.jpg?bust=... resources.
After deciding on a folder in your server that you want to allow static access to changing files (this is preferable to the pattern where you allowed static access to a single file), you can implement your real-time updates using fs.watch() like so:
Node app.js (with express 3/4):
const fs = require('fs')
const path = require('path')
const express = require('express')
const app = express()
const server = require('http').Server(app)
const io = require('socket.io')(server)

server.listen(process.env.PORT || 8080)

app.use('/', express.static(path.resolve(__dirname, './watched-directory')))

//...

io.on('connection', (socket) => {
  //...
});

fs.watch('watched-directory', {
  // don't want the watch process hanging if the server is closed
  persistent: false,
  // recursive watching is supported on Windows and macOS
  recursive: true,
}, (eventType, filename) => {
  if (eventType === 'change') {
    io.emit('filechange', filename)
  }
})
Browser index.html:
<script src="/socket.io/socket.io.js"></script>
<script>
  let socket = io.connect()
  socket.on('filechange', async (filename) => {
    console.log(filename)
    let response = await fetch(filename, { cache: 'no-store' })
    let blob = await response.blob()
    let url = URL.createObjectURL(blob)
    // modified from your question
    let image = new Image()
    image.addEventListener('load', () => {
      // your canvas rendering code here
    })
    image.src = url
  })
</script>
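A server-side complement (not part of the original answer) is to disable HTTP caching for the watched directory, so that even a plain <img src="/image.jpg"> request bypasses the browser cache; express.static accepts a setHeaders hook for this:
// variant of the static middleware above: mark everything in the
// watched directory as uncacheable
app.use('/', express.static(path.resolve(__dirname, './watched-directory'), {
  setHeaders: (res) => {
    res.setHeader('Cache-Control', 'no-store')
  }
}))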

Split video file to stream from browser

I split a video file into two parts using the split-file module.
The parts have no file extensions; they are named gan-1 and gan-2.
I am hosting these two files on my own server:
http://bilketay.com/download/gan-1
http://bilketay.com/download/gan-2
I'm trying to stream these two files to the browser as if they were a single video file, like this:
// Dependencies
var express = require('express');
var app = express();
var CombinedStream = require('combined-stream2');
var request = require('request');

// Some routes
app.get('/', function(req, res) {
  // Set header
  res.set({
    "Content-Type": 'video/mp4'
  });
  res.writeHead(200);
  var combinedStream = CombinedStream.create();
  // This function requests gan-1 first, then gan-2
  var recursive = function(param) {
    var req = request('http://bilketay.com/download/' + param);
    // First add gan-1, then gan-2
    combinedStream.append(req);
    req.on('end', function() {
      if (param != 'gan-2') {
        recursive('gan-2');
      }
    });
  };
  // Start recursion
  recursive('gan-1');
  // Start streaming to the browser
  // But it does not start until it is completely loaded :(
  combinedStream.pipe(res);
});

// Listen on port 3000
app.listen(3000);
I wrote this code with limited Node.js knowledge. It looks fine to me, but Google Chrome seems to think otherwise. :)
The problem is that the two parts do not stream until they have been fully loaded; the stream only starts after both parts have been downloaded. What I want is for the stream to start right away. A short note: the gan-1 and gan-2 files work locally, but not on the remote server. What am I doing wrong?
I used the combined-stream2 module to merge the parts.
This module simplifies streaming by appending the two files, but since I can't get the result I want, I may have used it incorrectly.
In short, I want to stream two different files to the browser, one after the other.
I need the help of ninjas. Thank you.
(Screenshot describing the problem: stream.gif)
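Not from the original thread, but as a sketch of one way to get playback started right away: skip combined-stream2 and pipe each part into the response in order, ending the response only after the last part. (Note that without Content-Length and Range support the browser still cannot seek, which can also affect how Chrome buffers the video.)
var express = require('express');
var request = require('request');
var app = express();

app.get('/', function(req, res) {
  res.set('Content-Type', 'video/mp4');
  var parts = ['gan-1', 'gan-2'];

  // Pipe each part into res in order; end res only after the last one.
  function streamPart(index) {
    if (index >= parts.length) {
      return res.end();
    }
    var stream = request('http://bilketay.com/download/' + parts[index]);
    stream.pipe(res, { end: false });
    stream.on('end', function() {
      streamPart(index + 1);
    });
    stream.on('error', function() {
      res.end();
    });
  }

  streamPart(0);
});

app.listen(3000);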

"Echoing" an image in Node.js

I have a fully functioning PHP application that I am trying to make a Node.js version of. It deals with serving image tiles. When it's ready to display the image it does:
// Stream out the image
echo self::$tile;
How would I do something similar in Node.js? I understand this is a broad question, but I think my biggest issue is that I don't understand how PHP "echoes" an image.
Details:
I'm using AWS to get the image. The AWS call returns a Buffer. For now, I have left the image as a Buffer in the JavaScript.
The site populates a map with tiled images, so there are multiple calls with the image placed at a particular location on the page. I am using express to handle the requests.
app.get(/^\/omb\/1.0.0\/(.+)\/(.+)\/(.+)\/(.+)\.[a-zA-Z]*$/, function(req, res) {
  var MosaicStreamer = require('./models/MosaicStreamer.js');
  var ms = new MosaicStreamer;
  var configs = {library: req.params[0], zoom: req.params[1], column: req.params[2], row: req.params[3]};
  ms.handleTile(configs);
});
handleTile grabs the image and ultimately brings me to where I am now. The image is grabbed using the following:
var aws = new AWS.S3();
var params = {
  Bucket: this.bucket,
  Key: this.tileDirectory + this.filepath,
  Range: 'bytes=' + (this.toffset + 4) + "-" + (this.tsize + this.toffset + 4)
};
var ts = this;
aws.getObject(params, function(err, data) {
  if (ts.tile == null) {
    ts.tile = data.Body; // S3 get object
  }
});
I think what you want to do is take a given URL that closely mirrors the naming convention of folders/files in your S3 bucket. Assuming you've established a client connection to S3, you can use the readFile method. The callback's second argument is an imageStream, which you can pipe into the response. Once the stream from S3 has ended, it automatically ends the res to the client, outputting the image directly to the client (as you intend).
Some pseudocode:
app.get(/^\/omb\/1.0.0\/(.+)\/(.+)\/(.+)\/(.+)\.[a-zA-Z]*$/, function(req, res) {
  var MosaicStreamer = require('./models/MosaicStreamer.js');
  var ms = new MosaicStreamer;
  var configs = {library: req.params[0], zoom: req.params[1], column: req.params[2], row: req.params[3]};
  // return the handleTile call, adding res as a 2nd argument to pass it through
  return ms.handleTile(configs, res);
});
Inside the handleTile function you can make the call to S3:
function handleTile(configs, res) {
  client.readFile('filename', function(error, imageStream) {
    imageStream.pipe(res);
  });
}
Now a request for an image like this:
<img src="/path/to/my/file/that/matches/regexp/expression"/>
will fetch that image from the S3 bucket and stream the resource back to the client directly.
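Note that client.readFile above assumes a knox-style S3 client. With the aws-sdk client from the question, a roughly equivalent sketch (bucket, key, and content type are placeholders) would be:
function handleTile(configs, res) {
  var aws = new AWS.S3();
  var params = {
    Bucket: 'my-bucket', // placeholder
    Key: 'path/to/tile.png' // placeholder; build this from configs
  };
  res.writeHead(200, {'Content-Type': 'image/png'});
  // createReadStream() streams the object body without buffering the whole tile
  aws.getObject(params).createReadStream().pipe(res);
}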
To successfully render an image, you have to implement three steps:
1) Retrieve the image data, either as a Buffer (for instance read via fs.readFile) or as a stream (for instance via fs.createReadStream).
2) Set the appropriate headers in the web request handler with the arguments (req, res); something like
res.writeHead(200, {'Content-Type': 'image/png'});
3) Write the file. If you have the file in a Buffer, use
res.end(buf, 'binary');
If you have a stream, use
read_stream.pipe(res)
The whole code may look like (assuming you want to serve the file image.jpg from the current directory):
'use strict';

var fs = require('fs');
var http = require('http');

http.createServer(function(req, res) {
  fs.readFile('image.jpg', function(err, buf) {
    if (err) {
      res.writeHead(500);
      res.end('Cannot access file.');
      return;
    }
    res.writeHead(200, {'Content-Type': 'image/jpeg'});
    res.end(buf, 'binary');
  });
}).listen(8002, '');
Using a stream, a very simple version looks like this (beware: no error handling; with error handling it gets a bit more complex, depending on how you want to handle errors that occur while the file is being read):
'use strict';

var fs = require('fs');
var http = require('http');

http.createServer(function(req, res) {
  var stream = fs.createReadStream('image.jpg');
  // Error handling omitted here
  res.writeHead(200, {'Content-Type': 'image/jpeg'});
  stream.pipe(res);
}).listen(8003, '');
Code that uses a Buffer is easier to write, but it means your server must hold the whole file in memory; for instance, you would be unable to serve a 320-gigapixel image file. You also only start sending data once you have read the whole file.
Using a stream allows you to send the file as soon as you start receiving it, so it will be a little faster; if you're reading from disk or a fast local server, the speed difference is likely negligible. In addition, you only need a little bit of memory. On the other hand, error handling is more complex.
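For completeness, a sketch of what that stream error handling might look like: wait for the read stream's 'open' event before sending a success status, so a missing file can still produce a 500, and cut the connection if an error occurs mid-stream.
'use strict';

var fs = require('fs');
var http = require('http');

http.createServer(function(req, res) {
  var stream = fs.createReadStream('image.jpg');
  stream.on('open', function() {
    // the file is readable, so it is now safe to promise a 200
    res.writeHead(200, {'Content-Type': 'image/jpeg'});
    stream.pipe(res);
  });
  stream.on('error', function(err) {
    if (res.headersSent) {
      // failure mid-stream: headers are already out, so just terminate
      res.destroy();
    } else {
      res.writeHead(500);
      res.end('Cannot access file.');
    }
  });
}).listen(8003, '');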

Node.js/Mongodb/GridFS resize images on upload

I am saving uploaded images in MongoDB GridFS with Node.js/Express/gridfs-stream/multiparty, using streams.
Works fine.
Now I would like to "normalize" (resize) images to some standard format before storing them in the database.
I could use gm (https://github.com/aheckmann/gm) and keep streaming, but I would have to install native ImageMagick (not an option); or
I could use something like lwip (https://github.com/EyalAr/lwip) and have a "pure Node" setup, but then I cannot stream.
So is there a streaming solution for request -> resize -> store to GridFS without installing external libraries?
Current solution (missing the resize step):
function storeImage(req, err, succ) {
  var conn = mongoose.connection;
  var gfs = Grid(conn.db);
  var context = {};
  var form = new multiparty.Form();

  form.on('field', function(name, value) {
    context[name] = value;
    console.log(context);
  });

  form.on('part', function(part) {
    // handle events only if file part
    if (!part.filename) { return; }

    var options = {
      filename: part.filename,
      metadata: context,
      mode: 'w',
      root: 'images'
    };
    var ws = gfs.createWriteStream(options);

    // success GridFS
    ws.on('close', function(file) {
      console.log(file.filename + file._id);
      succ(file._id);
    });

    // error GridFS
    ws.on('error', function(errMsg) {
      console.log('An error occurred!', errMsg);
      err(errMsg);
    });

    part.pipe(ws);
  });

  // Close emitted after form parsed
  form.on('close', function() {
    console.log('Upload completed!');
  });

  form.parse(req);
}
For posterity:
1) Initially I used lwip while I was storing images locally. When people started uploading bigger images (which was added as a requirement), lwip started blowing up my instance on Heroku, so I switched to
2) gm over ImageMagick, running on AWS Lambda, which has ImageMagick preinstalled in the default instance. Images are now stored on S3 and distributed via CloudFront.
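If native ImageMagick is acceptable after all (as it is on Lambda), the missing resize step could be sketched with gm roughly like this, inside the existing form.on('part') handler; the 800 px width is an arbitrary placeholder:
var gm = require('gm');

form.on('part', function(part) {
  // handle events only if file part
  if (!part.filename) { return; }

  var ws = gfs.createWriteStream({
    filename: part.filename,
    metadata: context,
    mode: 'w',
    root: 'images'
  });

  // resize the incoming part to a max width of 800px (height scales to
  // preserve the aspect ratio) and stream the result straight into GridFS
  gm(part)
    .resize(800)
    .stream(function(err, stdout, stderr) {
      if (err) { return console.log(err); }
      stdout.pipe(ws);
    });
});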

Accessing Google Drive from a Firefox extension

I'm trying to access (CRUD) Google Drive from a Firefox extension. Extensions are coded in JavaScript, but neither of the two existing JavaScript SDKs seems to fit: the client-side SDK expects window to be available, which isn't the case in extensions, and the server-side SDK seems to rely on Node-specific facilities, since a script that works in Node no longer does when I load it in chrome code after running it through browserify. Am I stuck using raw REST calls? The Node script that works looks like this:
var google = require('googleapis');
var readlineSync = require('readline-sync');

var CLIENT_ID = '....',
    CLIENT_SECRET = '....',
    REDIRECT_URL = 'urn:ietf:wg:oauth:2.0:oob',
    SCOPE = 'https://www.googleapis.com/auth/drive.file';

var oauth2Client = new google.auth.OAuth2(CLIENT_ID, CLIENT_SECRET, REDIRECT_URL);
var url = oauth2Client.generateAuthUrl({
  access_type: 'offline', // 'online' (default) or 'offline' (gets refresh_token)
  scope: SCOPE // If you only need one scope you can pass it as a string
});

var code = readlineSync.question('Auth code? :');
oauth2Client.getToken(code, function(err, tokens) {
  console.log('authenticated?');
  // Now tokens contains an access_token and an optional refresh_token. Save them.
  if (!err) {
    console.log('authenticated');
    oauth2Client.setCredentials(tokens);
  } else {
    console.log('not authenticated');
  }
});
I wrap the Node GDrive SDK using browserify on this script:
var Google = new function() {
  this.api = require('googleapis');
  this.clientID = '....';
  this.clientSecret = '....';
  this.redirectURL = 'urn:ietf:wg:oauth:2.0:oob';
  this.scope = 'https://www.googleapis.com/auth/drive.file';
  this.client = new this.api.auth.OAuth2(this.clientID, this.clientSecret, this.redirectURL);
};
which is then called after clicking a button (if the text field holds no code, it launches the browser to get one):
function authorize() {
  var code = document.getElementById("code").value.trim();
  if (code === '') {
    var url = Google.client.generateAuthUrl({access_type: 'offline', scope: Google.scope});
    var win = Components.classes['@mozilla.org/appshell/window-mediator;1']
      .getService(Components.interfaces.nsIWindowMediator)
      .getMostRecentWindow('navigator:browser');
    win.gBrowser.selectedTab = win.gBrowser.addTab(url);
  } else {
    Google.client.getToken(code, function(err, tokens) {
      if (!err) {
        Google.client.setCredentials(tokens);
        // store token
        alert('Successfully authorized');
      } else {
        alert('Not authorized: ' + err); // always ends here
      }
    });
  }
}
But this yields the error Not authorized: Invalid protocol: https:
It is possible, though depending on the use case it may also be of limited interest.
Firefox ships with a tiny HTTP server, just the bare bones. It is included for test purposes, but that is no reason to overlook it.
Let's follow the quickstart guide for running a Drive app in JavaScript.
The tricky part is setting the Redirect URIs and the JavaScript Origins. Obviously the right setting is http://localhost, but how can you be sure that every user has port 80 available?
You can't and, unless you have control over your users, no port is guaranteed to work for everyone. With this in mind, let's choose port 49870 and pray.
So now the Redirect URIs and the JavaScript Origins are set to http://localhost:49870.
Assuming you use the Add-on SDK, save quickstart.html (remember to add your Client ID) in the data directory of your extension. Now edit your main.js:
const self = require("sdk/self");
const { Cc, Ci } = require("chrome");
const tabs = require("sdk/tabs");
const httpd = require("sdk/test/httpd");
var quickstart = self.data.load("quickstart.html");
var srv = new httpd.nsHttpServer();
srv.registerPathHandler("/gdrive", function handler(request, response){
response.setHeader("Content-Type", "text/html; charset=utf-8", false);
let converter = Cc["#mozilla.org/intl/scriptableunicodeconverter"].createInstance(Ci.nsIScriptableUnicodeConverter);
converter.charset = "UTF-8";
response.write(converter.ConvertFromUnicode(quickstart));
})
srv.start(49870);
tabs.open("http://localhost:49870/gdrive");
exports.onUnload = function (reason) {
srv.stop(function(){});
};
Notice that quickstart.html is not opened as a local file with a resource: URI; the Drive API wouldn't like that. It is served at the URL http://localhost:49870/gdrive. Needless to say, instead of static HTML we could serve a template or anything else, and the page at http://localhost:49870/gdrive can be scripted with a regular PageMod.
I don't consider this a real solution; it's just better than nothing.
From here https://developer.mozilla.org/en/docs/Working_with_windows_in_chrome_code you could try window = window || content || {}
Use the JavaScript client API and not the Node.js client. Although browserify will make the Node client work, you would have to expose your client secret with it. The flow of client-side authentication is very different from server-side; refer to https://developers.google.com/accounts/docs/OAuth2
Having said all this, it's really not that difficult to implement an app with REST-based calls. The methods in all the client libraries mimic the corresponding REST URLs. You could set up a few functions of your own to handle request and response, and the rest would feel the same; see the sketch below.
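For instance, a minimal sketch of listing Drive files with a raw REST call (accessToken is assumed to come from the OAuth flow above):
// List files in Drive with a plain REST call; no SDK required.
function listDriveFiles(accessToken, callback) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', 'https://www.googleapis.com/drive/v2/files');
  xhr.setRequestHeader('Authorization', 'Bearer ' + accessToken);
  xhr.onload = function() {
    if (xhr.status === 200) {
      callback(null, JSON.parse(xhr.responseText).items);
    } else {
      callback(new Error('Drive API error: ' + xhr.status));
    }
  };
  xhr.onerror = function() {
    callback(new Error('Network error'));
  };
  xhr.send();
}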
