neo4j javascript example - javascript

What I have not been able to find is a simple example (no third party) of using neo4j from JavaScript. I have got the Neo4j desktop working, and I got an example with a third-party graph tool working (the example appears to put the request in the textarea of a DIV, send the request to the graph API, and produce the graph).
I am very familiar with MySQL and other SQL interaction, but I am having problems interacting with neo4j. I have done a lot of research but am stuck.
From my SQL days the pattern was:
connect statement (i.e. get a handle; I have got this to work with neo4j)
send an SQL statement to the database (in this case it would be Cypher)
get the cursor and process the results (I assume this means processing the JSON)
I would like the example to:
Connect to the database (local and remote)
Show sample Cypher commands to fetch data (movie database)
Show how to store the returned results in the JavaScript program
If possible, provide a short explanation of Node, HTML and JavaScript, i.e. the JavaScript goes into app.js and there is an index.html that refers to app.js. Do I have to use Node, or can I access neo4j with JavaScript only?
Thanks
Marty

Take a look at the official Neo4j Driver for JavaScript. The driver can be used with node.js, and there is also a version that runs in the browser.
The repo's README contains links to the full documentation and sample projects.

As cybersam told you, you should use the neo4j-javascript-driver.
You can find an example application here: https://github.com/neo4j-examples/movies-javascript-bolt
And here is a snippet showing how to open a connection, run a query and parse the result:
// Create a driver instance, for the user neo4j with password neo4j.
// It should be enough to have a single driver per database per application.
var driver = neo4j.driver("bolt://localhost", neo4j.auth.basic("neo4j", "neo4j"));

// Create a session to run Cypher statements in.
// Note: Always make sure to close sessions when you are done using them!
var session = driver.session();

// the Promise way, where the complete result is collected before we act on it:
session
  .run('MERGE (james:Person {name : {nameParam} }) RETURN james.name AS name', {nameParam: 'James'})
  .then(function (result) {
    result.records.forEach(function (record) {
      console.log(record.get('name'));
    });
    session.close();
  })
  .catch(function (error) {
    console.log(error);
  });

// Close the driver when application exits.
// This closes all used network connections.
driver.close();
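If you want to keep the returned rows in your program rather than just logging them, you can push each record's values into a regular variable. A minimal sketch, assuming the same driver/session setup as above (the Person label matches the movie database; the names array is just illustrative):
var names = [];
session
  .run('MATCH (p:Person) RETURN p.name AS name LIMIT 10')
  .then(function (result) {
    result.records.forEach(function (record) {
      // record.get('name') returns the value of the `name` column
      names.push(record.get('name'));
    });
    session.close();
    console.log(names); // the results now live in your JavaScript program
  })
  .catch(function (error) {
    console.log(error);
  });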
Moreover, you can also take a look at the GRAND stack: http://grandstack.io/
It's a stack for building web applications based on React, Neo4j and GraphQL (with Apollo).

Related

Fastest redirects Javascript

I am creating a link-shortening app. When someone enters a long URL, it gives back a short URL. If a user clicks the short link, the app looks up the long URL in the DB and redirects to it.
In the meantime I want to record the click count and the clicking user's OS.
I am currently using this code:
app.get('/:shortUrl', async (req, res) => {
  const shortUrl = await ShortUrl.findOne({short: req.params.shortUrl})
  if (shortUrl == null) return res.sendStatus(404)
  res.redirect(shortUrl.full)
})
findOne looks up the long URL in the database using the short ID. I am using MongoDB here.
My questions are:
Are there multiple redirect methods in JS?
Does this method work under high load?
Any other methods I can use to achieve the same result?
What other factors matter for redirect time?
What is 'No Redirection Tracking'?
This is a really long question; thanks to those who invested their time in it.
Your code is OK; the only limitations are where you run it and MongoDB itself.
I have created analytics-tracking apps handling billions of rows per day.
I suggest you run your Node code on AWS Elastic Beanstalk. It has low latency and scales to your needs.
You also need to put Redis between your requests and MongoDB: call MongoDB only if the data is not yet in Redis. MongoDB has more read limitations than a plain Redis instance.
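As a minimal sketch of that caching layer, assuming the node-redis v4 client and the Mongoose model from the question (the key prefix and the one-hour TTL are arbitrary choices):
const { createClient } = require('redis');
const redisClient = createClient(); // defaults to localhost:6379
redisClient.connect();

app.get('/:shortUrl', async (req, res) => {
  const key = 'short:' + req.params.shortUrl;
  // Serve from Redis when possible...
  const cached = await redisClient.get(key);
  if (cached != null) return res.redirect(cached);
  // ...and fall back to MongoDB only on a cache miss.
  const shortUrl = await ShortUrl.findOne({short: req.params.shortUrl});
  if (shortUrl == null) return res.sendStatus(404);
  await redisClient.set(key, shortUrl.full, { EX: 3600 });
  res.redirect(shortUrl.full);
});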
Are there multiple redirect methods in JS?
First off, there are no redirect methods in JavaScript itself. res.redirect() is a feature of the Express http framework that runs in nodejs. It is the only redirect method built into Express, though all a redirect response consists of is a 3xx (often 302) http response status and a Location header set to the redirect location. You can code that manually just as well as you can use res.redirect() in Express.
You can look at the res.redirect() code in Express here.
The main things it does are set the location header with this:
this.location(address)
And set the http status (which defaults to 302) with this:
this.statusCode = status;
Then, the rest of the code has to do with handling variable arguments, handling an older design of the API, and sending a body in either plain text or html (neither of which is required).
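For illustration, here is roughly what a hand-rolled redirect looks like without Express, using only Node's built-in http module (the port and target URL are made up for the example):
const http = require('http');

http.createServer((req, res) => {
  // A redirect is nothing more than a 3xx status plus a Location header.
  res.statusCode = 302;
  res.setHeader('Location', 'https://example.com/the-long-url');
  res.end();
}).listen(3000);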
Does this method work under high load?
res.redirect() works just fine under high load. The bottleneck in your code is probably this line:
const shortUrl = await ShortUrl.findOne({short: req.params.shortUrl})
How far that scales depends upon a whole bunch of things about your database: configuration, hardware, setup, etc... You should probably just test how many requests/sec of this kind your current database can handle.
Any other methods I can use to achieve the same result?
Sure there are. But you will have to use some data store to look up the shortUrl and find the long URL, and you will have to create a 302 response somehow. As said earlier, the scale you can achieve will depend entirely upon your database.
What other factors matter for redirect time?
This is pretty much covered above (hint: it's all about the database).
What is 'No Redirection Tracking'?
You can read about it here on MDN.

How to use the same DB with 2 different React applications

I have to make an offline application that syncs with another online app when it can.
I developed the offline app using PouchDB. I created the app with the create-react-app GitHub repo, and it runs at localhost:3000. In this app, I create and manage a little DB named "patientDB".
I manage the db with the classic put method, as shown in the documentation:
var db = new PouchDB('patientDB')
db.put({
  _id: 'dave#gmail.com',
  name: 'David',
  age: 69
});
With the Chrome development tool provided by PouchDB, I can see that the DB is working as I want (the documents are created).
The other application is another React application with a Node server. During development, this app runs at localhost:8080.
In this app I try to fetch all the docs contained in the "patientDB" with the following code:
const db = new PouchDB('patientDB', { skip_setup: true });
db.info()
  .then(() => {
    console.log("DBFOUND")
    db.allDocs({include_docs: true})
      .then(function (result) {
        console.log("RESULT", result)
      }).catch(function (err) {
        console.log("NOPE")
        console.log(err);
      });
  })
My problem is that I can't access the "patientDB" created with the offline app from the online app. When I do var db = new PouchDB('patientDB') it creates a new, empty db because it can't find one that already exists.
I use Google Chrome to run all my applications, so I thought the dbs would be shared.
However, I did some very simple tests with two HTML files:
First.html, which initializes a new db with a doc
Second.html, which reads the db created in First.html
In this case, I can fetch the doc created with First.html in Second.html, even though they are two separate "websites".
It seems that applications running at localhost are isolated from the rest, even though, as I said before, I use the same browser for all my applications...
I don't know what to do, or whether it's even possible to do what I want. If someone has an idea for me, I would be pleased.
EDIT
I can see why my DBs are not shared:
When I look at all my local DBs after running an HTML file, I can see that they come from the files _pouch_DB_NAME - file://
When I check my DB from the application running locally, however, it comes not from file:// but from localhost:8080.
If you know how I can fetch docs from a local db in an app running on a server, that would be really helpful!
PouchDB is using IndexedDB in the browser, which adheres to a same-origin policy. MDN says this:
IndexedDB adheres to a same-origin policy. An origin is the domain, application layer protocol, and port of a URL of the document where the script is being executed. Each origin has its own associated set of databases. Every database has a name that identifies it within an origin.
So you have to replicate your local database to a central server in order to share the data. This could be a PouchDB Server running together with your Node app. You can also access PouchDB Server directly from the browser:
var db = new PouchDB('http://localhost:5984/patientDB')
As an alternative, you can use CouchDB or IBM Cloudant (which is basically hosted CouchDB).
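For example, each app can keep its own local "patientDB" and continuously sync it with the central one. A minimal sketch using PouchDB's sync API (the server URL assumes the PouchDB Server default port):
var local = new PouchDB('patientDB');
var remote = new PouchDB('http://localhost:5984/patientDB');

// Two-way, continuous replication: changes made in either app reach the
// central database and flow back to the other origin.
local.sync(remote, { live: true, retry: true })
  .on('change', function (info) { console.log('synced', info); })
  .on('error', function (err) { console.log(err); });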

How to capture query string parameters from network tab programmatically

I am trying to capture query string parameters for analytics purposes using JavaScript. I did some searching and found that BMP (BrowserMob Proxy) can be used to do it, but I am unable to find enough examples to implement it. Could anyone point me in the right direction?
EDIT 1:
I used the code below with browsermob-proxy to get a HAR file, but I get ERROR: browsermob-proxy returned error when I run it. I use Selenium with it.
getHarFile() {
  const proxy = browsermb.Proxy;
  const pr = new proxy({host: "0.0.0.0", port: 4444});
  pr.doHAR("http://www.cnn.com/", (err, data) => {
    if (err) {
      logger.debug('ERROR: ' + err);
    } else {
      fs.writeFileSync('ua.com.har', data, 'utf8');
      logger.debug("#HAR CREATED#");
    }
  })
}
Since I'm not quite sure of your scope, I will throw out some ideas:
1. Fixing browsermob-proxy
You should change the host and port of browsermob-proxy. Change the host to 127.0.0.1 and the port to any free port (4444 is OK). Then make sure your browser runs through that host and port by changing the browser's proxy settings.
2. Using plain JavaScript
2.1 Get the current page's query string
You can get the query string using location.search. If you are using some BDD framework with Selenium, it is possible to execute JavaScript code and retrieve the result. Always add a return to your code in order to receive the response in your BDD test.
2.2 Using Performance API
You can access all the network information through the Performance API. If you need to get the current page URL you can use the following code:
performance.getEntriesByType("navigation")
This will return all the current navigation events and information.
If you want to get information about the calls the page made, you can access it using:
performance.getEntriesByType("resource")
This will return all the calls made by your site. You have to loop over it, searching for the resource you want to find.
Either way, there is no way to get the key/value pairs of the query string exactly as in the Network tab. You have to split them apart manually with a function; you can use the code provided here to get the value of a key.
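For instance, a small sketch that splits the query strings of the current page and of every resource request into key/value pairs, using the standard URLSearchParams API:
// Current page: location.search is the raw "?key=value&..." string.
new URLSearchParams(location.search).forEach(function (value, key) {
  console.log(key, '=', value);
});

// Network calls: each resource entry's name is the full request URL.
performance.getEntriesByType('resource').forEach(function (entry) {
  new URL(entry.name).searchParams.forEach(function (value, key) {
    console.log(entry.name, ':', key, '=', value);
  });
});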
My suggestion is to create your own extension for Google Chrome; when developing an extension you can access a few more APIs that are not available by default in the console.
For example, you will have this object for inspecting the network tab:
chrome.devtools.network
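As a sketch, a devtools page in such an extension could read the query string of every finished request straight from its HAR entry (this only works inside a devtools extension context):
chrome.devtools.network.onRequestFinished.addListener(function (request) {
  // request.request follows the HAR format; queryString is already
  // split into {name, value} pairs, just like the Network tab shows.
  request.request.queryString.forEach(function (param) {
    console.log(request.request.url, param.name, '=', param.value);
  });
});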
Here are two links you may find useful:
https://developer.chrome.com/extensions/devtools
https://developer.chrome.com/extensions/devtools_network
I hope it helps
I was finally able to do it using the s object available in the Chrome console. The URL with the encoded query string was available as the s.rb object in the Chrome console. I just decoded it and extracted the query string parameters.

Always using the same OAUTH code with Dropbox-js

I'm using the official Dropbox JS library in a Node.js server. It only ever needs to authenticate as a single user, and it can't go through the whole OAuth browser setup every time the server starts. I am attempting to write an auth driver that pretends to be like the NodeServer driver but runs the callback straight away with a code that always stays the same.
Here's what I've got (it's CoffeeScript, but you get the idea):
myAuthDriver = {
  authType: -> return "code"
  url: -> return "http://localhost:8912/oauth_callback" # What the url would be if I were using NodeServer
  doAuthorize: (authUrl_s, stateParam, client, callback) ->
    authUrl = url.parse(authUrl_s, true)
    callback({
      code: "[a code I just got using the NodeServer driver]"
      state: authUrl.query.state
    })
}
Running authenticate with this driver set causes this error:
Dropbox OAuth error invalid_grant :: given "code" is not valid
The docs say that this should only occur with a broken auth driver (but they don't give any ideas for fixing it).
Does anyone with more knowledge of OAuth or Dropbox know what's wrong here?
Note: I've found in several places online that Dropbox OAuth codes never expire.
Once you have an OAuth 2 access token, you can just do var client = new Dropbox.Client({token: '<your token>'});. No need for an auth driver at all.
(If you want an easy way to get an access token, consider using https://dbxoauth2.site44.com.)

Multi-OS text-based database with an engine for Python and JavaScript

This is probably far-fetched, but maybe someone knows a good solution for it.
Introduction
I'm currently making an application in Python with the new wxPython 2.9, which has the new html2 library that wraps each OS's native browser (Safari/IE/Firefox/Konqueror/~), which is really great.
Goal/Purpose
What I'm currently aiming for is to process large chunks of data and analyze them super fast with Python (currently about 110,000 entries, turning into about 1,500,000 to 2,250,000 results in a dictionary). This works very fast and is also dynamic, so it only does that first big fetch once (taking about 2-4 seconds) and afterwards just keeps listening for new data being created on disk.
So far, so good. Now with the new wxPython html2 library I'm making the new GUI. It's mainly made to display pages, so what I have now is a model in an /html/ folder (with HTML/CSS/jQuery) that dynamically looks for JSON files (fetched with jQuery), which are practically complete dumps of the massive dictionaries that the Python script builds in the background (daemon) in a parallel thread.
JavaScript doesn't seem to have issues reading a big JSON file, and because everything is (and stays) local it doesn't really incur slowness or anything. CPU and memory usage are also very low.
Conclusion
But here comes the bottleneck. From the JavaScript point of view, handling the big JSON file is not really a joyride. I have to do a lot of searching and matching to get all the data I need, which also creates a lot of redundant re-looping through the same big chunks of entries.
Question
I'm wondering if there is any kind of "engine", implemented for both Python and JavaScript, that can handle JSON files, or maybe other text-based files, as a database. Meaning you could really have a MySQL-like structure (not to the full extent, of course), where you can at least define a table structure that holds the data and do reads/writes/updates on it methodically.
The app I'm currently developing is multi-OS (at least Ubuntu, OS X and Windows XP+). I also really don't want to create more clutter than using wxPython (for distribution/dependency's sake) by using an external database (like running a MySQL server on localhost), so everything should stay purely inside my Python distribution's folder. This also avoids writing massive checks for whether the user already has servers/databases in use that might collide with what my app would install.
Final Notes
I'm also kind of aiming to build some kind of API myself for future projects, to make this way of development standard for my Python scripts that need a GUI. Now that wxPython can more easily embrace modern browser technologies, there seems to be no limit anymore to building super fast, dynamic and responsive graphical Python apps.
Why not just stick the data into a SQLite database and then have both Python and JavaScript hit that? See also Convert JSON to SQLite in Python - How to map json keys to database columns properly?
SQLite is included in all modern versions of Python. You'll have to check the SQLite website for its limitations.
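On the JavaScript side (under Node), the same file can then be opened with, for example, the sqlite3 npm package. A minimal sketch, assuming a db file at data/sqlite.db with an info table like the one in the answer below:
const sqlite3 = require('sqlite3');
const db = new sqlite3.Database('data/sqlite.db');

// Read whatever Python wrote; rows arrive as plain JavaScript objects.
db.all('SELECT * FROM info', (err, rows) => {
  if (err) return console.error(err);
  console.log(rows); // e.g. [{ id: 1, name: 'something1', ... }]
});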
I kind of got something figured out, by running a CGI HTTP server and letting Python in there run SQLite queries for JavaScript's AJAX calls. Here's a small demo (only tested on OS X):
Folder structure
main.py
cgi/index.py
data/
html/index.html
html/scripts/jquery.js
html/scripts/main.js
html/styles/main.css
Python server (main.py)
### CGI Server ###
import os
import threading
import CGIHTTPServer
import BaseHTTPServer

class Handler(CGIHTTPServer.CGIHTTPRequestHandler):
    cgi_directories = ['/cgi']

    # Mute the messages in the shell
    def log_message(self, format, *args):
        return

httpd = BaseHTTPServer.HTTPServer(('', 61350), Handler)
#httpd.serve_forever()
thread = threading.Thread(name='CGIHTTPServer', target=httpd.serve_forever)
thread.setDaemon(True)
thread.start()

#### TEST SQLite ####
# Make the database file if it doesn't exist
if not os.path.exists('data/sqlite.db'):
    db_file = open('data/sqlite.db', 'w')
    db_file.write('')
    db_file.close()

import sqlite3
conn = sqlite3.connect('data/sqlite.db')
cursor = conn.cursor()
cursor.execute('CREATE TABLE info(id INTEGER UNIQUE PRIMARY KEY, name VARCHAR(75), folder_name VARCHAR(75))')
cursor.execute('INSERT INTO info VALUES(null, "something1", "something1_name")')
cursor.execute('INSERT INTO info VALUES(null, "something2", "something2_name")')
conn.commit()
Python SQLite processor (cgi/index.py) (the demo handles only SELECT; it needs to be made more dynamic)
#!/usr/bin/env python
import cgi
import json
import sqlite3

print 'Content-Type: text/json\n\n'

### Fetch GET-data ###
form = cgi.FieldStorage()
obj = {}

### SQLite fetching ###
query = form.getvalue('query', 'ERROR')
output = ''
if query == 'ERROR':
    output = 'WARNING! No query was given!'
else:
    # WARNING: The path probably needs `../data/sqlite.db` if PYTHONPATH is not defined
    conn = sqlite3.connect('data/sqlite.db')
    cursor = conn.cursor()
    cursor.execute(query)
    # TODO: Detect if it's a SELECT or an INSERT/UPDATE (then we need to conn.commit() )
    result = cursor.fetchall()
    if len(result) > 0:
        output = []
        for row in result:
            buff = []
            for entry in row:
                buff.append(entry)
            output.append(buff)
    else:
        output = 'WARNING! No results found'

obj = output

### Return the data in JSON (map) format for JavaScript
print json.dumps(obj)
JavaScript (html/scripts/main.js)
'use strict';

$(document).ready(function() {
  // JSON data read test
  var query = 'SELECT * FROM info';
  $.ajax({
    url: 'http://127.0.0.1:61350/cgi/index.py?query=' + encodeURIComponent(query),
    success: function(data) {
      console.log(data);
    },
    error: function() {
      console.log('Something went wrong while fetching the query.');
    }
  });
});
And that wraps it up. The console output in the browser is:
[
  [1, "something1", "something1_name"],
  [2, "something2", "something2_name"]
]
With this methodology you can let Python and JavaScript read from and write to the same database: Python keeps doing its system tasks (daemon) and updating the database entries, while JavaScript keeps checking for new data.
This method could probably also make room for listeners and other means of communication between the two.
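For example, the JavaScript side could simply poll the CGI endpoint on an interval. A sketch reusing the AJAX setup above (the five-second interval is arbitrary; jQuery encodes the query parameter for you):
// Ask Python for fresh rows every 5 seconds.
setInterval(function () {
  $.getJSON('http://127.0.0.1:61350/cgi/index.py',
            {query: 'SELECT * FROM info'})
    .done(function (data) {
      console.log('fresh rows', data);
    });
}, 5000);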
As written, main.py will stop running instantly because the server thread is a daemon. In my wxPython script, the code that follows keeps the daemon (server) alive until the application exits. If someone else wants to use this code in the future, just make sure the server code runs after the SQLite initialization, and uncomment httpd.serve_forever() to keep it running.
