Get Jupyter notebook name [duplicate] - javascript

I am trying to obtain the current notebook name while running the IPython notebook. I know I can see it at the top of the notebook. What I am after is something like
currentNotebook = IPython.foo.bar.notebookname()
I need to get the name into a variable.

Adding to the previous answers: to get the notebook name, run the following in a cell:
%%javascript
IPython.notebook.kernel.execute('nb_name = "' + IPython.notebook.notebook_name + '"')
This gets you the file name in nb_name.
Then, to get the full path, you may use the following in a separate cell:
import os
nb_full_path = os.path.join(os.getcwd(), nb_name)

I have the following, which works with IPython 2.0. I observed that the name of the notebook is stored as the value of the attribute 'data-notebook-name' in the <body> tag of the page. Thus the idea is first to ask JavaScript to retrieve that attribute -- JavaScript can be invoked from a code cell thanks to the %%javascript magic. Then the JavaScript variable can be passed back through a call to the Python kernel, with a command which sets a Python variable. Since this last variable is known to the kernel, it can be accessed in other cells as well.
%%javascript
var kernel = IPython.notebook.kernel;
var body = document.body,
    attribs = body.attributes;
var command = "theNotebook = " + "'"+attribs['data-notebook-name'].value+"'";
kernel.execute(command);
From a Python code cell
print(theNotebook)
Out[ ]: HowToGetTheNameOfTheNoteBook.ipynb
A defect of this solution is that when one changes the title (name) of the notebook, the new name does not seem to be picked up immediately (there is probably some kind of cache), and it is necessary to reload the notebook to get access to the new name.
[Edit] On reflection, a more efficient solution is to look for the input field holding the notebook's name instead of the <body> tag. Looking into the source, it appears that this field has the id "notebook_name". It is then possible to catch this value with document.getElementById() and then follow the same approach as above. The code becomes, still using the %%javascript magic:
%%javascript
var kernel = IPython.notebook.kernel;
var thename = window.document.getElementById("notebook_name").innerHTML;
var command = "theNotebook = " + "'"+thename+"'";
kernel.execute(command);
Then, from an IPython cell:
In [11]: print(theNotebook)
Out [11]: HowToGetTheNameOfTheNoteBookSolBis
Contrary to the first solution, modifications of the notebook's name are picked up immediately and there is no need to refresh the notebook.

As already mentioned, you probably aren't really supposed to be able to do this, but I did find a way. It's a flaming hack though, so don't rely on this at all:
import json
import os
import urllib2
import IPython
from IPython.lib import kernel

connection_file_path = kernel.get_connection_file()
connection_file = os.path.basename(connection_file_path)
kernel_id = connection_file.split('-', 1)[1].split('.')[0]

# Updated answer with semi-solutions for both IPython 2.x and IPython < 2.x
if IPython.version_info[0] < 2:
    ## Not sure if it's even possible to get the port for the
    ## notebook app; so just using the default...
    notebooks = json.load(urllib2.urlopen('http://127.0.0.1:8888/notebooks'))
    for nb in notebooks:
        if nb['kernel_id'] == kernel_id:
            print nb['name']
            break
else:
    sessions = json.load(urllib2.urlopen('http://127.0.0.1:8888/api/sessions'))
    for sess in sessions:
        if sess['kernel']['id'] == kernel_id:
            print sess['notebook']['name']
            break
I updated my answer to include a solution that "works" in IPython 2.0 at least with a simple test. It probably isn't guaranteed to give the correct answer if there are multiple notebooks connected to the same kernel, etc.

It seems I cannot comment, so I have to post this as an answer.
The accepted solution by @iguananaut and the update by @mbdevpl appear not to work with recent versions of the Notebook.
I fixed it as shown below. I checked it on Python v3.6.1 + Notebook v5.0.0 and on Python v3.6.5 + Notebook v5.5.0.
import jupyterlab
if jupyterlab.__version__.split(".")[0] == "3":
    from jupyter_server import serverapp as app
    key_srv_directory = 'root_dir'
else:
    from notebook import notebookapp as app
    key_srv_directory = 'notebook_dir'
import urllib.request
import json
import os
import ipykernel

def notebook_path(key_srv_directory):
    """Returns the absolute path of the Notebook or None if it cannot be determined
    NOTE: works only when the security is token-based or there is also no password
    """
    connection_file = os.path.basename(ipykernel.get_connection_file())
    kernel_id = connection_file.split('-', 1)[1].split('.')[0]
    for srv in app.list_running_servers():
        try:
            if srv['token'] == '' and not srv['password']:  # No token and no password, ahem...
                req = urllib.request.urlopen(srv['url'] + 'api/sessions')
            else:
                req = urllib.request.urlopen(srv['url'] + 'api/sessions?token=' + srv['token'])
            sessions = json.load(req)
            for sess in sessions:
                if sess['kernel']['id'] == kernel_id:
                    return os.path.join(srv[key_srv_directory], sess['notebook']['path'])
        except:
            pass  # There may be stale entries in the runtime directory
    return None
As stated in the docstring, this works only when either there is no authentication or the authentication is token-based.
Note that, as also reported by others, the JavaScript-based method does not seem to work when executing "Run all cells" (though it works when executing cells manually), which was a deal-breaker for me.
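A quick usage sketch of the notebook_path function defined above; key_srv_directory comes from the version check at the top of the snippet, and the printed path is only illustrative:
nb_path = notebook_path(key_srv_directory)
if nb_path is not None:
    print(nb_path)  # e.g. /home/user/projects/MyNotebook.ipynb (illustrative)
else:
    print("Could not determine the notebook path (token/password-protected server?)")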

The ipyparams package can do this pretty easily.
import ipyparams
currentNotebook = ipyparams.notebook_name
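If you also need the full path, a small sketch combining this with os.getcwd(); it assumes the notebook lives in the kernel's working directory, which is usually the case when the notebook is launched from the UI:
import os
import ipyparams

nb_name = ipyparams.notebook_name
nb_full_path = os.path.join(os.getcwd(), nb_name)
print(nb_full_path)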

On Jupyter 3.0 the following works. Here I'm showing the entire path on the Jupyter server, not just the notebook name:
To store the NOTEBOOK_FULL_PATH on the current notebook front end:
%%javascript
var nb = IPython.notebook;
var kernel = IPython.notebook.kernel;
var command = "NOTEBOOK_FULL_PATH = '" + nb.base_url + nb.notebook_path + "'";
kernel.execute(command);
To then display it:
print("NOTEBOOK_FULL_PATH:\n", NOTEBOOK_FULL_PATH)
Running the first Javascript cell produces no output.
Running the second Python cell produces something like:
NOTEBOOK_FULL_PATH:
/user/zeph/GetNotebookName.ipynb

Yet another hacky solution, since my notebook server can change. Basically, you print a random string, save the notebook, and then search for a file containing that string in the working directory. The while loop is needed because save_checkpoint is asynchronous.
from time import sleep
from IPython.display import display, Javascript
import subprocess
import os
import uuid

def get_notebook_path_and_save():
    magic = str(uuid.uuid1()).replace('-', '')
    print(magic)
    # saves it (ctrl+S)
    display(Javascript('IPython.notebook.save_checkpoint();'))
    nb_name = None
    while nb_name is None:
        try:
            sleep(0.1)
            nb_name = subprocess.check_output(f'grep -l {magic} *.ipynb', shell=True).decode().strip()
        except:
            pass
    return os.path.join(os.getcwd(), nb_name)
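A usage sketch; note that the calling cell will also show the random marker string, since that printed output is what the grep looks for in the saved file:
nb_path = get_notebook_path_and_save()
print(nb_path)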

There is no real way yet to do this in JupyterLab. But there is an official way that's now under active discussion/development as of August 2021:
https://github.com/jupyter/jupyter_client/pull/656
In the meantime, hitting the api/sessions REST endpoint of jupyter_server seems like the best bet. Here's a cleaned-up version of that approach:
from jupyter_server import serverapp
from jupyter_server.utils import url_path_join
from pathlib import Path
import re
import requests

kernelIdRegex = re.compile(r"(?<=kernel-)[\w\d\-]+(?=\.json)")

def getNotebookPath():
    kernelId = kernelIdRegex.search(get_ipython().config["IPKernelApp"]["connection_file"])[0]
    for jupServ in serverapp.list_running_servers():
        for session in requests.get(url_path_join(jupServ["url"], "api/sessions"), params={"token": jupServ["token"]}).json():
            if kernelId == session["kernel"]["id"]:
                return Path(jupServ["root_dir"]) / session["notebook"]['path']
Tested working with
python==3.9
jupyter_server==1.8.0
jupyterlab==4.0.0a7
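A usage sketch, run in any cell once the function is defined:
nb_path = getNotebookPath()
print(nb_path)       # full path as a pathlib.Path (None if no session matched)
print(nb_path.name)  # just the notebook file name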

Modifying @jfb's method gives the function below, which worked fine on ipykernel 5.3.4.
from IPython.display import display, Javascript

def getNotebookName():
    display(Javascript('IPython.notebook.kernel.execute("NotebookName = " + "\'"+window.document.getElementById("notebook_name").innerHTML+"\'");'))
    try:
        _ = type(NotebookName)
        return NotebookName
    except:
        return None
Note that the display javascript will take some time to reach the browser, and it will take some time to execute the JS and get back to the kernel. I know it may sound stupid, but it's better to run the function in two cells, like this:
nb_name = getNotebookName()
and in the following cell:
for i in range(10):
    nb_name = getNotebookName()
    if nb_name is not None:
        break
However, if you don't need to define a function, the wiser approach is to run display(Javascript(..)) in one cell and check the notebook name in another cell. This way, the browser has enough time to execute the code and return the notebook name.
If you don't mind using a library, the most robust way is:
import ipynbname
nb_name = ipynbname.name()

If you are using Visual Studio Code:
import IPython ; IPython.extract_module_locals()[1]['__vsc_ipynb_file__']
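This reads the __vsc_ipynb_file__ variable that VS Code's notebook kernel injects, so it raises a KeyError anywhere else; a hedged sketch that degrades gracefully outside VS Code:
import IPython

try:
    nb_path = IPython.extract_module_locals()[1]['__vsc_ipynb_file__']
except KeyError:
    nb_path = None  # not running under VS Code's notebook editor
print(nb_path)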

Assuming you have the Jupyter Notebook server's host, port, and authentication token, this should work for you. It's based on this answer.
import os
import json
import posixpath
import subprocess
import urllib.request
import psutil

def get_notebook_path(host, port, token):
    process_id = os.getpid()
    notebooks = get_running_notebooks(host, port, token)
    for notebook in notebooks:
        if process_id in notebook['process_ids']:
            return notebook['path']

def get_running_notebooks(host, port, token):
    sessions_url = posixpath.join('http://%s:%d' % (host, port), 'api', 'sessions')
    sessions_url += f'?token={token}'
    response = urllib.request.urlopen(sessions_url).read()
    res = json.loads(response)
    notebooks = [{'kernel_id': notebook['kernel']['id'],
                  'path': notebook['notebook']['path'],
                  'process_ids': get_process_ids(notebook['kernel']['id'])} for notebook in res]
    return notebooks

def get_process_ids(name):
    child = subprocess.Popen(['pgrep', '-f', name], stdout=subprocess.PIPE, shell=False)
    response = child.communicate()[0]
    return [int(pid) for pid in response.split()]
Example usage:
get_notebook_path('127.0.0.1', 17004, '344eb91bee5742a8501cc8ee84043d0af07d42e7135bed90')

To see why you can't get the notebook name using these JS-based solutions, run this code and notice the delay before the message box appears, after Python has finished executing the cell / the entire notebook:
%%javascript
function sayHello() {
    alert('Hello world!');
}
setTimeout(sayHello, 1000);
More info
JavaScript calls are async and hence not guaranteed to complete before Python starts running another cell that expects the notebook-name variable to already exist, resulting in a NameError when trying to access a not-yet-created variable that should contain the notebook name.
I suspect some upvotes on this page were locked in before voters could discover that all %%javascript-based solutions ultimately fail when the producer and consumer notebook cells are executed together (or in quick succession).

All JS-based solutions fail if we execute more than one cell at a time, because the result will not be ready until after the end of the execution (it's not a matter of sleeping or waiting any amount of time; check it yourself, but remember to restart the kernel and run all cells for every test).
Based on previous solutions, this avoids using the %% magic in case you need to put it in the middle of some other code:
from IPython.display import display, Javascript
# can have comments here :)
js_cmd = 'IPython.notebook.kernel.execute(\'nb_name = "\' + IPython.notebook.notebook_name + \'"\')'
display(Javascript(js_cmd))
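Then, in a later cell (so the front end has time to run the JS), nb_name is available as usual; for example:
import os
nb_full_path = os.path.join(os.getcwd(), nb_name)  # nb_name was set by the JS above
print(nb_full_path)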
For Python 3, the following, based on the answer by @Iguananaut and updated for the latest Python and possibly multiple servers, will work:
import os
import json
try:
    from urllib2 import urlopen
except:
    from urllib.request import urlopen
import ipykernel

connection_file_path = ipykernel.get_connection_file()
connection_file = os.path.basename(connection_file_path)
kernel_id = connection_file.split('-', 1)[1].split('.')[0]

running_servers = !jupyter notebook list
running_servers = [s.split('::')[0].strip() for s in running_servers[1:]]

nb_name = '???'
for serv in running_servers:
    uri_parts = serv.split('?')
    uri_parts[0] += 'api/sessions'
    sessions = json.load(urlopen('?'.join(uri_parts)))
    for sess in sessions:
        if sess['kernel']['id'] == kernel_id:
            nb_name = os.path.basename(sess['notebook']['path'])
            break
    if nb_name != '???':
        break
print(f'[{nb_name}]')

Just use ipynbname, which is practical:
import ipynbname
nb_fname = ipynbname.name()
nb_path = ipynbname.path()
print(f"{nb_fname=}")
print(f"{nb_path=}")
I found this in https://stackoverflow.com/a/65907473/15497427

Related

Scrape a javascript variable from a webpage

I am scraping a site with Beautiful Soup, but all the content is hidden inside a script, in a JS variable like this:
I can't seem to find any solution to this other than using Selenium, which in this case is not an option; I won't go into detail why, but it just doesn't work. I can already scrape it by getting the inside of the script tag and then using eval() on it, but that introduces a few problems (unexpected indent, unwanted functions). I can use Python, JavaScript and maybe C# if anything there helps.
Expected behaviour: whatever gets me the info (the variable in the last line) into any of those three languages (preferably Python).
The code (sorry for the formatting, but I can't fix it since it's so long; it isn't even the full variable, it's huge):
barLoadGoogleFont('Open Sans'); barCssLoad('/global/pics/js/jquery/royalSlider/skins/universal/rs-universal.css?v=e449c4'); barCssLoad('/global/pics/css/material-icons.css?v=e6d856'); barCssLoad('/user/pics/css/user.css?v=eced9d');
barCssLoad('/user/pics/css/userIcons.css?v=6f9a03');
barCssLoad('/timeline/pics/css/timeline.css?v=8ec2ca'); barJsLibraryLoad('/global/pics/js/jquery/jquery.royalslider.min.js?v=515a43'); barJsLibraryLoad('/anketa/pics/js/utilsAnketa.js?v=9383d5'); barJsLibraryLoad('/znamky/pics/js/utilsZnamky.js?v=7afc9e'); barJsLibraryLoad('/exam/pics/js/utilsExam.js?v=033d55'); barJsLibraryLoad('/timeline/pics/js/utilsTimeline.js?v=29cf0e'); barJsLibraryLoad('/timeline/pics/js/timelineItemCreator.js?v=c37c99'); barJsLibraryLoad('/timeline/pics/js/timelineInputbox.js?v=2fde70'); barJsLibraryLoad('/timeline/pics/js/timelineViewer.js?v=f35e45');
barJsLibraryLoad('/user/pics/js/DailyPlan.js?v=e81fb9'); barJsLibraryLoad('/user/pics/js/userHomeEtest.js?v=6166f3');
$j(document).ready(function() { $j('#jwbcddd3da_md').userhome({"items":[{"timelineid":"2140963","timestamp":"2020-12-09 09:59:13","reakcia_na":"692638","typ":"h_clearplany","user":"Plan5077","target_user":null,"user_meno":"Kvarta aj2","ineid":"clearplany","text":"","cas_pridania":"2020-12-09 09:59:13","cas_udalosti":null,"data":"null","vlastnik":"Ucitel8678605","vlastnik_meno":"Barbora Drugajov\u00e1","pocet_reakcii":"0","posledna_reakcia":"","pomocny_zaznam":"1","removed":"0","cas_pridania_btc":"2020-12-09 09:59:13","posledna_reakcia_btc":null},{"timelineid":"2287814","timestamp":"2020-12-09 09:59:12","reakcia_na":"2290613","typ":"h_dailyplan","user":"Trieda8694210","target_user":null,"user_meno":"Kvarta A","ineid":"daily2020-12-09","text":"","cas_pridania":"2020-12-09 09:59:12","cas_udalosti":null,"data":"[]","vlastnik":"Ucitel8678605","vlastnik_meno":"Barbora Drugajov\u00e1","pocet_reakcii":"0","posledna_reakcia":"","pomocny_zaznam":"1","removed":"0","cas_pridania_btc":"2020-12-09 09:59:12","posledna_reakcia_btc":null},{"timelineid":"1439827","timestamp":"2020-12-09 08:56:57","reakcia_na":null,"typ":"h_clearplany","user":"*","target_user":null,"user_meno":"Cel\u00e1 \u0161kola","ineid":"clearplany","text":"","cas_pridania":"2020-12-09 08:56:57","cas_udalosti":null,"data":"null","vlastnik":"Ucitel16434","vlastnik_meno":"Ivor Dian","pocet_reakcii":"0","posledna_reakcia":"","pomocny_zaznam":null,"removed":"0","cas_pridania_btc":"2020-12-09 08:56:57","posledna_reakcia_btc":null},{"timelineid":"2290324","timestamp":"2020-12-09 08:37:22","reakcia_na":null,"typ":"sprava","user":"CustPlan5075","target_user":null,"user_meno":"Kvarta A+Kvarta B - nj4 \u00b7 nemeck\u00fd jazyk","ineid":null,"text":"Ahojte, zajtra...
OK, it's a little tough to debug without actually working with it, but you'll need to pull out that JSON structure. You can do it with splits. So this is sort of generic code.
from bs4 import BeautifulSoup
import pandas as pd
import requests
import json

url = 'http://www.thesite.com'  # placeholder URL
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')

scripts = soup.find_all('script')
for script in scripts:
    if '.userhome({' in script.text:
        json_str = script.text

data = json_str.split('.userhome(')[-1]
loop = True
while loop == True:
    try:
        jsonData = json.loads(data)
        loop = False
        break
    except:
        # keep trimming trailing junk until the remaining string parses as JSON
        data = data.rsplit(';', 1)[0]

rows = []
for row in jsonData['items']:
    rows.append(row)

table = pd.DataFrame(rows)
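Once a parsable prefix has been found, table holds one row per timeline item; a quick sketch to inspect or persist it (the CSV file name is just an example):
print(table.head())
table.to_csv('timeline_items.csv', index=False)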

How to hook when generating a specific value in frida

Good morning.
I'm not familiar with Frida yet, so I'm going to ask a question.
When the app communicates, it sends a specific value to the server.
Assume that the com.n.a.w app sends a value of md=25d7f5e3s5d7f5 to the server.
My goal is to find out what information is needed and how this app generates this 25d7f5e3s5d7f5 value.
For convenience, the package name of the app will be Hook_package = "com.n.a.w".
When I opened the app in jd-gui and looked at it, I found the place that creates the value "md".
There is a class in the package called "com.n.a.a" that creates the value "md".
Now I am trying to write some simple code.
I want to know how to get the value "md" with the jscode below.
Does anyone know?
Thank you.
import sys, frida

Hook_package = "com.n.a.w"

def on_message(message, data):
    print("{} -> {}".format(message, data))

jscode = """
"""

try:
    device = frida.get_usb_device(timeout=10)
    pid = device.spawn([Hook_package])
    print("App is starting.. pid:{}".format(pid))
    process = device.attach(pid)
    device.resume(pid)
    script = process.create_script(jscode)
    script.on('message', on_message)
    print('[*] Running Frida')
    script.load()
    sys.stdin.read()
except Exception as e:
    print(e)
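For the jscode body, a minimal sketch using Frida's Java bridge; com.n.a.a is the class named above, while the method name makeMd and its no-argument signature are hypothetical placeholders to replace with whatever jd-gui shows:
jscode = """
Java.perform(function () {
    // 'com.n.a.a' is the class from the question; 'makeMd' is a hypothetical method name
    var A = Java.use("com.n.a.a");
    A.makeMd.implementation = function () {
        var md = this.makeMd();    // call the original implementation
        console.log("[md] " + md); // print the generated value
        send({md: md});            // also forward it to the Python on_message handler
        return md;
    };
    // if the method is overloaded, use A.makeMd.overload('java.lang.String').implementation instead
});
"""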

How to yield fragment URLs in scrapy using Selenium?

With my limited knowledge of web scraping, I have come across a very complex issue for me, which I will try to explain as best I can (so I'm open to suggestions or edits to my post).
I started using the web crawling framework Scrapy long ago for my web scraping, and it's still the one I use today. Lately I came across this website and found that my framework (Scrapy) was not able to iterate over the pages, since this website uses fragment URLs (#) to load the data (the next pages). I then made a post about that problem (having no idea of the main problem yet): my post
After that, I realized that my framework can't manage it without a JavaScript interpreter or a browser imitation, so the Selenium library was mentioned. I read as much as I could about that library (i.e. example1, example2, example3 and example4). I also found this StackOverflow post that gives some hints about my issue.
So finally, my biggest questions are:
1 - Is there any way to iterate/yield over the pages from the website shown above, using Selenium along with Scrapy?
So far, this is the code I'm using, but it doesn't work...
EDIT:
#!/usr/bin/env python
# -*- coding: utf-8 -*-

# The required imports...

def getBrowser():
    path_to_phantomjs = "/some_path/phantomjs-2.1.1-macosx/bin/phantomjs"
    dcap = dict(DesiredCapabilities.PHANTOMJS)
    dcap["phantomjs.page.settings.userAgent"] = (
        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/53 "
        "(KHTML, like Gecko) Chrome/15.0.87")
    browser = webdriver.PhantomJS(executable_path=path_to_phantomjs, desired_capabilities=dcap)
    return browser

class MySpider(Spider):
    name = "myspider"
    browser = getBrowser()

    def start_requests(self):
        the_url = "http://www.atraveo.com/es_es/islas_canarias#eyJkYXRhIjp7ImNvdW50cnlJZCI6IkVTIiwicmVnaW9uSWQiOiI5MjAiLCJkdXJhdGlvbiI6NywibWluUGVyc29ucyI6MX0sImNvbmZpZyI6eyJwYWdlIjoiMCJ9fQ=="
        yield scrapy.Request(url=the_url, callback=self.parse, dont_filter=True)

    def parse(self, response):
        self.get_page_links()

    def get_page_links(self):
        """ This first part goes through all available pages """
        for i in xrange(1, 3):  # 210
            new_data = {"data": {"countryId": "ES", "regionId": "920", "duration": 7, "minPersons": 1},
                        "config": {"page": str(i)}}
            json_data = json.dumps(new_data)
            new_url = "http://www.atraveo.com/es_es/islas_canarias#" + base64.b64encode(json_data)
            self.browser.get(new_url)
            print "\nThe new URL is -> ", new_url, "\n"
            content = self.browser.page_source
            self.get_item_links(content)

    def get_item_links(self, body=""):
        if body:
            """ This second part goes through all available items """
            raw_links = re.findall(r'listclickable.+?>', body)
            links = []
            if raw_links:
                for raw_link in raw_links:
                    new_link = re.findall(r'data-link=\".+?\"', raw_link)[0].replace("data-link=\"", "").replace("\"", "")
                    links.append(str(new_link))
                if links:
                    ids = self.get_ids(links)
                    for link in links:
                        current_id = self.get_single_id(link)
                        print "\nThe Link -> ", link
                        # If the line below is commented out, the code works; it doesn't otherwise
                        yield scrapy.Request(url=link, callback=self.parse_room, dont_filter=True)

    def get_ids(self, list1=[]):
        if list1:
            ids = []
            for elem in list1:
                raw_id = re.findall(r'/[0-9]+', elem)[0].replace("/", "")
                ids.append(raw_id)
            return ids
        else:
            return []

    def get_single_id(self, text=""):
        if text:
            raw_id = re.findall(r'/[0-9]+', text)[0].replace("/", "")
            return raw_id
        else:
            return ""

    def parse_room(self, response):
        # More scraping code...
So this is mainly my problem. I'm almost sure that what I'm doing isn't the best way, and that is why I ask my second question. And to avoid having to deal with these kinds of issues in the future, I ask my third question.
2 - If the answer to the first question is negative, how could I tackle this issue? I'm open to other means, otherwise.
3 - Can anyone tell me or show me pages where I can learn how to solve/combine web scraping with JavaScript and Ajax? Nowadays more and more websites use JavaScript and Ajax scripts to load content.
Many thanks in advance!
Selenium is one of the best tools to scrape dynamic data. You can use Selenium with any web browser to fetch data that is loaded by scripts; it works exactly like browser click operations. But I don't prefer it.
For getting dynamic data you can use the Scrapy + Splash combo. From Scrapy you will get all the static data, and Splash handles the other, dynamic content.
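A minimal sketch of the Scrapy + Splash combo, assuming a Splash instance is running and the scrapy-splash downloader middlewares plus SPLASH_URL are configured in settings.py (the URL is the one from the question):
import scrapy
from scrapy_splash import SplashRequest

class SplashExampleSpider(scrapy.Spider):
    name = "splash_example"

    def start_requests(self):
        # render the JS-driven page in Splash before Scrapy sees it
        yield SplashRequest("http://www.atraveo.com/es_es/islas_canarias",
                            callback=self.parse,
                            args={"wait": 2})  # give the page a couple of seconds to load its data

    def parse(self, response):
        # response.text now contains the rendered HTML
        self.logger.info("Rendered page length: %d", len(response.text))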
Have you looked into BeautifulSoup? It's a very popular web scraping library for Python. As for JavaScript, I would recommend something like Cheerio (if you're asking for a scraping library in JavaScript).
If you mean that the website uses HTTP requests to load content, you could always try to manipulate those manually with something like the requests library.
Hope this helps.
You can definitely use Selenium as a standalone tool to scrape webpages with dynamic content (like AJAX loading).
Selenium will just rely on a WebDriver (basically a web browser) to seek content over the Internet.
Here are a few of them (the most often used):
ChromeDriver
PhantomJS (my favorite)
Firefox
Once you're started, you can start your bot and parse the HTML content of the webpage.
I included a minimal working example below using Python and ChromeDriver:
from selenium import webdriver
from selenium.webdriver.common.by import By
driver = webdriver.Chrome(executable_path='chromedriver')
driver.get('https://www.google.com')
# Then you can search for any element you want on the webpage
search_bar = driver.find_element(By.CLASS_NAME, 'tsf-p')
search_bar.click()
driver.close()
See the documentation for more details!
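Since PhantomJS is no longer maintained, a hedged alternative is headless Chrome; a minimal sketch with current Selenium:
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument("--headless")  # run without opening a browser window
driver = webdriver.Chrome(options=options)
driver.get("https://www.google.com")
print(driver.title)
driver.quit()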

How to run javascript with no filename

I am new to coding, so I have a question regarding Jupyter Notebook and the use of JavaScript. A snippet of my current code is as follows (trueName is defined prior to this code, and pickle.dump is usually indented, but isn't here due to Stack Overflow's formatting):
%%javascript
var kernel = Jupyter.notebook.kernel;
console.log(kernel);
var command = "import pickle" + "\n" + "file_name = 'testfile'" + "\n" + "with open(file_name, 'wb') as my_file_obj:" + "\n" + "pickle.dump(trueName,my_file_obj)";
kernel.execute(command);
This works fine but for some reason when I place it into the following format:
from IPython.core.display import Javascript
Javascript("""
var kernel = Jupyter.notebook.kernel;
console.log(kernel);
var command = "import pickle" + "\n" + "file_name = 'testfile'" + "\n" + "with open(file_name, 'wb') as my_file_obj:" + "\n" + " "pickle.dump(trueName,my_file_obj)";
kernel.execute(command);""")
I obtain the following error even though the code is the same:
Javascript error adding output!
SyntaxError: Invalid or unexpected token
See your browser Javascript console for more details.
I had hoped to use the second method in order to bypass the magic-command barrier, using something similar to !ipython somefile.ipy to read the JavaScript, but for some reason the second method really doesn't like the var command, something I discovered after much testing. I have a few questions that I would greatly appreciate feedback on:
Why does the second method produce an error due to my var command? I cannot figure out why this happens. (Is it true that I have to use 'textarea'? If so, how do I do this? I became lost when trying to do so myself: How to force a line break on a Javascript concatenated string?)
Is there a method for me to run the JavaScript magic when not directly in the notebook? I have tried running ipython on the code in a separate file with the .ipy ending as seen above, but it does not like running the cell magic nor the import from IPython.core.display. (This method does not work for me: How to run an IPython magic from a script (or timing a Python script))
Is there a way for me to execute JavaScript code directly (not via a file) from a function such as def run_javascript(code):? Executing a cell in Jupyter gives back the code within under the argument code, but I cannot find out how to run it. I have been looking around, but the only answers I have found are about !node somefile.js and other similar filename-based JavaScript approaches.
I would appreciate any help! I did have a few suggestions as to how to run Python code the way I would like to, but since IPython doesn't work with the JavaScript, I am at a loss (Python Tips: run a python script in terminal without the python command; Execute python commands passed as strings in command line using python -c).
This should work. When you use from IPython.core.display import Javascript, Javascript is a class that, when evaluated, stores the returned data in its own scope; once it returns, the data is no longer available. Use window.variable to assign to the window object and make it available globally.
from IPython.core.display import Javascript
Javascript("""
var kernel = Jupyter.notebook.kernel;
window.variable = kernel;
var command = "list";
window.variable = command;
console.log(kernel.execute(command));""")
I got the idea from here
Understanding namespace when using Javascript from Ipython.core.display in Jupyter Notebook
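Regarding the third question in the post (a def run_javascript(code) helper), a minimal sketch using the same Javascript display class; displaying it asks the front end to execute the string, so it only works while a browser front end is attached:
from IPython.display import display, Javascript

def run_javascript(code):
    # ask the notebook front end to evaluate the given JavaScript source
    display(Javascript(code))

run_javascript("console.log('hello from run_javascript');")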

How to get the version of a pebble app on the watch?

I want to show the app version of my Pebble app on its splash screen. But how can I access it?
Is there a way to access information from appinfo.json on the watch or in JS? I need at least the version string.
The easiest way to get your app version into the C code is to modify the wscript to generate a header file containing it as part of the build process.
User pedrolane on the Pebble forums has provided his wscript as an example which you can find here: https://code.google.com/p/pebble-for-gopro/source/browse/wscript?spec=svn8634d98109cb03c30c4dab52e665c4ac548cb20a&r=8634d98109cb03c30c4dab52e665c4ac548cb20a
Here's the contents of the file. The generate_appinfo function reads in appinfo.json, grabs the versionLabel and writes it to generated/appinfo.h.
import json

top = '.'
out = 'build'

def options(ctx):
    ctx.load('pebble_sdk')

def configure(ctx):
    ctx.load('pebble_sdk')

def build(ctx):
    ctx.load('pebble_sdk')

    def generate_appinfo(task):
        src = task.inputs[0].abspath()
        tgt = task.outputs[0].abspath()
        json_data = open(src)
        data = json.load(json_data)
        f = open(tgt, 'w')
        f.write('#ifndef appinfo_h\n')
        f.write('#define appinfo_h\n')
        f.write('#define VERSION_LABEL "' + data["versionLabel"] + '"\n')
        f.write('#endif\n')
        f.close()

    ctx(
        rule = generate_appinfo,
        source = 'appinfo.json',
        target = 'generated/appinfo.h',
    )

    ctx.pbl_program(source=ctx.path.ant_glob(['src/**/*.c', 'generated/**/*.c']),
                    includes='generated',
                    target='pebble-app.elf')

    ctx.pbl_bundle(elf='pebble-app.elf',
                   js=ctx.path.ant_glob('src/js/**/*.js'))
To use the value, include appinfo.h and use VERSION_LABEL.
Another hacky solution, without code generation: add the following lines to your main.c:
#include "pebble_app_info.h"
extern const PebbleAppInfo __pbl_app_info;
Then you can get the version of your app like this:
__pbl_app_info.app_version.major
__pbl_app_info.app_version.minor
