How do I upload to cdnjs? I have no clue and no experience with git.
I have a file that I want to upload to cdnjs. Do I have to download something?
Can someone walk me through what I have to do?
You can't just "upload" a single file to cdnjs.
Read the cdnjs contributing documentation; in particular, for a library even to be considered:
[...] please make sure it's not a personal project, we have a basic requirement for the popularity, like 100 stars on GitHub or 500 downloads/month on npm registry.
The new library must have at least one officially public accessible repository and an open source license.
When you have that covered, you can create a pull request for your library to be included in cdnjs's master GitHub repo.
I would like to import a module from github in deno, which is only available as github release and is not part of the code in the repository.
I would like to import: https://github.com/zingi/random-lon-lat-generator/releases/download/v0.1.0/random_lon_lat_generator.js
I tried:
import * as wasm from 'https://github.com/zingi/random-lon-lat-generator/releases/download/v0.1.0/random_lon_lat_generator.js'
which gives this error:
Download https://github.com/zingi/random-lon-lat-generator/releases/download/v0.1.0/random_lon_lat_generator.js
Download https://github-releases.githubusercontent.com/352299341/6ca4b280-9638-11eb-9f4a-c7b6b890c5e9?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20210405%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20210405T161838Z&X-Amz-Expires=300&X-Amz-Signature=82fbc720c3a05232836678385da43cecd2a9d29ca959f736e5e8a47ce62b23bf&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=352299341&response-content-disposition=attachment%3B%20filename%3Drandom_lon_lat_generator.js&response-content-type=application%2Foctet-stream
error: An unsupported media type was attempted to be imported as a module.
Specifier: https://github-releases.githubusercontent.com/352299341/6ca4b280-9638-11eb-9f4a-c7b6b890c5e9?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20210405%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20210405T161838Z&X-Amz-Expires=300&X-Amz-Signature=82fbc720c3a05232836678385da43cecd2a9d29ca959f736e5e8a47ce62b23bf&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=352299341&response-content-disposition=attachment%3B%20filename%3Drandom_lon_lat_generator.js&response-content-type=application%2Foctet-stream
MediaType: Unknown
I know I could easily import a module if it were part of the tracked files in git, using raw.githubusercontent.com.
But because the module also contains compiled WebAssembly, I don't want to track it with git.
If this is not possible, do you have any other suggestions on how to make it work?
Edit: The error message seems to be the same as in this post, but the problem source is different and cannot be solved with the accepted answer from there. The assets in GitHub releases do not seem to have a permanent link, unlike files tracked in git, which are served via raw.githubusercontent.com. The links to files on a GitHub release page forward (302) to a generated URL that is only available for a limited time, like: github-releases.githubusercontent.com/.... So it would be interesting to know whether there is any possibility of getting a permanent "raw" link to a GitHub asset.
The GitHub release download URL actually redirects to a pre-signed S3 URL whose response carries the application/octet-stream Content-Type header no matter what the content is. Deno doesn't recognize that media type, which is why you see that error message.
You can however import directly from the git tag that corresponds to the release via the raw.githubusercontent.com CDN. For example, you could import from the std lib with:
import { serve } from "https://raw.githubusercontent.com/denoland/deno_std/0.92.0/http/server.ts";
If it's a built artifact that can't be accessed through GitHub's raw usercontent CDN, you can push to an external data store like GCS or S3. They both can be exposed via a URL and you can then import from there.
Some module registries also allow you to have a build step, for example nest.land gives that option. You can then hook that into the same CI workflow as the release workflow.
For your specific case, though, I can see from the repo that you're building a wasm module; you can also include a TypeScript or JavaScript file that serves as an entrypoint for your wasm module. This is how it's done in the standard library's hash module, for example.
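To illustrate the entrypoint idea, here is a hedged sketch of a plain JavaScript module that embeds the compiled .wasm bytes and exposes the instantiated exports. The byte array below is a minimal hand-assembled module exporting add(a, b), standing in for your real compiled artifact, and loadWasm is a made-up name, not the repo's API:

```javascript
// Hypothetical entrypoint sketch: embed (or fetch) the .wasm bytes, then
// expose the instantiated exports from a plain JS module that Deno can import.
// The byte array is a minimal hand-assembled wasm module exporting `add(a, b)`;
// a real project would inline its compiled artifact here instead.
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm" magic + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type section: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section: one func of type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export section: "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section: one body
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0, local.get 1, i32.add, end
]);

async function loadWasm() {
  // Instantiate from the embedded bytes and return the exports object.
  const { instance } = await WebAssembly.instantiate(wasmBytes);
  return instance.exports; // e.g. { add }
}
```

Consumers can then `import` this file from raw.githubusercontent.com like any other tracked module, without the binary artifact needing its own URL.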
I have been fighting Azure for the last few hours trying to figure out why my deployments completed and succeeded, yet the app failed to start. Logs pointed to a jsconfig.json not being allowed alongside a tsconfig.json. My local files looked clean, so I dug into the SSH console of my web app and found that the deployment process had added a jsconfig.json to my files.
For anyone struggling with this problem: Azure adds the file because it's missing some compiler options that it wants you to have, but it puts them in a new jsconfig.json, not your tsconfig.json.
How do we prevent it from doing this and "failing successfully", and instead break the deployment process and let us know we need to make an adjustment? Microsoft's support is abysmal, so I'm hoping there is an expert out here with thoughts on it.
Thanks, community!
An update for anyone checking back or finding this in the future:
The Visual Studio Azure extension is no longer the "best practice".
Microsoft has offloaded its repos and build pipelines to GitHub.
From within your App Service Deployment Center, you can automatically attach your GitHub repo and push a deployment pipeline to GitHub; it's very easy.
For some time I have been using the autobahn.js and autobahn.min.js files in my project, linked directly from:
https://autobahn.s3.amazonaws.com/autobahnjs/latest/autobahn.js and
https://autobahn.s3.amazonaws.com/autobahnjs/latest/autobahn.min.js
as suggested on the http://autobahn.ws/ website.
Today I found out that the two files above are no longer accessible via these links. All I can see is a 403 error with the message: Access Denied.
I cannot find any mirrors anywhere. I tried to build them using these instructions: http://autobahn.ws/js/building.html. No such luck. Where can I find the autobahn.js files so I can download them, in case a situation like this happens again in the future?
Self answer. Unfortunately, the author decided to remove all the files from Amazon S3, as described here: https://groups.google.com/forum/#!topic/autobahnws/aHxWgImJvCY
Hi,
we (Crossbar.io GmbH) have not only provided massive development
funding of AutobahnJS, but also free hosting of AutobahnJS for
development purposes (download it and host it yourself).
We asked people NOT to hot link to this bucket MULTIPLE times, as we
have to pay for the traffic obviously.
Now, it seems, people don't get that.
Our traffic costs have persistently increased to a surprising level. I
just wanted to delete the log folder alone in that bucket - and I have
a hard time, the log files number in the 100k's!
There seem to be a number of highly frequented sites hot linking to
our bucket.
Now, instead of injecting some nice JavaScript to completely take over
all those sites (which is trivial and would take me half an hour to do!), we have decided to remove the whole bucket.
Dozens of sites will break. Not our problem.
Cheers, /Tobias
Newer source files (the ones that implement WAMP v2) can still be found on GitHub. The last version that implements WAMP v1 (v0.8.2 of autobahn-js), however, can be found in this repo:
https://github.com/sergeyvolkov/autobahn-old
It can be hot-swapped if you were using the links from my question. A good source of other older versions is to dig through the releases on the GitHub page:
https://github.com/crossbario/autobahn-js/releases
Try this link: https://github.com/crossbario/autobahn-js-built. They have the files.
There's a README:
https://github.com/crossbario/autobahn-js-built/blob/master/README.md
Here's a summary:
Install bower:
npm install -g bower
Install autobahn in your web dir:
bower install autobahn
Change your links to point to:
<script src="/bower_components/autobahnjs/autobahn.min.js"></script>
I'm not sure why you replied to another comment regarding WAMP v1 / v2, as the URLs you posted retrieve the latest version.
I've created a bundle file of a private project of mine, and I would like to share it with someone. They asked me to provide the generated git bundle file.
Can I just email them the single bundle file? ...or do I need to attach the project folder itself as well?
Can I just email them the single bundle file?
Yep, that's the point of the bundle.
It's a full, read-only repository.
Once you have the bundle, you can share it.
git bundle will only package references that are shown by git show-ref: this includes heads, tags, and remote heads.
git bundle create ../reponame.bundle --all
--all: include all refs
This command creates a bundle with all the source code and other info, like tags.
Even though --all is not documented, it's much better to use it.
Note: --all would not include remote-tracking branches!
Today I checked mega.co.nz and I'm excited about some of its features. For example, on the download page it downloads files in the browser and afterwards decrypts them with JavaScript.
For example, see this link to download a PNG file:
https://mega.co.nz/#!7JRgFJzJ!efpJGWuPhYczLexY19ex82nuwfs4sR_DG4JXddeClH4
This link starts the download inside the browser. I checked the network tab in inspect element: it downloads parts of the file with AJAX and, after all parts are complete, automatically saves them as one file on the computer!
I want to know how they do this. Can you explain, or link to some resources about downloading files inside the browser like that?
Also, can this be done with only JavaScript, or do I need Flash plugins or something like that?
Mega uses several different methods to do this: (as of 27 Nov 2013)
Filesystem API (Chrome/Firefox Extension polyfill)
Adobe Flash SWF Filewriter (old browsers fallback)
BlobBuilder (IE10/IE11)
MEGA Firefox Extension (deprecated)
Arraybuffer/Blob (in memory) + a[download] (for browsers that support a[download])
MediaSource (experimental streaming solution)
Blob stored in IndexedDB storage + a[download] (Firefox 20+, improvement over the in-memory Blob method)
(source: https://eu.static.mega.co.nz/js/download_6.js)
A basic implementation of a multipart in-browser downloader using the Blob and URL APIs is given here. It downloads a file over 4 concurrent requests and also shows the progress. Please note that setting the Range header on XHR requests might generally not be a good idea; have a look at this topic.
(Screenshots showed the progress UI while downloading and after the download.)
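The approach described above can be sketched as follows. This is a hedged illustration, not Mega's code: byteRanges and downloadInParts are made-up names, and it uses fetch with a Range header rather than XHR for brevity.

```javascript
// Evenly sized [start, end] (inclusive) byte ranges covering the whole file.
function byteRanges(totalSize, parts) {
  const chunk = Math.ceil(totalSize / parts);
  const ranges = [];
  for (let start = 0; start < totalSize; start += chunk) {
    ranges.push([start, Math.min(start + chunk, totalSize) - 1]);
  }
  return ranges;
}

// Fetch each range concurrently, then stitch the parts back into one Blob
// and hand it to the browser via an a[download] link.
async function downloadInParts(url, filename, parts = 4) {
  const head = await fetch(url, { method: "HEAD" });
  const size = Number(head.headers.get("Content-Length"));
  const buffers = await Promise.all(
    byteRanges(size, parts).map(([start, end]) =>
      fetch(url, { headers: { Range: `bytes=${start}-${end}` } })
        .then((res) => res.arrayBuffer())
    )
  );
  const blobUrl = URL.createObjectURL(new Blob(buffers));
  const a = document.createElement("a");
  a.href = blobUrl;
  a.download = filename; // triggers a save instead of navigation
  a.click();
  URL.revokeObjectURL(blobUrl);
}
```

Progress reporting can be added by resolving each range's promise through a counter and updating the UI as parts arrive; the server must support Range requests (respond with 206 Partial Content) for this to work.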
Another interesting topic would be implementing Mega's pause/resume functionality. The XHR API in current browsers doesn't offer that capability, so the only option is to download many small chunks and give up the partially downloaded data of the current chunk on pause, which seems to be how Mega does it too. The fetch streaming feature could also be used for this purpose; I haven't explored it well enough yet, but it is documented here.
Btw, have a look at these awesome projects:
https://github.com/eligrey/FileSaver.js
https://github.com/jimmywarting/StreamSaver.js
MEGAcmd
There is MEGAcmd, the official command line interface. You can also build it from source on GitHub at https://github.com/meganz/MEGAcmd
MEGAcmd is a wrapper around the Mega SDK, and if you decide to compile it on your own you'll need the same dependencies (on Ubuntu) as the ones listed below for the Mega SDK.
For details on usage, see the MEGAcmd User Guide.
Mega SDK
The Mega SDK can be compiled by following the steps on its GitHub page. It includes the megacli utility, an interactive shell for syncing and downloading/uploading.
## compilation steps for ubuntu
git clone --depth 1 https://github.com/meganz/sdk megasdk
cd megasdk
sudo apt install libcurl4-openssl-dev libc-ares-dev libssl-dev libcrypto++-dev zlib1g-dev libsqlite3-dev libfreeimage-dev libswscale-dev
./autogen.sh
./configure
make -j 8 ## pass the number of CPUs you have to speed up compilation
sudo make install
mega.py Python module (deprecated)
For those who find this question while searching for an actual recipe to download a link in text mode, here is a simple Python script that uses the mega.py module (install it with sudo pip install mega.py):
import sys
import getpass
#install the module with: 'sudo pip install mega.py'
from mega import Mega
email = '_your_megamail_@domain.com'
password = getpass.getpass(prompt='Mega password for {}:'.format(email))
mega = Mega({'verbose': True})
m = mega.login(email, password)
m.download_url(sys.argv[1])
The script works with Python 2.7 and takes the URL of the mega.nz link as its argument.
getpass is used to enter the password securely in the console, so the password isn't stored in the script; if you are comfortable hardcoding the password, assign it directly instead of calling getpass.
megatools
On most Linux/POSIX boxes you can install megatools from the standard repositories, e.g.
On ubuntu/debian:
apt install megatools
On macOS:
brew install megatools
Once installed you will find a number of command line utilities, among which megadl which can download both shared files and your own files. See megadl -h for details.
As of 2020, you can use Service Workers to seamlessly integrate your custom code with the browser's built-in download manager: https://developers.google.com/web/updates/2016/06/sw-readablestreams
I also believe you'd need the following headers in order for a file to be downloaded instead of being viewed:
headers: {
  'Content-Type': 'application/octet-stream',
  'Content-Disposition': 'attachment; filename="your_filename.bin"',
}
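As a hedged sketch of how those headers fit into a service-worker-based download (downloadResponse, the /download path, and the inline Blob payload are all illustrative, not from the linked article):

```javascript
// Build a Response whose headers make the browser save it instead of rendering it.
function downloadResponse(body, filename) {
  return new Response(body, {
    headers: {
      "Content-Type": "application/octet-stream",
      "Content-Disposition": `attachment; filename="${filename}"`,
    },
  });
}

// Register the fetch handler only when actually running inside a worker scope.
if (typeof self !== "undefined" && "addEventListener" in self) {
  self.addEventListener("fetch", (event) => {
    const url = new URL(event.request.url);
    if (url.pathname === "/download") {
      // A real worker would stream chunks handed over from the page
      // (e.g. via postMessage), as the linked article shows.
      event.respondWith(
        downloadResponse(new Blob(["hello"]).stream(), "hello.txt")
      );
    }
  });
}
```

The page then navigates (or points an a[download] link) to /download, and the browser's own download manager takes over, showing native progress for the streamed response.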
Personally, I have found this approach to work flawlessly in both Google Chrome and Firefox, and I'm already using it in production.