When I define my Meteor collections on the server and try to access them on the client outside of the Meteor-provided callbacks (rendered, events, created, helpers, ...), I always get an error saying the Meteor collection is not defined. If I try to redefine the collection on the client, I get an error saying the Meteor collection already exists. I can work around this by referencing my custom collections inside a Meteor.startup() function. How can I reference, on the client, a collection I defined on the server? In the Meteor docs they are able to create two instances of Meteor.Collection() and even subscribe before declaring:
// okay to subscribe (and possibly receive data) before declaring
// the client collection that will hold it. assume "allplayers"
// publishes data from **server's "players" collection.**
Meteor.subscribe("allplayers");
...
// client queues incoming players records until ...
...
Players = new Meteor.Collection("players");
You can place Players = new Meteor.Collection("players"); at the top of your file without wrapping it in Meteor.startup. Just make sure it's defined before you call Meteor.subscribe.
For example, your file could be:
Players = new Meteor.Collection("players");
MyCollection2 = new Meteor.Collection("MyCollection2");
Meteor.subscribe("allplayers");
Meteor.subscribe("mycollection2");
// ... rest of your code
A cleaner approach is to create a file in your project's root directory containing the collection definitions, so they are used on both the client and the server without you having to redefine them for each. For example, a collections.js in your project root could contain:
Players = new Meteor.Collection("players");
MyCollection2 = new Meteor.Collection("MyCollection2");
if(Meteor.isClient) {
  Meteor.subscribe("allplayers");
  Meteor.subscribe("mycollection2");
}
Now you don't have to define Players or MyCollection2 in /server or /client anymore. The way Meteor loads files ensures this runs before your other files. This works best if you've arranged your files in the /client, /server and /public layout used in the official Meteor examples (parties and todos).
Edit: as BenjaminRH suggests, putting the file in /lib/collections.js ensures it is loaded even before the other files in your project's root directory.
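For those subscriptions to actually deliver documents, the server also needs matching publish functions. A minimal sketch (the publication bodies here are assumptions, not code from the question; Meteor's globals are stubbed so the snippet stands alone outside a Meteor app):

```javascript
// Stubs for Meteor's server-side globals, marked clearly as stand-ins:
// in a real app, Meteor and the collections come from the Meteor runtime.
const publications = {}
const Meteor = { publish: (name, fn) => { publications[name] = fn } }
const Players = { find: () => 'players-cursor' }
const MyCollection2 = { find: () => 'mycollection2-cursor' }

// server/publications.js — one publish per subscription used on the client
Meteor.publish('allplayers', function () {
  return Players.find()          // publish every document in "players"
})
Meteor.publish('mycollection2', function () {
  return MyCollection2.find()
})

console.log(Object.keys(publications))
```

With these in place, the client-side Meteor.subscribe("allplayers") and Meteor.subscribe("mycollection2") calls have something to receive.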
Related
We have a Vue app that connects to a web service and gets some data. The web service URL differs depending on the location where we install the app.
I first thought of using .env files, but later I realized these files get injected into the minified .js files.
Having this in my main.js was very convenient in the case of .env files:
Vue.prototype.ApiBaseUrl = process.env.VUE_APP_APIBASEURL
Vue.prototype.PrintDocsFolder = process.env.VUE_APP_PRINTDOCSFOLDER
Vue.prototype.TicketPrintWSocket = process.env.VUE_APP_TICKETPRINTWSOCKET
The app is already built. I don't want to build the app for each of the hundred locations we have to deploy to. I'm not sure about the "official" approach for this.
Is there any out of the box solution in Vue that can allow this configuration? Basically we need to have a file in the root folder of the built app, and read values for our Vue.prototype.VARIABLES.
We are using vue-cli 3.
Like others have said, you can do this at runtime via a network request. If you don't know where you're deploying to until you're live, you'll need to do something like this.
Alternatively, you can do this at an infrastructure and networking level. Sometimes for A/B testing systems, that's how it's done.
Alternatively, you can do this at build time. I've done both. For static assets like images, sometimes you cannot do this at runtime, and you need to replace the public URLs at build time. For the network-request approach, fetching a static JSON file with the mappings you host is a definite possibility.
You were very close with the idea of using .env files.
Build-time approach with Vue CLI
In Vue CLI, you get Webpack's DefinePlugin for free by specifying variables in your .env files prefixed with VUE_APP_, such as VUE_APP_THE_API_URL, and then using them as process.env.VUE_APP_THE_API_URL. Docs.
Usage
Reference your API URL in your source code as process.env.VUE_APP_THE_API_URL, and use the .env files, as you were planning to, to switch between a dev-only value and a production-only value.
The production-only value should be a fake, highly distinctive placeholder, so that a later find + replace can match it unambiguously.
Find + Replace the fake API_URL you built with
After you're done building your application for production, loop over a mapping file (JSON, JS, whatever) that contains the API URLs you're going to deploy to.
You'll use the filesystem and find + replace to replicate your app as many times as you need before deploying via S3, Fastly, etc. You can do this in a bash script or with node + execa or node + fs.
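That loop might be sketched like this in node (the placeholder URL, target list, and bundle contents are all hypothetical; in a real script the bundle would be read from and written back to disk with fs):

```javascript
// Hypothetical placeholder baked into the production bundle (the unique
// "fake" VUE_APP_THE_API_URL value), plus a mapping of deploy targets
// to their real API URLs.
const PLACEHOLDER = 'https://__FAKE_API_URL_REPLACE_ME__'
const targets = {
  'site-a': 'https://api.site-a.example.com',
  'site-b': 'https://api.site-b.example.com'
}

// Swap the placeholder for the real URL in one built bundle.
function buildVariant(bundleSource, apiUrl) {
  return bundleSource.split(PLACEHOLDER).join(apiUrl)
}

// Pretend this string was read from dist/app.js with fs.readFileSync;
// each variant would then be written out and deployed separately.
const bundle = `fetch("${PLACEHOLDER}/items")`
for (const [name, url] of Object.entries(targets)) {
  console.log(name, '->', buildVariant(bundle, url))
}
```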
Why you might need to do this
I had to do this at build time because modifying certain assets is not possible at runtime, due to optimizations done by webpack loaders, which hard-code things like the public path for speed. With hundreds of API/CDN URLs, it would be extremely inefficient to rebuild the application over and over.
How Vue CLI does it (if you don't wanna do VUE_APP_*)
Vue CLI sits on top of webpack, and this is kind of an advanced use case, so you'll want to set configureWebpack inside vue.config.js and point it at a require('webpack.config.js') file. You can do this in webpack or just within your build process (bash, node, gulp, whatever).
Vue CLI 3 is tied to the major webpack version. Right now that's Webpack 4. I'm going to give you the Webpack 4 answer for your problem, but I think they're changing the plugin name in Webpack v5.
Define Plugin
The plugin you want is DefinePlugin. Follow the steps above, but manually set { plugins: [ new DefinePlugin() ] } with the options you want. You'd do this if you didn't wanna use VUE_APP_* as the prefix for your env variables.
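If you go the manual route, the wiring might look like this (a sketch of a vue.config.js; the variable name and URL are hypothetical):

```javascript
// vue.config.js — manual DefinePlugin setup (webpack 4, as shipped with Vue CLI 3)
const { DefinePlugin } = require('webpack')

module.exports = {
  configureWebpack: {
    plugins: [
      new DefinePlugin({
        // DefinePlugin performs a textual substitution, so string values
        // must be quoted — hence JSON.stringify
        'process.env.THE_API_URL': JSON.stringify('https://api.example.com')
      })
    ]
  }
}
```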
This can be done simply by making an XMLHttpRequest to a URL (e.g., to a local JSON file), and reading the data contents.
For instance, you could use fetch (supported by ~95% of browsers) to request the config from <projectroot>/config.json (which can be unique to each deployment location), and then set the global properties with the result:
// main.js
fetch('config.json')
  .then(res => res.json())
  .catch(error => {
    // ignore any errors
    console.warn(error)
    return {}
  })
  .then(config => {
    Vue.prototype.ApiBaseUrl = config.ApiBaseUrl
    Vue.prototype.PrintDocsFolder = config.PrintDocsFolder
    Vue.prototype.TicketPrintWSocket = config.TicketPrintWSocket
    // initialize app here
    new Vue(...)
  })
If there are literally hundreds of locations where the app is supposed to be deployed, with different APIs/params for each location, AND YOU DON'T WANT TO LEAVE ANY TRACE OF THE WHOLE DATA apart from the variables necessary for the app to function, I would personally store all the different params in one central database and create one single common API that decides which params to feed to which particular deployment. At initial load, the app then just has to make one extra API call to get the correct params (provided there is a unique identifier for each deployment).
For example, if the unique identifier is the domain name over which the app is served.
You can store the params like this in the database:
+-------------------+----------------------------+-----------------+--------------------+--+
| domainName | ApiBaseUrl | PrintDocsFolder | TicketPrintWSocket | |
+-------------------+----------------------------+-----------------+--------------------+--+
| example.com | http://api-base-url-1.com/ | print-doc-1 | ticket-print-1 | |
+-------------------+----------------------------+-----------------+--------------------+--+
| secondExample.com | http://api-base-url-2.com/ | print-doc-2 | ticket-print-2 | |
+-------------------+----------------------------+-----------------+--------------------+--+
| thirdExample.com | http://api-base-url-3.com/ | print-doc-3 | ticket-print-3 | |
+-------------------+----------------------------+-----------------+--------------------+--+
Then at app load, you can make an axios (promise-based HTTP client) call, passing the current domain name as a param like this:
const details = await axios.get('/common-api-with-all-the-details', {
  params: {
    domainName: location.hostname
  }
});
This common API should match the domain against the database and fetch the correct record accordingly.
Advantages:
- You never need to rebuild the app or configure the environment variables individually.
- You always stay in control over which params are fed to which particular deployment.
- You can CHANGE/UPDATE the params on the FLY.
- Your whole data store is not public.
Disadvantages:
- Requires one extra server setup.
- One extra API call at the app's initial load.
Other approaches:
You can avoid using a database (if your data set is not too large) by storing all your details in an array. Then on every common-API call, you match the domain name against the array (lodash can help), improving response time, lowering complexity, and avoiding a database setup completely.
You can use serverless architecture to avoid setting up a completely new server to host your common API; Firebase Cloud Functions and AWS Lambda have generous free tiers covering a decent amount of traffic.
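The lookup itself is the same whether it is backed by a database or the in-memory array described above. A sketch of the array variant (field names follow the table above; the Express route shown in comments is an assumption):

```javascript
// In-memory stand-in for the params table shown above (same field names).
const deployments = [
  { domainName: 'example.com',       ApiBaseUrl: 'http://api-base-url-1.com/', PrintDocsFolder: 'print-doc-1', TicketPrintWSocket: 'ticket-print-1' },
  { domainName: 'secondExample.com', ApiBaseUrl: 'http://api-base-url-2.com/', PrintDocsFolder: 'print-doc-2', TicketPrintWSocket: 'ticket-print-2' }
]

// Resolve the params for one deployment by its domain name.
function paramsFor(domainName) {
  return deployments.find(d => d.domainName === domainName) || null
}

// In the common API this would back a route such as:
//   app.get('/common-api-with-all-the-details', (req, res) =>
//     res.json(paramsFor(req.query.domainName)))
console.log(paramsFor('example.com').ApiBaseUrl)
```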
I am working on a platform for different clients. Every client tracks different KPIs, so a common project cannot be shared because the models are different.
What I want to do is the following:
- The user creates an account
- Once they have created the account, they can log in and see their dashboard
What I am doing is creating a different file per client for every part of the project (models, API, server, etc.). For instance, if we have two clients, client A and client B, the project will have a folder called models (which contains the mongoose schemas); inside it I was thinking of creating two folders, client_model_A and client_model_B. I plan to do the same for the APIs, the DB connection, and the server.
Do you have any suggestion? Or should I use another method?
There's no need to duplicate files. Instead, create a data table named user_configure and store each client's configuration in it.
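To make that concrete: instead of per-client model folders, one shared model can read which KPIs to show from a per-client record. A sketch with the configuration held in memory (the record shape and field names are hypothetical; in practice this would be a mongoose collection):

```javascript
// Per-client records as they might live in a "user_configure" table
// (the shape is hypothetical).
const userConfigure = [
  { client: 'A', kpis: ['revenue', 'churn'] },
  { client: 'B', kpis: ['signups', 'retention'] }
]

// One shared dashboard model: which KPIs to show is data, not code.
function kpisForClient(client) {
  const cfg = userConfigure.find(c => c.client === client)
  return cfg ? cfg.kpis : []
}

console.log(kpisForClient('A')) // [ 'revenue', 'churn' ]
```

This keeps a single models folder, a single API, and a single server, with the per-client differences living entirely in data.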
I'm currently using AWS's Javascript SDK to launch custom EC2 instances and so far so good.
But now, I need these instances to be able to run some tasks when they are created, for example, clone a repo from Github, install a software stack and configure some services.
This is meant to emulate a similar behaviour I have for local virtual machine deployment. In this case, I run some provisioning scripts with Ansible that get the job done.
For my use case, which would be the best option amongst AWS's different services to achieve this using AWS's Javascript SDK?
Is there any way I could have a template script to which I pass some variables obtained at runtime, to execute some tasks in the instance I just created? I read about user-data, but I can't figure out how it fits with AWS's SDK. Also, it doesn't seem to be customisable.
At the end of the day, I think I need a way to use the SDK to do this:
"On the newly created instance, run this script that is stored in such place, replacing these
placeholder values in the script with these I'm giving you now"
Any hints?
As Mark B. stated, UserData is the way to go for executing commands on instance launch. Since you tagged the question with javascript, here's an example of passing it in the ec2.runInstances call:
let AWS = require('aws-sdk')
let ec2 = new AWS.EC2({region: 'YOUR_REGION'})

// Example commands to create a folder, a file and delete it
let commands = [
  '#!/usr/bin/env bash',
  'mkdir /home/ubuntu/test',
  'touch /home/ubuntu/test/examplefile',
  'rm -rf /home/ubuntu/test'
];

let params = {
  ...YOUR PARAMS HERE...
  // The script must be Base64-encoded for the user-data interpreter to execute it
  // (Buffer.from replaces the deprecated new Buffer())
  UserData: Buffer.from(commands.join('\n')).toString('base64')
}

ec2.runInstances(params).promise().then(res => { console.log(res) })
When you launch the new instances you can provide the user-data at that time, in the same AWS SDK/API call. That's the best place to put any server initialization code.
The only other way to kick off a script on the instance via the SDK is via the SSM service's Run Command feature. But that requires the instance to already have the AWS SSM agent installed. This is great for remote server administration, but user-data is more appropriate for initializing an instance on first boot.
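For completeness, the SSM route looks roughly like this with the JavaScript SDK (the instance ID and repo URL are placeholders; as noted above, the instance must already be running the SSM agent, and it needs an IAM role permitting SSM):

```javascript
// Parameters for SSM's sendCommand; AWS-RunShellScript is a managed
// document that runs shell commands on the target instance.
// (The instance ID and repo URL below are placeholders.)
const params = {
  DocumentName: 'AWS-RunShellScript',
  InstanceIds: ['i-0123456789abcdef0'],
  Parameters: { commands: ['git clone https://github.com/you/repo.git'] }
}

// With the aws-sdk package installed, the call itself would be roughly:
//   const ssm = new (require('aws-sdk')).SSM({ region: 'YOUR_REGION' })
//   ssm.sendCommand(params).promise().then(res => console.log(res.Command.CommandId))
console.log(params.DocumentName)
```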
I have created a new API using Sails (sails generate api tasks), with the default configuration of Sails 0.12.
With Sails' awesome blueprints, I can access localhost:1337/tasks to see the list of tasks, or localhost:1337/tasks/create?text=yo to create a new one.
But what I want is to connect these endpoints to an .ejs view.
I tried creating a new folder tasks inside views and placing show.ejs or index.ejs files in it, but it still returns the JSON.
Is there a default way to render .ejs files through the default blueprint URLs, without creating new routes and controller methods?
Well, it took me a while to find the answer, so for anyone looking to keep Sails.js's development speed, here is the way to do it:
After generating the api, create a folder inside your views folder (named after your controller). The files in it should be:
+ tasks (the folder with the same name as your controller)
- find.ejs (list of all items)
- findOne.ejs (view a specific item)
- create.ejs (after a successful creation)
- update.ejs (after a successful update)
- destroy.ejs (after a successful deletion)
These files are connected by default to the different API endpoints, so when you access localhost:1337/tasks, Sails automatically renders tasks/find.ejs. The same goes for the other endpoints.
Another point: each view has a global variable named data containing the result of the API request (i.e., the records that were fetched or modified).
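For example, a minimal tasks/find.ejs using that data local might look like this (the text attribute matches the question's create example; the markup is just an illustration):

```html
<!-- views/tasks/find.ejs — rendered for GET /tasks; `data` holds the fetched records -->
<ul>
  <% data.forEach(function (task) { %>
    <li><%= task.text %></li>
  <% }) %>
</ul>
```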
You can see a small example here: https://github.com/web-development-course/Bootstrap (look at the 'things' api)
I hope it will help you guys
I'm building my first Express app, which needs to interact with an API, using an API key that ideally remains secure.
So I wanted to follow a basic pattern of keeping the key (and any future environment variables), in a .gitignored .env file in the root directory.
To avoid reinventing the wheel, I used this package and set my env variables like so, in my app.coffee file (the root file of the application):
env = require('node-env-file')
env __dirname + '/.env'
console.log process.env.MY_API_KEY
That console.log prints out the right key to the server logs. The problem arises later:
If I try to access that same variable in one of the JS files loaded later on by my app, process.env is an empty object, so the API key is undefined. This doesn't appear to be a problem with the above package, because if I define the variable in the CL (API_KEY=whatever npm start), the behavior is the same -- it console logs correctly from app.coffee but is unavailable later.
Some information on how the files in which the key is unavailable are being loaded:
The app is running React, which I write to a few .jsx files in public/javascripts/src, and which are compiled by gulp into public/javascripts/build/*.js.
I'm trying to access the key in a .js file in public/javascripts/ which is required by one of the .jsx files.
In that required .js file, process.env returns an empty object. When I try to access process.env in the .jsx files, I'm actually told that process itself is undefined.
Any ideas what's going on here? I'm new to Express/React, and unclear where this process object (which I thought was global and defined on npm start) comes from, and what's happening to all the env info in it.
Thanks! Please let me know if any other information would be helpful, or if anyone has suggestions for how better to handle private env info in my situation.
EDIT:
I tried the suggestions below, and created a separate endpoint internally, which hits the external API and then returns a response. I've strung things up correctly, so that this responds correctly:
router.get '/images', (req, res, next) ->
res.json({ some: 'json' });
but this (which uses a separate class to make a request to an external API), throws an error:
router.get '/images', (req, res, next) ->
new Images('nature').fetch (images) ->
res.json({ some: 'json' })
Essentially, it looks like the asynchrony of the response from the external API (and not even the data itself, which I ignored), is creating a problem. How do I hit this external endpoint and then respond to the internal request with the incoming data?
Back-end vs Front-end
It seems like you are trying to access back-end data from a front-end location, in the wrong way.
The great power of Node.js is having JavaScript in the front and in the back, but it is quite confusing in the beginning to understand on which side each script is executed.
In an Express project, all JavaScript files that are sent to the front-end, those that will directly interact with the client's page, are located in public/javascripts/. Generally you will have some AJAX functions in some of those files to exchange data and communicate with the back-end.
The back-end files are located everywhere else: in the root directory, in routes/, and all the other folders you create. Those files are pretty much all connected to your Node instance, and can therefore communicate with each other using global objects like process, for example.
Your script in public/javascripts/, which is executed on the client's computer, is trying to directly access a variable located on the server running your Node instance: that's why your code doesn't work. If you wish to access data from the back-end, you must use AJAX calls from the front-end.
Server <---(AJAX only)--- Client
------ ------
app.js public/javascripts/script.js
routes.js
...
That being said, you wanted to keep your API key private, which will not happen if you send it to every client on that specific page. What you should do is make the call from the back-end, using the xhr module for example, and then deliver the data to the front-end, without the secret API key.
I hope I was clear, Node is quite confusing at first but very soon you will get over these little mistakes !
All .jsx is, is some code; what matters is where that code is executed. process.env is a variable that is accessible inside the Node.js runtime. When your .jsx code gets transpiled down to .js and served to the browser, the process.env variable no longer exists. If you're making an API call from the browser, the API key will fundamentally be available to the client.
If you want to secure the key, you have to have your Node.js server expose an API route, which your React app will hit. The Node.js server then makes the call to the external service using the API key. Because that call is made by the server, process.env is available, and the key remains hidden from the client. You can then forward the result of the API call back to the user.
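A sketch of that server-side proxy route (the route path, upstream URL, and parameter names are all hypothetical; the Express wiring is shown in comments):

```javascript
// Build the upstream request URL on the server, where the key is safe to use.
// The upstream URL and parameter names are hypothetical; in a real app the
// key would come from process.env.
function upstreamUrl(query, apiKey) {
  const params = new URLSearchParams({ ...query, key: apiKey })
  return `https://api.example.com/images?${params}`
}

// Express wiring (requires the express package; Node 18+ for global fetch):
//   app.get('/images', async (req, res) => {
//     const r = await fetch(upstreamUrl(req.query, process.env.MY_API_KEY))
//     res.json(await r.json())   // forward the data, never the key
//   })
console.log(upstreamUrl({ q: 'nature' }, 'secret'))
```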