I've been trying out GunJS for a couple of days now and I'm really enjoying it. As a starter project, I followed the Fireship chat dapp video, which walks you through building your own chat.
Here's the issue: now that I've finished the tutorial, I would like to create my own chat. However, for some reason, if I get a 'chat' node within my own app, it seems to pick up on the same 'chat' node as the tutorial app that is online.
onMount(() => {
  // Get messages in the large chat
  db.get('chat')
    .map()
    .once(async (data, id) => {
      if (data) {
        // key for E2E - to do: change for web3
        const key = '#foo';
        const message = {
          // transform the data
          who: await db.user(data).get('alias'),
          what: (await SEA.decrypt(data.what, key)) + '',
          when: GUN.state.is(data, 'what'),
        };
        if (message.what) {
          messages = [...messages.slice(-100), message];
        }
      }
    });
});
This is also the case if I change the encryption key (the messages then just become undefined). Multiple questions arise from this:
Are graph node names unique within the whole of GunDb?
How do you handle conflicts where two gun-based apps call on the same node name?
Is this problem generally solved through filtering using 'header' props?
How do I make it pick up on only my data?
Even though I've read most of the docs, there seems to be something I'm missing in my comprehension of how the graph is generally separated between apps. Any insight into how this works would be much appreciated.
Are graph node names unique within the whole of GunDb?
Yes.
How do you handle conflicts where two gun-based apps call on the same node name?
You don't. The expected result is that they will overwrite each other.
Is this problem generally solved through filtering using 'header' props?
I don't think it's the right way to do it.
How do I make it pick up on only my data?
Use your own relay server.
Conclusion:
GunDB doesn't really care about who fetches/puts the data. If you want to protect your data, use your own relay server (not a public one), and put data in your user space. User space is read-only to the public, but read/write for the owner.
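For illustration, a minimal sketch of writing into user space (the relay URL, alias, and passphrase are placeholders, not part of the answer):
// connect through your own relay (hypothetical URL)
const gun = GUN(['https://your-relay.example.com/gun']);
const user = gun.user();

user.create('alice', 'passphrase', () => {
  user.auth('alice', 'passphrase', () => {
    // data in user space lives under the owner's public key:
    // anyone can read it, but only the authenticated owner can write
    user.get('chat').set({ what: 'hello from my own app' });
  });
});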
From reading up on the Management API, I think I should be able to fetch data from Storyblok from inside my JS. The first thing I'm trying is to export my entire space so that I can have an external backup. From reading the documentation, I think the following should work, but it gives me a 401. What is the correct syntax for this?
// spaceId is discovered in Settings / Space
fetch(
  `https://mapi.storyblok.com/v2/spaces/${spaceId}/export.json`,
  {
    headers: {
      Authorization: managementToken, // created in My Account / Account Settings / Personal access token
    },
  }
)
  .then(async (res) => {
    const json = await res.json();
    console.log(json);
  })
  .catch((err) => console.log(err));
I was also looking to export a single story, which I think the correct URL should be:
`https://mapi.storyblok.com/v2/spaces/${spaceId}/stories/${storyId}/export.json`
I can't figure out how to determine the storyId, though. I tried the UUID, but that didn't work, and the example showed an 8-digit number. Where do I find this number?
Note: I'm in the US, and for the regular fetches I had to use the domain https://api-us.storyblok.com, so I tried adding -us and that didn't work.
Note: I will eventually be trying to add and modify stories in this same JS file, and also to "restore" the entire space if necessary. I hope the solution to the above will be applicable to all the rest of the calls I'll be attempting.
Note: The app is written in Nuxt 3, and I'm using useStoryblok() successfully to retrieve data. I could fulfill the above requirement to back up the entire space by iterating through everything there, but that seems like more work than necessary, and it doesn't solve my problem with the other calls I need to make.
The Management API is still on v1; it's only the Content API that's on v2:
https://www.storyblok.com/docs/api
That should help with your 401. I actually also assumed I had to use v2 :)
As for how to get the story ID: yes, it's the simple numeric ID, not the UUID.
You can get it by listing some stories: .../v1/spaces/SPACEID/stories?with_slug=abc.
Or look at the draft/published JSON in the Storyblok UI.
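Putting that together, the asker's snippet should work with the version segment switched to v1 (a sketch; this assumes the export.json endpoint from the question, with spaceId and managementToken defined as before):
fetch(
  `https://mapi.storyblok.com/v1/spaces/${spaceId}/export.json`,
  {
    headers: {
      Authorization: managementToken,
    },
  }
)
  .then(async (res) => {
    if (!res.ok) throw new Error(`HTTP ${res.status}`); // surfaces the 401 if auth still fails
    console.log(await res.json());
  })
  .catch((err) => console.error(err));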
I am having some issues trying to connect to a Matrix server using the matrix-js-sdk in a React app.
I have provided a simple code example below, and made sure that the credentials are valid (login works) and that the environment variable containing the URL for the matrix client is set. I have signed into Element in a browser and created two rooms for testing purposes, and was expecting these two rooms to be returned from matrixClient.getRooms(). However, this simply returns an empty array. With some further testing, it seems like the asynchronous functions that fetch only room, member, and group IDs work as expected.
According to https://matrix.org/docs/guides/usage-of-the-matrix-js-sd these should be valid steps for setting up the matrix-js-sdk; however, the sync never executes either.
const matrixClient = sdk.createClient(
  process.env.REACT_APP_MATRIX_CLIENT_URL!
);

await matrixClient.login("m.login.password", credentials);

matrixClient.once('sync', () => {
  debugger; // Never hit
});

for (const room of matrixClient.getRooms()) {
  debugger; // Never hit
}
I did manage to use the roomIds returned from await matrixClient.roomInitialSync(roomId, limit, callback); however, this led me to another issue where I can't figure out how to decrypt messages, as the events containing the messages sent in the room seem to be of type 'm.room.encrypted' instead of 'm.room.message'.
Does anyone have any good examples of working implementations of the matrix-js-sdk, or any other good resources for properly understanding how to put this all together? I need to be able to load rooms, persons, messages, etc. and display these respectively in a ReactJS application.
It turns out I simply forgot to run startClient on the matrix client, resulting in it not fetching any data.
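For reference, a minimal sketch of the working setup (the 'PREPARED' sync state and the initialSyncLimit option are from the matrix-js-sdk guide):
await matrixClient.login('m.login.password', credentials);

matrixClient.once('sync', (state) => {
  // getRooms() is only populated after the initial sync completes
  if (state === 'PREPARED') {
    console.log(matrixClient.getRooms().map((room) => room.name));
  }
});

// Without this call the client never syncs and no rooms are fetched
await matrixClient.startClient({ initialSyncLimit: 10 });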
I am attempting to proof of concept a simple support & ticketing bot using Microsoft Bot Framework v4 and Azure. I have successfully created a bot via the Bot Framework Composer and deployed it to Azure using a Web App Bot. I have configured and installed the BotFramework-WebChat component on my test site and successfully connected the bot to it. The BotFramework-WebChat component is installed via cdn.botframework.com. The test site is a single-page application, and the bot sits alongside it so that it can be accessed at the user's convenience.
Everything is working as expected. However, I would like to post certain contextual information to the bot depending on the current browser state. For example, if a user is looking at product ABC when they converse with the bot, I'd like the bot to know this. To achieve this, I'd like to update the user state on the bot channel, preferably via vanilla JavaScript, when the main application state changes. I'd like to do this seamlessly in the background, and I do not want to rely on additional frameworks such as React (the Microsoft documentation references React heavily, and I unfortunately have no understanding of that particular product, so I find it difficult/impossible to follow).
My question is therefore in two parts. Is the above possible using the API provided by the BotFramework-WebChat component, and if so, how would one go about doing so? If it is not possible, I would also appreciate any assistance on alternative methods, should such alternatives exist.
Answer to my own question: this is indeed possible with a minor modification to the example code used to create/render the WebChat component.
First I've parameterised the DirectLine client creation. This client is key to posting to the channel once the WebChat component is set up:
var dl = window.WebChat.createDirectLine({ token: '#yourtokenhere#' });

window.WebChat.renderWebChat(
  {
    directLine: dl,
    userID: '#yourid#',
    username: '#yourusername#',
    locale: 'en-US'
  },
  document.getElementById('#yourtargetelement#')
);
Using the client I have then posted an event to the channel using the postActivity method:
dl.postActivity({
  from: { id: '#youruserid#', name: '#yourusername#' },
  type: 'event',
  name: '#youreventname#',
  value: {
    // Put your parameters here
  }
}).subscribe(); // postActivity returns an observable; the activity is only sent once subscribed
On the Composer side I have set up a trigger to activate on a "Custom Event" with the name #youreventname#. I can then parse out the parameters from the value property and set my user state.
I personally found the documentation for the Microsoft Direct Line JS protocol much easier to work with than going directly through the WebChat docs.
Thank you to @billoverton for the inspiration to use a custom event.
If you just want to capture information at the time the user renders the WebChat session, you can do this by sending a custom event, which you'll also need to handle in your bot code. This specific code will not work if you are trying to create a single instance of the bot that persists across multiple pages and you want the page details sent every time, but the same concept should apply. This method sends the details only on the initial connection to the bot.
First you need to send an event from your bot rendering via a custom store. Here is how I did it.
var store = window.WebChat.createStore({}, function (storeApi) {
  return function (next) {
    return function (action) {
      if (action.type === 'DIRECT_LINE/CONNECT_FULFILLED') {
        // On connect, send a custom event carrying the page context
        storeApi.dispatch({
          type: 'WEB_CHAT/SEND_EVENT',
          payload: {
            name: 'webchat/join',
            value: {
              browserLanguage: navigator.languages[0],
              page: window.location.href,
              language: getCookie("LANGUAGE"),
              country: getCookie("COUNTRYNAME")
            }
          }
        });
      }
      return next(action);
    };
  };
});
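Note that the custom store only takes effect once it's passed into renderWebChat; a sketch using the same placeholders as above:
window.WebChat.renderWebChat(
  {
    directLine: window.WebChat.createDirectLine({ token: '#yourtokenhere#' }),
    store: store // wire in the custom store created above
  },
  document.getElementById('#yourtargetelement#')
);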
Now your bot also needs to be set up to handle that event. Again, I stored the relevant details in value, but you can design your object differently if you want. I store this in user state under userData.siteContext and can then use the information throughout the bot application. If you want a unique welcome message based on this information, you'll need to add it inside the if statement as well, and add a condition so that the welcome message in your onMembersAdded handler fires only for non-Direct Line channels.
if (context.activity.name && context.activity.name === 'webchat/join') {
  const userData = await this.userDialogStateAccessor.get(context, {});
  userData.siteContext = context.activity.value;
  // siteContext-specific welcome message if desired
  await this.userState.saveChanges(context);
  await this.conversationState.saveChanges(context);
}
If you need this information at other times than the original connection, you should be able to send another event from your page (see the sketch below) and set up an appropriate handler in the bot.
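For example, a page-change event could go through the same custom store; the event name below is hypothetical, and the bot needs a matching handler just like for webchat/join:
store.dispatch({
  type: 'WEB_CHAT/SEND_EVENT',
  payload: {
    name: 'webchat/pagechange', // hypothetical event name
    value: { page: window.location.href }
  }
});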
Imagine I have a number of entries (say, users) in my database. I also have two routes: one for a list, the other for a detail view (where you can edit the entry). Now I'm struggling with how to approach the data structure.
I'm thinking of two approaches, and a kind of combination of both.
Shared data set
I navigate to /list, and all of my users are downloaded from the API and stored in the Redux store under the key users. I also add some sort of users_offset and users_limit to render only part of the list.
I then navigate to /detail/<id> and store currently_selected_user with <id> as the value... which means I will be able to get my user's data with something like users.find(user => user.id === currently_selected_user).
Updating will be nice and easy as well, since I'm working with just one data set, with the detail view pointing into it.
Adding a new user is also easy; again, I'm just working with the same list of users.
Now the problem I have with this approach is that when the list of users gets huge (say, millions), it might take a while to download. And also, when I navigate directly to /detail/<id>, I won't yet have all of my users downloaded, so to get the data for just the one I need, I'm going to have to first download the whole thing. Millions of users just to edit one.
Separated data set
I navigate to /list, and instead of downloading all of my users from the API, I only download a couple of them, depending on what my users_per_page and users_current_page are set to. I'd probably store the data as users_currently_visible.
I then navigate to /detail/<id> and store currently_selected_user with <id> as the value... and instead of searching through users_currently_visible, I simply download the user's data from the API.
On update, I'm not going to update users_currently_visible in any way,
nor will I on add.
What I see as a possible problem here is that upon visiting /list, I'm going to have to download the data from the API again, because it might not be in sync with what's in the database. I also might be unnecessarily downloading a user's data in the detail view, because they might incidentally already be inside my users_currently_visible.
Some sort of Frankenstein-y shenanigans
In the detail view, I do the same as in the separated data set approach, but instead of directly downloading the user's data from the API, I first check:
do I have any users_currently_visible?
if so, is there a user with my id among them?
If both are true, I use that as my user data; otherwise I make the API call.
The same happens on update: I check whether my user exists in users_currently_visible; if so, I also update that list; if not, I do nothing.
This would probably work, but it doesn't really feel like the proper way. I would also probably still need to download a fresh list of users_currently_visible upon visiting /list, because I might have added a new one...
Is there any fan-favorite way of doing this? I'm sure every single Redux user must have encountered the same things.
Thanks!
Please consult the “real world” example from the Redux repo.
It shows the solution to exactly this problem.
Your state shape should look like this:
{
  entities: {
    users: {
      1: { id: 1, name: 'Dan' },
      42: { id: 42, name: 'Mary' }
    }
  },
  visibleUsers: {
    ids: [1, 42],
    isFetching: false,
    offset: 0
  }
}
Note I’m storing entities (ID -> Object maps) and visibleUsers (description of currently visible users with pagination state and IDs) separately.
This seems similar to your “Shared data set” approach. However, I don't think the drawbacks you list are real problems inherent to this approach. Let's take a look at them.
Now the problem I have with this approach is that when the list of users gets huge (say, millions), it might take a while to download
You don't need to download all of them! Merging all downloaded entities into entities doesn't mean you should query all of them. entities should contain all the entities that have been downloaded so far, not all the entities in the world. Instead, you'd only download the ones you're currently showing, according to the pagination information.
when I navigate directly to /detail/<id>, I won't yet have all of my users downloaded, so to get data for just the one, I'm gonna have to download them all. Millions of users just to edit one.
No, you'd request just one of them. The response action would fire, and the reducer responsible for entities would merge this single entity into the existing state. Just because state.entities.users may contain more than one user doesn't mean you need to download all of them. Think of entities as a cache that doesn't have to be filled.
Finally, I will direct you again to the “real world” example from the Redux repo. It shows exactly how to write a reducer for pagination information and an entity cache, and how to normalize the JSON in your API responses with normalizr so that it's easy for reducers to extract information from server actions in a uniform way.
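To make the merging concrete, a minimal entities reducer in the spirit of the “real world” example could look like this (a sketch; it assumes actions carry a normalizr-style response with an entities map):
import merge from 'lodash/merge';

// Merge any entities found in a server response into the cache
function entities(state = { users: {} }, action) {
  if (action.response && action.response.entities) {
    return merge({}, state, action.response.entities);
  }
  return state;
}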
I also used the normalizr approach before to normalize entities, but the problem with it is that it requires manual work.
If you know Apollo from the GraphQL world, you probably know that it supports automatic normalisation, so data for a given object is not stored in multiple places. Thanks to that, it also supports automatic updates: if your server responds with an object with the same id but updated attributes, Apollo will recognize it and update this object everywhere it is stored.
However, why should this luxury be reserved for GraphQL? For this reason I implemented the redux-requests library, which supports automatic normalisation for any API: REST, GraphQL, Firebase, whatever. How does it work? Imagine you have book list and book detail endpoints. To communicate with them, you would just dispatch Redux actions like these:
const fetchBooks = () => ({
  type: FETCH_BOOKS,
  request: { url: '/books' },
  meta: { normalize: true },
});

const fetchBook = id => ({
  type: FETCH_BOOK,
  request: { url: `/books/${id}` },
  meta: { normalize: true },
});
Now, to update the title of a book in both places, we would just do:
const updateBookTitle = (id, newTitle) => ({
  type: UPDATE_BOOK_TITLE,
  request: { url: `/books/${id}`, method: 'PATCH', data: { newTitle } },
  meta: { normalize: true },
});
If you are interested in such an approach, you can read more about it here.
I am using Firebase for data storage. The data structure is like this:
products: {
  product1: {
    name: "chocolate"
  },
  product2: {
    name: "chochocho"
  }
}
I want to perform an autocomplete operation on this data. Normally, I would write the query like this:
"select name from PRODUCTS where productname LIKE '%" + keyword + "%'";
So, for my situation, if the user types "cho", for example, I need to return both "chocolate" and "chochocho" as results. I thought about fetching all the data under the "products" node and then filtering at the client, but this may need a lot of memory for a big database. So, how can I perform an SQL LIKE operation?
Thanks
Update: With the release of Cloud Functions for Firebase, there's another elegant way to do this as well, by linking Firebase to Algolia via Functions. The tradeoff here is that the Functions/Algolia setup is pretty much zero maintenance, but probably at increased cost over roll-your-own in Node.
There are no content searches in Firebase at present. Many of the more common search scenarios, such as searching by attribute, will be baked into Firebase as the API continues to expand.
In the meantime, it's certainly possible to grow your own. However, searching is a vast topic (think creating-a-real-time-data-store vast), greatly underestimated, and a critical feature of your application, not one you want to improvise or even depend on someone like Firebase to provide on your behalf. So it's typically simpler to employ a scalable third-party tool to handle indexing, searching, tag/pattern matching, fuzzy logic, weighted rankings, et al.
The Firebase blog features a post on indexing with ElasticSearch, which outlines a straightforward approach to integrating a quick but extremely powerful search engine into your Firebase backend.
Essentially, it's done in two steps. Monitor the data and index it:
var Firebase = require('firebase');
var ElasticClient = require('elasticsearchclient');

// initialize our ElasticSearch API
var client = new ElasticClient({ host: 'localhost', port: 9200 });
var index = 'firebase', type = 'widget'; // index/type used for our documents

// listen for changes to Firebase data
var fb = new Firebase('<INSTANCE>.firebaseio.com/widgets');
fb.on('child_added', createOrUpdateIndex);
fb.on('child_changed', createOrUpdateIndex);
fb.on('child_removed', removeIndex);

function createOrUpdateIndex(snap) {
  client.index(index, type, snap.val(), snap.name())
    .on('data', function (data) { console.log('indexed ', snap.name()); })
    .on('error', function (err) { /* handle errors */ });
}

function removeIndex(snap) {
  client.deleteDocument(index, type, snap.name(), function (error, data) {
    if (error) console.error('failed to delete', snap.name(), error);
    else console.log('deleted', snap.name());
  });
}
Query the index when you want to do a search:
<script src="elastic.min.js"></script>
<script src="elastic-jquery-client.min.js"></script>
<script>
  var client = ejs.client = ejs.jQueryClient('http://localhost:9200');
  client.search({
    index: 'firebase',
    type: 'widget',
    body: ejs.Request().query(ejs.MatchQuery('title', 'foo'))
  }, function (error, response) {
    // handle response
  });
</script>
There's an example, and a third party lib to simplify integration, here.
I believe you can do:
admin
.database()
.ref('/vals')
.orderByChild('name')
.startAt('cho')
.endAt("cho\uf8ff")
.once('value')
.then(c => res.send(c.val()));
This will find vals whose name starts with cho.
source
The ElasticSearch solution basically binds to add/set/del and offers a get by which you can accomplish text searches.
It then saves the contents in MongoDB.
While I love and recommend ElasticSearch for the maturity of the project, the same can be done without another server, using only the Firebase database.
That's what I mean:
(https://github.com/metaschema/oxyzen)
For the indexing part, the function basically:
JSON-stringifies a document;
removes all the property names and JSON syntax to leave only the data (regex);
removes all XML tags (therefore also HTML) and attributes (remember the old guidance, "data should not be in XML attributes") to leave only the pure text, if XML or HTML was present;
removes all special chars and substitutes them with spaces (regex);
substitutes all instances of multiple spaces with one space (regex);
splits on spaces and cycles over the words:
for each word, it adds a ref to the document in an index structure in your DB that basically contains children named after words, each with children named with an escaped version of "ref/inthedatabase/dockey";
then it inserts the document as a normal Firebase application would.
In the oxyzen implementation, subsequent updates of the document actually read the index and update it, removing the words that no longer match and adding the new ones.
Subsequent searches for a word can then directly find documents under that word's child. Multiple-word searches are implemented using hits.
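For a rough idea, the resulting index might look something like this in the database (a hypothetical shape, not necessarily oxyzen's exact format), using the products data from the question:
words: {
  chocolate: {
    "products~product1": true
  },
  chochocho: {
    "products~product2": true
  }
}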
SQL"LIKE" operation on firebase is possible
let node = await db.ref('yourPath').orderByChild('yourKey').startAt('!').endAt('SUBSTRING\uf8ff').once('value');
This query works for me; it behaves like the statement below in MySQL:
select * from StoreAds where University LIKE 'ps%';
query = database.getReference().child("StoreAds").orderByChild("University").startAt("ps").endAt("ps\uf8ff");
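For the web SDK, the equivalent prefix query might look like this (a sketch assuming the same StoreAds data shape). Note that these range queries only match values starting with "ps"; a true substring match like LIKE '%ps%' isn't supported natively:
firebase.database().ref('StoreAds')
  .orderByChild('University')
  .startAt('ps')
  .endAt('ps\uf8ff')
  .once('value')
  .then(function (snap) { console.log(snap.val()); });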