I recently learned of Carbon, and it is absolutely fantastic.
In my own words, Carbon provides terminal-like formatting for your code snippets, which can be included in blog posts and the like. It just makes things easier to read, in my opinion.
Where my head goes is taking a snippet that looks like this:
```r
options(stringsAsFactors = FALSE)

## load the packages
library(wakefield)

## generate a dataset of random users
users = r_data_frame(
  n = 500,
  id,
  state,
  date_stamp(name = "registration_date"),
  dob,
  language
)
users$ID = as.
```
Below is a post aimed at my future self. Be forewarned.
The idea is to take an R data frame and convert it to a JSON object where each entry in the JSON is a row from my dataset, and the entry has key/value (k/v) pairs where each column is a key.
Finally, if the value is missing for an arbitrary key, remove that k/v pair from the JSON entry.
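A rough sketch of that idea in base R (the toy data frame and its column names are my own for illustration, and the final serialization step assumes the jsonlite package):

```r
## toy data frame standing in for the real dataset
## (column names here are illustrative assumptions)
users <- data.frame(id = 1:3,
                    state = c("Ohio", NA, "Texas"),
                    language = c("English", "Spanish", NA),
                    stringsAsFactors = FALSE)

## one list entry per row, dropping any key whose value is missing
rows <- lapply(seq_len(nrow(users)), function(i) {
  entry <- as.list(users[i, ])
  entry[!is.na(entry)]
})

## finally, serialize to JSON (requires the jsonlite package):
## jsonlite::toJSON(rows, auto_unbox = TRUE)
```

Row 2 loses its `state` key and row 3 its `language` key, so each JSON entry carries only the columns that actually have values.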
In this post, I am going to walk through some issues that I recently encountered when attempting to get up and running with the Rasa stack. I am a big fan of the work they are doing; by and large, it makes a complex problem, chatbot development, accessible, and it leverages machine learning under the hood. This is in contrast to tools that leverage simple rule-based approaches.
Below, we will use conda to manage our Python environments and ensure that the package dependencies align.
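As a minimal sketch of that setup (the environment name and Python version are my assumptions, not necessarily what the post settles on):

```shell
# create and activate an isolated environment for the Rasa stack
# (the env name "rasa" and python=3.6 are illustrative assumptions)
conda create -n rasa python=3.6 -y
conda activate rasa

# install the Rasa stack packages inside the environment
pip install rasa-core rasa-nlu
```

Keeping the install inside a named environment means the dependency pins Rasa needs cannot collide with whatever else is on your machine.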
Many moons ago, I wrote some code to build a Tableau Data Extract from the work that I had munged together in Python. I figured it was time to update the code, since I recently discovered that the Tableau API has changed.
For a link to that old code, refer to the Jupyter Notebook in this repo.
Assumptions and Requirements

First off, I am using a MacBook, and while I believe things are getting easier on Windows machines with respect to coding, I prefer to write Terminal commands over point-and-click installs.
If you have skimmed through some of my other posts on this blog, it’s probably not surprising that I love using Neo4j in my projects. While you certainly can develop and work through your ideas locally, if you are like me, you probably have a few pet projects going at once, some of which you might want to share publicly.
This post aims to highlight how quickly you can get up and running using Cloud9, a cloud-based development environment.
Below is a quick writeup on how I use R and RNeo4j to munge my data and throw “larger” datasets into Neo4j. In short, I am fairly capable in R, so I prefer to use it to do the heavy lifting.
All I am doing is calling the neo4j-shell tool via R's system() command. This post runs through how I have used this approach in some of my recent projects, including one that I am currently working on at work, which involves 3+ million nodes and nearly 9 million relationships.
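In essence, the pattern looks something like this (the file names are illustrative, and the commented-out call assumes neo4j-shell is on your PATH with a local Neo4j instance running):

```r
## write the munged data to CSV so a Cypher script can LOAD CSV it
## (the data and file names here are illustrative, not from the original post)
write.csv(data.frame(id = 1:3, name = c("a", "b", "c")),
          "nodes.csv", row.names = FALSE)

## build the shell command that points neo4j-shell at a Cypher script
cmd <- "neo4j-shell -file load_nodes.cql"

## hand it off from R -- commented out here because it requires a running
## Neo4j instance with neo4j-shell on the PATH
## system(cmd)
```

The nice part is that all of the heavy lifting stays in R; the shell call is just the hand-off at the very end.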
I have been watching the DiagrammeR package for a while now, and at this stage, it’s pretty impressive. I encourage you to take a look at what is possible, but be assured the framework is there to do some really awesome things.
One use-case that applies to me is that of data modeling an app within Neo4j. There are already some tools out there, namely:
- Arrows
- Graphgen by GraphAware
- And you can always use graphgists

The last link above is a sample graphgist that is a decent overview.
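For a tiny sketch of what a Neo4j-style data model might look like in DiagrammeR, here is a DOT description rendered with grViz() (the node and relationship labels are my own, and the rendering call assumes the DiagrammeR package is installed):

```r
## a toy graph data model: (User)-[:PLACED]->(Order)
## node and edge labels are illustrative assumptions
model <- "
digraph data_model {
  User  [shape = circle]
  Order [shape = circle]
  User -> Order [label = 'PLACED']
}
"

## render it -- commented out because it requires the DiagrammeR package
## DiagrammeR::grViz(model)
```

Because the model is just a DOT string, it is easy to version alongside the rest of the project and tweak as the schema evolves.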
I have been playing with Neo4j quite a bit, mostly for fun, as I figure out when and where I could apply it to solve various analytics problems. Neo4j, at its core, is a database, which allows us to query data in a structured way. The graph model within Neo4j is very flexible, and the Cypher query language is fantastic. Once you get over the learning curve, you can write some really powerful queries with only a few lines of code.
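As one example of what I mean, here is a short Cypher query wrapped in RNeo4j (the node labels, relationship type, and graph URL are illustrative, and actually running it assumes a local Neo4j instance):

```r
## a few lines of Cypher can answer a fairly rich question --
## e.g., the top 5 users by number of orders placed
## (labels and relationship types are illustrative assumptions)
query <- "
MATCH (u:User)-[:PLACED]->(o:Order)
RETURN u.name AS user, count(o) AS orders
ORDER BY orders DESC
LIMIT 5
"

## against a running local instance, via the RNeo4j package:
## graph <- RNeo4j::startGraph('http://localhost:7474/db/data/')
## RNeo4j::cypher(graph, query)
```

An equivalent aggregation over a relational schema would typically take a join plus a GROUP BY; in Cypher the relationship pattern does most of that work.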
I have been working on a team that is aiming to implement a Salesforce-based CRM solution for Enrollment Management. From the beginning, we had an aggressive timeline, and the project has taken many twists and turns along the way. While our experience is certainly not unique, and is perhaps commonplace, it has given us an opportunity to evaluate some of the fundamental steps that should be in place before we continue down our deployment path, especially since our go-live date is currently TBD.
This repo contains my first-ever R Shiny project. It's simple and represents a minimally viable app: it allows us to query and visualize the NHL's play-by-play event logs for a given game.
I updated the app for the 2015-16 season. There are a few manual updates to the code that I could refactor to let the end user set, but in the short run, it works.