Blog

Yelp Vancouver Instagram Takeover

Yelp Vancouver approached me to do a takeover of their Instagram account. I was able to feature some of my favourite food spots as well as further expand my audience. It was a lot of fun and I’m really happy with how it turned out. Below are the stories I created for the takeover on April 30th, 2019.

Foodora Campaigns

Foodora (a food delivery company) first reached out to me in January 2018 to collaborate and help market their campaigns. Since then, I have done numerous promotional posts for them, spreading awareness and increasing engagement. The most recent collaboration, for Valentine’s Day 2019, was a giveaway giving my followers a chance to win one of two $50 vouchers to use on the app. Entrants were required to follow both @foodora_ca and myself, which helped foodora gain real, active followers with an interest in Vancouver’s food scene. They were also asked to tag their friends in the comment section (1 tag = 1 entry), which allowed the campaign to spread quickly throughout the Lower Mainland.


❤️Giveaway closed❤️ Why eat out on the BUSIEST day of the year when you can have the perfect night in with your Valentine? I’ve partnered up with my favourite food delivery app @foodora_ca . Choose from a huge variety of restaurants(Zakkushi, Virtuous Pie, Cartems, the list goes on!!) and have it all delivered right to your door. – We will be giving away $50 vouchers to TWO people to use on their app, Foodora. (You don't have to use it on Valentines day. You can save it for later!) – To Enter the Giveaway: 1. LIKE this photo 2. FOLLOW @foodora_ca AND @erictriesit 3. TAG one friend in the comment (Unlimited unique entries) – The cake is the orange mousse cake from @3quartersfullcafe and it was SO GOOD. The mousse itself had a florally flavour and slightly cheesy, totally complimented the orange perfectly. – Open only to Vancouver Residents. Contest closes on Wednesday, February 13th at 11:59PM. Winners will be randomly selected, DM'ed and announced the next day. Good luck everyone! #Sponsored #lovefoodora _ By entering, entrants confirm they are 13+ years of age, release Instagram of responsibility, and agree to Instagram's term of use.


In November 2018, foodora was approved to start delivering alcohol. This was a big deal! They offered free delivery for all of November, which created an incentive to order beverages that month. This post played a large part in my own freelance photography career: it showed employers that I was able to do professional product shoots, and in turn led me to work with various companies.

Here was my post to help kick start the launch:

7 Eleven X foodora

foodora formed a national partnership with 7-Eleven, and everyday essentials (milk, bread, Advil, condoms, Tylenol, etc.) are now available for delivery exclusively through foodora Canada-wide. In addition, every order came with a 7-Eleven x foodora tote bag.

My audience is mostly students in Vancouver, so when foodora announced that they delivered to The University of British Columbia, it was a BIG DEAL. This came in handy during exam season!

Pi (3.14) day was a really fun campaign and also my first! Who doesn’t love a good double meaning?! foodora worked with Australian company Peaked Pies to create an exclusive surf-and-turf pie ONLY AVAILABLE through the foodora app.

STAT 406 – Statistical Learning Course Reflection

All I can say is props to Professor Barrera for not only conveying SO MANY concepts to us clearly, but also making them interesting with his weird sense of humor. This is one of the courses that made me think that EVERY SINGLE concept taught in this class is going to somehow benefit me or be used in the future.

Topics covered:

  • Supervised and unsupervised learning
  • K-fold cross validation
  • Prediction models (linear, non-linear) and non-parametric models
  • Variable selection: step-wise, sequencing, shrinkage
  • LASSO, Ridge regression, Elastic net
  • Smoothers (local regression, kernel, splines)
  • Regression and classification trees
  • K-nearest neighbors, QDA, LDA
  • Logistic Regression
  • Bagging
  • Curse of Dimensionality
  • Boosting
  • Random Forests
  • Neural Networks

MNIST Handwritten Digit Classifier

Dataset: Popular machine learning dataset from http://yann.lecun.com/exdb/mnist/ . Images are 28×28 pixels, resulting in 784 explanatory variables. Each pixel has a grayscale numerical value. The label/response variable is a digit from 0-9. The training set used here has 7000 observations while the test set has around 2000.

Goal: Use training/test set method in conjunction with ML algorithms to classify handwritten digits correctly.
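Before fitting anything, here is a minimal sketch of how the data could be read in; the file names mnist_train.csv and mnist_test.csv are hypothetical placeholders for wherever your copy of the data lives.

# Hypothetical file names; adjust the paths to your own copy of the data
dat.train <- read.csv("mnist_train.csv")  # ~7000 rows: label + 784 pixel columns
dat.test  <- read.csv("mnist_test.csv")   # ~2000 rows, same layout
dim(dat.train)          # sanity check: 785 columns (label + 28*28 pixels)
table(dat.train$label)  # how many examples of each digit we have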


We can try a decision tree first; it’s the simplest approach and can sometimes lead to good results.

library(rpart)
# Fit a classification tree: response is the digit label, predictors are the 784 pixels
dtree <- rpart(as.factor(label) ~ ., data = dat.train, method = 'class')
# Plot the fitted tree
plot(dtree, uniform = FALSE, margin = 0.1)
text(dtree, use.n = FALSE)



Each internal node splits the data based on a threshold on one of the pixel values, and each split is chosen so that the resulting sub-trees are as homogeneous as possible.
I personally like to imagine a decision tree as one of those carnival games where you drop a marble into a box of pegs, but in this case the pegs are nodes deciding where the marble will fall.

With classifiers, MSPE is not appropriate. Instead, we compare our predictions against the test set labels and report accuracy: the number of correctly classified digits divided by the total number of rows in our test data.

# Predicted class for each test image
dtree.pr <- predict(dtree, newdata = dat.test, type = 'class')
# Accuracy: 1 minus the proportion of misclassified digits
dtree.pr.acc <- 1 - sum(as.numeric(as.numeric(levels(dtree.pr))[dtree.pr] - as.numeric(dat.test$label) != 0)) / nrow(dat.test)
dtree.pr.acc

#0.6165769 – a 61% success rate is not the best, but this is to be expected. Decision trees are known to be unstable; a small change in the data could result in a COMPLETELY different tree.

So let’s try something else.

Bagging / bootstrapping

Now imagine I went up to 20 different magical talking decision trees and asked them what digit I have written on a piece of paper.
Eight of them say the number 7 and the other twelve say it’s the number 9. We take the majority as our prediction. (For regression we could take the average response, but this is classification!)

We don’t have 20 different magical talking trees, so we need to make them. This process is called bootstrapping: randomly sample with replacement from our current training set to create 20 new pseudo-training sets. This can be computationally intense and could take a while.

# Grow deep (deliberately overfit) trees
my.c <- rpart.control(minsplit = 3, cp = 1e-6, xval = 10)
ensemble <- 20
ts <- vector('list', ensemble)
n <- nrow(dat.train)
for (j in 1:ensemble) {
  # Bootstrap sample: draw n rows with replacement
  ii <- sample(1:n, replace = TRUE)
  ts[[j]] <- rpart(as.factor(label) ~ ., data = dat.train[ii, ], method = 'class',
                   parms = list(split = 'information'), control = my.c)
}

Fitting a tree to each of the 20 pseudo-training sets results in 20 different decision trees. Voting across them further reduces variance.

It’s important to acknowledge that these trees are overfit, and we need to resist the urge to prune, which is the process of simplifying a tree by removing leaves/branches with low predictive power. In the case of bagging, an ensemble of overfit trees provides us with a better majority vote.
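For contrast, if we did want to prune a single tree, a minimal sketch (using the cross-validation table rpart stores in the dtree fitted earlier) could look like this; for bagging we skip this step and keep every tree fully grown.

# Pick the complexity parameter with the lowest cross-validated error
opt.cp <- dtree$cptable[which.min(dtree$cptable[, "xerror"]), "CP"]
dtree.pruned <- prune(dtree, cp = opt.cp)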

Let’s calculate our prediction error now:

# Collect each tree's predictions as one column of a character matrix
prs <- list()
for (j in 1:ensemble) {
  pred <- predict(ts[[j]], newdata = dat.test, type = 'class')
  prs[[j]] <- as.character(pred)
}
prs.mx <- do.call(cbind, prs)

# Majority vote across the 20 trees for each test image
# (note: base R's mode() returns the storage mode, not the statistical mode)
prs.bagg <- apply(prs.mx, 1, function(votes) names(which.max(table(votes))))

bagg.pr.acc <- 1 - sum(as.numeric(as.numeric(prs.bagg) - as.numeric(dat.test$label) != 0)) / nrow(dat.test)
bagg.pr.acc

#0.8445829 – around 84% success rate! Pretty good but we still have one more thing to try…

Random Forest

Random forest, in my experience, is usually the best option. Similar to bagging, we build an ensemble, but random forest attempts to break up the correlation between trees by randomly limiting the features available at each split in each tree. One drawback of random forests is that we need a sufficiently large ensemble for all the features to get a chance to be used.

library(randomForest)
# Fit a random forest of 50 trees on the training set
randomf <- randomForest(as.factor(label) ~ ., data = dat.train, ntree = 50)
# Predict on the test set and compute accuracy
randomf.pr <- predict(randomf, newdata = dat.test, type = 'class')
randomf.pr.acc <- 1 - sum(as.numeric(as.numeric(levels(randomf.pr))[randomf.pr] - as.numeric(dat.test$label) != 0)) / nrow(dat.test)
randomf.pr.acc

#0.93465612 – 93% success rate!! Pretty good, I think we can stop there.
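If we wanted to dig a bit deeper before stopping, a quick cross-tabulation of predicted versus true labels (a confusion matrix) would show which digits the forest mixes up; this is just an optional extra check on the same fitted model.

# Rows: predicted digit, columns: true digit
table(predicted = randomf.pr, actual = dat.test$label)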

Paris Pride

The last 4 months in Europe have truly changed me. Working and living abroad has rekindled my passion for what I do and allowed me to grow as a person.

I was fortunate enough to attend my first Pride event ever in Paris and it was one of the best experiences of my life. Attending this event evoked bittersweet emotions in me. It made me think of how far we as a society have come. I never expected so many people to come out to show their support. The event was a literal party: rainbow everything, hundreds of floats, music blasting, drag queens everywhere, people celebrating love and inclusion. I was so moved. Below are videos and photos I took at the event:


STAT 443 – Time Series and Forecasting Course Reflection

After finishing STAT 443, I feel that the topics covered in this course will be very useful in my career. The professor, Natalia Nolde, was great and I felt that she genuinely cared for her students. We were required to submit our assignments as PDF versions of R Markdown documents. It was fun to learn, and seeing the results after knitting our code was super rewarding. My friend Shangeeth described our finished R Markdown assignments as works of art because they looked so professional and took a lot of hard work.

Topics covered in this course:

  • Trend and seasonality (additive/multiplicative)
  • Autocorrelation/Autocovariance and correlogram
  • White noise / error
  • Yule-Walker
  • Stationarity
  • Stochastic models including AR, MA, ARMA, ARIMA, SARIMA models
  • Exponential smoothing
  • Holt-Winters methods
  • Box-Jenkins prediction approach
  • Frequency domain
  • Fourier transforms
  • Spectral density
  • Models for changing variance: GARCH processes

STAT 306 – Finding Relationships in Data Course Reflection

I also think that the topics covered in this course will be very relevant at my future job. A lot of the course was done in R. There was an interesting homework assignment where the professor gave the entire class the same data set; whoever got the lowest Mean Squared Prediction Error with their model would get a high mark. We were able to model the data however we liked. I personally used a training/test subset approach, sketched below.
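Here is a minimal sketch of that training/test approach, assuming a hypothetical data frame dat with a numeric response y and the remaining columns as predictors (the actual class data set isn’t reproduced here).

set.seed(306)
n <- nrow(dat)
train.idx <- sample(1:n, size = floor(0.8 * n))    # 80/20 train/test split
fit <- lm(y ~ ., data = dat[train.idx, ])          # fit on the training subset
pred <- predict(fit, newdata = dat[-train.idx, ])  # predict on the held-out subset
mspe <- mean((dat$y[-train.idx] - pred)^2)         # Mean Squared Prediction Error
mspe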

Topics covered:

  • Modeling a response variable as a function of several explanatory variables
  • Multiple regression for a continuous response
  • Logistic regression for a binary response
  • Log-linear models for count data
  • Finding low-dimensional structure
  • Principal components analysis (PCA)
  • Cluster analysis

MATH 307 – Applied Linear Algebra

Definitely one of the harder courses. However, I came out with good MATLAB knowledge, which I used in other courses I was taking at the same time. The course felt like a combination of many of my past MATH courses, but more in-depth and difficult. It always feels rewarding when you get to apply mathematical concepts to real-world problems like chemical systems, circuits, and Markov chain probabilities.

Topics Covered:

  • Solving linear equations
  • Interpolation
  • Finite difference approximations
  • Subspaces, basis and dimension
  • The four fundamental subspaces for a matrix
  • Graphs and networks
  • Projections
  • Complex vector spaces and inner product
  • Orthonormal bases, orthogonal matrices and unitary matrices
  • Fourier series
  • Discrete Fourier transform
  • Eigenvalues and Eigenvectors
  • Hermitian matrices and real symmetric matrices
  • Power method
  • Recursion relations
  • Markov chains
  • Singular value decomposition
  • Principal component analysis (PCA)
  • Applications of linear algebra to problems in science and engineering
  • Use of computer algebra systems for solving problems in linear algebra

UBC Storm the Wall 2018

As my time at UBC was coming to an end, I asked myself, “did I truly get the full university experience?”

The answer was no. So I made a pact with my close friends that we would say yes to various UBC events, join more clubs, and become more involved with school activities. This year I finally got to try UBC AMS’ annual Storm the Wall.

“Storm the Wall has been a lasting tradition at UBC starting in 1978. It has grown to be the largest intramural event in North America with over 800 teams registering! The race itself is similar to a triathlon, but completed as a relay with a team of five. One team member swims laps in the Aquatic Centre, the second does a sprint up to Main Mall, the third bikes around Main Mall, the fourth runs through campus to the wall, and the last teammate joins the team at the wall where every team member will climb over!” – Recreation UBC

I volunteered to do the long-distance run through campus, and boy did I overestimate my physical abilities.


CPSC 312 – Functional and Logic Programming Course Reflection

CPSC 312 – Functional and Logic Programming was an interesting course. This was the first computer science course that required us to learn and be proficient in two drastically different programming languages.

In this term, we learned both Prolog (logical programming) and Haskell (functional programming).  We also coded TicTacToe in both languages for our two projects.

I found Haskell easy to learn as it was similar to other function-based languages like Java. My favorite part was that each function had a type declaration which basically told us what parameters it took. This made things so much easier! For example:

sum :: Num a => [a] -> a
sum takes a list of values of some type and returns a single value of that same type.
That type must be an instance of the Num class (supporting +, *, -, etc.).

Another thing I liked was that there are built-in functions in Haskell called foldr and foldl, which are basically “for each” (reduce) functions over a list. Example:

Let the function harmonic take one parameter n.

1 + 1/2 + 1/3 + 1/4 + 1/5 + … + 1/n

harmonic n = foldr (+) 0 [1/i | i <- [1..n]]

THIS WEBSITE WAS A LIFESAVER: http://learnyouahaskell.com/chapters

Prolog, on the other hand, did not come as naturally to me. Prolog code was weird! Basically, you write facts and rules (clauses) that are taken to be true. Then you ask Prolog queries, and it will either tell you whether the query is true or give you values that make it true. For example, this is Prolog code:

foo([],Y,Y).
foo([A|B],C,[A|D]) :- foo(C,B,D).

and the query that you would type into Prolog would be: ?- foo([1,3], [9,7,4], A).
result given: A = [1, 9, 3, 7, 4].

The same function but in Haskell:

foo [] y = y

foo (a:b) c = a: foo c b

Brief summary of Prolog topics covered in this class (that I can recall):

  • Propositional Definite Clauses
  • Bottom-up proofs (soundness and completeness)
  • Top-down proofs
  • Box model
  • Negation as failure
  • Relations and Datalog
  • Variables
  • Functions
  • is
  • Lists
  • Trees
  • Difference Lists
  • Triples
  • Ontologies
  • Proofs with variables
  • Unification

Haskell topics:

  • Tuples
  • Lists
  • Recursion
  • Lambda
  • List comprehension
  • Folding
  • Call by value
  • Call by name
  • Lazy evaluation
  • Types
  • Classes
  • Data
  • IO
  • Briefly touched on machine learning
  • Abstract data types