
CPSC 312 – Functional and Logic Programming Course Reflection

CPSC 312 – Functional and Logic Programming was an interesting course. This was the first computer science course that required us to learn and be proficient in two drastically different programming languages.

This term, we learned both Prolog (logic programming) and Haskell (functional programming). We also coded TicTacToe in both languages for our two projects.

I found Haskell fairly easy to learn, since writing functions felt familiar from languages like Java. My favorite part was that each function had a type declaration, which told you exactly what parameters it took and what it returned. This made things so much easier! For example:

sum :: Num a => [a] -> a

sum takes a list of some type and returns a value of that same type. That type must be an instance of the Num type class, which provides the numeric operations (+, -, *, etc.).

Another thing I liked was Haskell's built-in fold functions, foldr and foldl, which basically collapse a whole list into a single value (a bit like a “for each” loop that accumulates a result). Example:

Let the function harmonic take one parameter n.

1 + 1/2 + 1/3 + 1/4 + 1/5 + … + 1/n

harmonic :: (Enum a, Fractional a) => a -> a
harmonic n = foldr (+) 0 [1/i | i <- [1..n]]
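
A quick sketch of my own (not from the course notes): with (+) it makes no difference which fold you use, but the difference between foldr and foldl shows up with a non-associative operator like subtraction.

foldr (-) 0 [1,2,3]  -- 1 - (2 - (3 - 0)) = 2
foldl (-) 0 [1,2,3]  -- ((0 - 1) - 2) - 3 = -6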

THIS WEBSITE WAS A LIFESAVER: http://learnyouahaskell.com/chapters

Prolog, on the other hand, did not come as naturally to me. Prolog code was weird! Basically, a program is a collection of facts and rules (clauses that are taken to be true; the :- in a rule reads as “if”). You then ask Prolog queries, and it either tells you whether the query is true or finds values for your variables that make it true. For example, this is Prolog code:

% foo interleaves two lists, alternating elements from each;
% when the first list runs out, the rest of the other is returned.
foo([],Y,Y).
foo([A|B],C,[A|D]) :- foo(C,B,D).

and the query that you would type into Prolog would be: ?- foo([1,3], [9,7,4], A).
result given: A = [1, 9, 3, 7, 4].

The same function but in Haskell:

foo :: [a] -> [a] -> [a]
foo [] y = y
foo (a:b) c = a : foo c b
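
Loaded into GHCi, it gives the same interleaved answer as the Prolog query:

ghci> foo [1,3] [9,7,4]
[1,9,3,7,4]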

Brief summary of Prolog topics covered in this class (that I can recall):

  • Propositional Definite Clauses
  • Bottom-up proofs (soundness and completeness)
  • Top-down proofs
  • Box model
  • Negation as failure
  • Relations and Datalog
  • Variables
  • Functions
  • is (arithmetic evaluation)
  • Lists
  • Trees
  • Difference Lists
  • Triples
  • Ontologies
  • Proofs with variables
  • Unification

Haskell topics:

  • Tuples
  • Lists
  • Recursion
  • Lambda
  • List comprehension
  • Folding
  • Call by value
  • Call by name
  • Lazy evaluation
  • Types
  • Classes
  • Data
  • IO
  • Briefly touched on machine learning
  • Abstract data types
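
To jog my own memory, here is a tiny sketch of my own (not course code) touching a few of these: a data declaration, a list comprehension, and lazy evaluation on an infinite list.

-- "Data": a user-defined algebraic data type, consumed by pattern matching
data Shape = Circle Double | Rect Double Double

area :: Shape -> Double
area (Circle r) = pi * r * r
area (Rect w h) = w * h

-- Lazy evaluation: [1..] is infinite, but take only forces five elements
firstSquares :: [Integer]
firstSquares = take 5 [n * n | n <- [1..]]  -- [1,4,9,16,25]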

STAT 344 – Sample Surveys Course Reflection

STAT 344 – Sample Surveys was actually a very enjoyable course for me. Unlike many theory-based statistics courses, STAT 344 gives concrete, practical examples of how surveys are conducted. This gave me a feeling of accomplishment because I could see how what I was learning applied to real-life situations. A common question on practice exams: we are given a table of data and asked to treat it as a

a) stratified sample

b) panel study

c) aggregation of polls

d) cluster sample

and find their respective estimates and standard errors.
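
As one sketch of what that involves (standard formulas, not copied from the course): in the stratified case, the estimate of the population mean and its standard error are

\bar{y}_{st} = \sum_{h=1}^{H} \frac{N_h}{N}\,\bar{y}_h,
\qquad
\widehat{\mathrm{SE}}(\bar{y}_{st}) = \sqrt{\sum_{h=1}^{H} \left(\frac{N_h}{N}\right)^{2} \left(1 - \frac{n_h}{N_h}\right) \frac{s_h^2}{n_h}}

where N_h and n_h are the stratum population and sample sizes and s_h^2 is the within-stratum sample variance; each of the other three designs comes with its own estimator and variance formula.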


Some topics or concepts covered in this course (Off the top of my head):

  • Recommending a sample size in order to satisfy an employer’s preferred accuracy level (a formula sketch follows this list)
  • Bias (an example that helped me understand bias: say you sample random people on the street and ask how many people live in their household; this is a biased way of sampling because members of larger households have a better chance of being approached by you)
  • Ratio vs. Regression vs. “Vanilla” estimation
  • Panel study (has a covariance term)
  • Stratified sampling
  • One-stage cluster sampling (simple random sample of clusters, then sample everyone in the selected clusters)
  • Two-stage cluster sampling (simple random sample of clusters, then another random sample within each selected cluster)
  • Aggregate polls (poll of polls)
  • House-effects (τ)
  • Weighted sampling
  • Proportional/optimal allocation
  • Cluster sampling with probability-proportional-to-size (this was tricky!)
  • Non-responders
  • 3 types of missing data:
      - Missing at random (MAR): the chance of participation varies with the helper variables but not with the variable of interest.
      - Missing completely at random (MCAR): the chance of participation is constant and does not depend on the variable of interest.
      - Non-ignorable missing (NMAR): the chance of participation varies with the variable of interest as well as the helper variables.
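
The sample-size question at the top of this list usually reduces to a standard formula. A sketch (mine, for estimating a proportion p with margin of error e at 95% confidence, so z ≈ 1.96):

n_0 = \frac{z^2 \, p(1-p)}{e^2},
\qquad
n = \frac{n_0}{1 + n_0 / N}

where the second step applies the finite population correction for a population of size N.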

STAT 305 – Statistical Inference Course Reflection

STAT 305 – Introduction to Statistical Inference was a pretty difficult course in my opinion. It was very theory-based, with not many concrete examples. My favorite unit was probably maximum likelihood estimation. I felt as if I could just follow the same game plan for most questions (a worked example follows the steps):

1) Find the likelihood function by taking the product of n probability density functions

2) Take its log to get the log-likelihood, which is easier to work with

3) Take the first derivative of the log-likelihood, set it equal to zero, and solve for the parameter of interest to find the MLE

4) Take the second derivative of the log-likelihood; if it is < 0, this ensures that you are maximizing

5) Fisher information is -E(second derivative)

6) Variance estimate is just 1/(Fisher Info)
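
For instance, a worked example of my own, assuming an Exponential(λ) sample y_1, …, y_n with density f(y; λ) = λ e^{-λ y}:

L(\lambda) = \prod_{i=1}^{n} \lambda e^{-\lambda y_i} = \lambda^{n} e^{-\lambda \sum y_i}

\ell(\lambda) = n \log \lambda - \lambda \sum y_i

\ell'(\lambda) = \frac{n}{\lambda} - \sum y_i = 0 \;\Rightarrow\; \hat{\lambda} = \frac{n}{\sum y_i} = \frac{1}{\bar{y}}

\ell''(\lambda) = -\frac{n}{\lambda^2} < 0 \quad \text{(so } \hat{\lambda} \text{ is a maximum)}

I(\lambda) = -E\left[\ell''(\lambda)\right] = \frac{n}{\lambda^2},
\qquad
\widehat{\mathrm{Var}}(\hat{\lambda}) \approx \frac{1}{I(\hat{\lambda})} = \frac{\hat{\lambda}^2}{n}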

Some topics or concepts covered in this course (Off the top of my head):

  • Moment generating functions. The first derivative evaluated at t = 0 gives E(Y), the mean, while the second derivative at t = 0 gives E(Y²). Var(Y) = E(Y²) - E(Y)². (A worked example follows this list.)
  • Likelihood functions
  • Maximum likelihood estimators (MLE’s)
  • Bayesian prior/posterior
  • Hessian matrix
  • Fisher information
  • Wilks’ and Pearson’s statistics
  • Paired comparisons/comparing 2 multinomial distributions
  • Hypothesis testing using the Neyman-Pearson lemma; significance level, power, and p-value
  • Pooled samples
  • Categorical data with free parameters
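
As a worked example of the MGF bullet above (my own, using an Exponential(λ) variable):

M_Y(t) = \frac{\lambda}{\lambda - t} \quad (t < \lambda)

M_Y'(0) = \frac{1}{\lambda} = E(Y),
\qquad
M_Y''(0) = \frac{2}{\lambda^{2}} = E(Y^{2})

\mathrm{Var}(Y) = E(Y^{2}) - E(Y)^{2} = \frac{2}{\lambda^{2}} - \frac{1}{\lambda^{2}} = \frac{1}{\lambda^{2}}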