SLOG #13 – Final Words and Thoughts

Well, this is going to be my final post on this CSC148 courSe LOG (SLOG) and I’m only now realizing how difficult a task it’s going to be to sum up this semester. But one thing’s for sure: my first semester at the University of Toronto has been phenomenally educational and a great deal of fun.

When I walked into BA1180 on Monday, the 9th of September at 9am, I was nervous, afraid that the other 150 people in the room would be far smarter than me and that the lecture content would prove too challenging. That I was coming in with only a Java background (whereas CSC148 is Python-based) didn’t help my nerves.

Within the next hour though, my nerves settled and I found things to be running smoothly. Most importantly, I found that my professor, Danny Heap, is absolutely brilliant. I still don’t know how he manages to keep us engaged while also remembering most of the students’ names and, of course, providing complex, important information in an easy-to-understand way. Without Danny, I am sure this class would not have been nearly as fun and interesting.

As the semester wore on, I began to understand that learning a new language is really not as hard as it seems, and that the core workings of most object-oriented languages are the same, regardless of the language. Indeed, my past experience in Java helped me a great deal in many parts of the course, from understanding recursion to LinkedLists to BSTs and efficiency. Even for the parts I didn’t know, like memoization and reduce(), I found myself consciously drawing upon my high school education. Having this prior knowledge, in some strange way, actually helped me better understand the concepts and gave me a frame of reference against which to compare my new knowledge.

The exercises and labs, too, I felt were excellent. For the first half of the semester, we got a weekly exercise to complete, which forced me to review my lecture notes and concepts, helping me perform better in the course. Though they were often challenging, I feel all of us could have benefitted from more exercises in the second half of the semester. The labs were also highly educational for two reasons: (1) I worked with two other people, and we continually helped each other, which shored up my knowledge, and (2) my TA, Amirali, was excellent because he always gave us a different perspective on the problem and helped us think in new, creative ways. For this, I thank you, Mr. Amirali.

And of course, I can’t leave out this SLOG itself. Oddly enough, I have found that maintaining a continuous blog on the course has actually forced me to review my notes and has made me think about the concepts in a clear and concise way. Reading and commenting on other students’ SLOGs (like the ones in my Blogroll, on the right navigation bar) also gave me greater depth of understanding. So this courSe LOG has, contrary to my initial belief, helped me do better in this course.

Now, with less than five days to go until the final exam, the nerves I had at the beginning of the semester are once again beginning to show up. But this time, I go in with a different mindset. This time, I know I can cut it, I know that I am good enough. This time, I know that CSC148 has prepared me well enough. This time, I know I can succeed.


SLOG #12 – Sorting and Efficiency (Prescribed Topics)

This week, under the instructions of Professor Danny, I will be writing a two-part SLOG entry about sorting and efficiency. I have, of course, already written about these two topics in previous entries (SLOG #6 — Big-O and SLOG #7 — Sorting), but the fact that it has been several weeks since those entries means I now have a better understanding of both areas.

Sorting
Sorting is one of the most important terms we’ve come across in CSC148, and for good reason. Many sorting algorithms have been developed, each with its own efficiency (a term that I will discuss next). In class, we’ve only looked at a few – namely, the selection, merge, and quick sorts.

As I described in my previous post about sorting, below are short descriptions of each sort (sketched in code after the list):

  • The selection sort – for each element – finds the minimum value from that element onwards, and switches the two elements (this is for sorting in ascending order, of course).
  • The merge sort, on the other hand, continually halves an unsorted list (recursion’s a great tool!), then sorts those “halves” and then finally combines all of these halves to create a sorted list.
  • The quick sort chooses a “pivot” (from the beginning of the list) and splits the rest of the unsorted list into two parts – the elements smaller than the pivot and the elements larger than it. The pivot is now in its final sorted position, so the process is [recursively] applied to each of the two parts.
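Here, roughly, is what each looks like in Python (a minimal sketch of my own, assuming ascending order – not the exact code from lecture):

    def selection_sort(lst):
        """Sort lst in place by repeatedly selecting the minimum."""
        for i in range(len(lst)):
            # Find the index of the smallest element from position i onwards...
            min_index = min(range(i, len(lst)), key=lambda j: lst[j])
            # ...and switch the two elements.
            lst[i], lst[min_index] = lst[min_index], lst[i]

    def merge_sort(lst):
        """Return a new sorted list by recursively halving and merging."""
        if len(lst) <= 1:
            return lst
        mid = len(lst) // 2
        left, right = merge_sort(lst[:mid]), merge_sort(lst[mid:])
        merged = []
        # Combine the two sorted halves into one sorted list.
        while left and right:
            merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
        return merged + left + right

    def quick_sort(lst):
        """Return a new sorted list by partitioning around a pivot."""
        if len(lst) <= 1:
            return lst
        pivot, rest = lst[0], lst[1:]
        smaller = [x for x in rest if x <= pivot]
        larger = [x for x in rest if x > pivot]
        # The pivot now sits in its final position, between the two parts.
        return quick_sort(smaller) + [pivot] + quick_sort(larger)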

In class, we compared each of the various sorts on different input sizes using the time() function from Python’s time module, which was the first time I had ever seen a visual representation of the efficiency of the different sorts.
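Something like the following reproduces that experiment (a sketch using time.time() and the quick_sort above; the code we saw in class may well have differed):

    import random
    import time

    def time_it(sort_function, n):
        """Return the seconds sort_function takes on a random list of n ints."""
        data = [random.randint(0, n) for _ in range(n)]
        start = time.time()
        sort_function(data)
        return time.time() - start

    for n in (100, 1000, 10000):
        print(n, time_it(quick_sort, n), time_it(sorted, n))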

What was very interesting was that some sorts were more efficient than others only for a certain range of inputs, which was something I hadn’t expected. This says a great deal, again, about efficiency: for smaller inputs, the sorts all performed roughly as well as each other, but as the inputs grew, efficiency played a much bigger role. Not surprisingly, the selection sort became inefficient quickly and the merge sort lagged behind, while it was the quick sort that came out on top (of the sorts we studied); this didn’t surprise me one bit because of its Big-O efficiency of O(n log n). Interestingly, though, it turned out that Python’s own built-in list.sort() came up the winner in all but the very biggest inputs, because its sorting strategy itself changes with the input; that, too, is something I didn’t expect, and is definitely something I intend to look into in the future.

But more importantly, the type of input can change the efficiency. For example, an input in reverse-sorted order renders this quick sort (with its first-element pivot) very inefficient, whereas it may not have as big an impact on other sorts – another thing I had never known from previous experience. In such cases, randomizing the input, or raising Python’s recursion limit, can be very important.
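For example (a sketch reusing the quick_sort above; random.shuffle and sys.setrecursionlimit are the standard tools):

    import random
    import sys

    data = list(range(10000, 0, -1))  # reverse-sorted: worst case for a
                                      # first-element pivot

    random.shuffle(data)              # a random order makes bad pivots unlikely
    sys.setrecursionlimit(20000)      # guard against deep recursion on big inputs

    result = quick_sort(data)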

Learning to write sorting algorithms quickly and efficiently is a skill I have yet to master, but one I know I will need in a future career. That said, let me move on to the next term, ‘efficiency.’

Efficiency
Over the past several weeks, another term that’s often come up in CSC148 is efficiency. Specifically, Big-O efficiency (though there are other efficiency descriptors too – Big-Omega and Big-Theta).

Big-O gives an upper bound on the running time of a given algorithm – its worst-case performance – expressed in terms of the algorithm’s input size. For example, the Big-O efficiency of a linear search is O(n), or “Oh-n”, which tells us that, at worst, the algorithm will need to examine all n elements in a list to find a given element.
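In code, the worst case is easy to see (a standard linear search, sketched from memory rather than taken from lecture):

    def linear_search(lst: list, target) -> int:
        """Return the index of target in lst, or -1 if it is absent.

        Worst case, target is at the end of the list or missing entirely,
        so all n elements get examined: O(n).
        """
        for i, item in enumerate(lst):
            if item == target:
                return i
        return -1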

Why do we need to know this? Simply put, it tells us that our algorithm will never perform worse than a given running time, which is helpful because in most situations the actual performance will be better than the Big-O bound. By contrast, a lower bound (which is what Big-Omega gives) is less helpful on its own, because our running time would almost always be worse than it. For the sorts talked about above, these are their respective Big-O notations (with a quick derivation of the first after the list):

  • Selection sort: O(n^2)
  • Merge Sort: O(n log n)
  • Quick Sort: O(n log n) on average — the merge and quick sorts share the same average Big-O efficiency (though the quick sort’s worst case is actually O(n^2)), and in practice the quick sort’s smaller constant factors often let it pull ahead on large inputs
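To see where the selection sort’s O(n^2) comes from: placing the first element means scanning the remaining n − 1 elements, placing the second means scanning n − 2, and so on, for a total of (n − 1) + (n − 2) + … + 1 = n(n − 1)/2 comparisons – a quantity that grows like n^2.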

Arguably, the most important thing I learned about Big-O is that it is tremendously important in the computer science world; it tells a great deal about algorithms, but it tells a great deal more about the person who programmed the algorithms, and Big-O will be the standard that every programmer will be measured against during the course of their interviews, jobs and careers. Big-O is an area that will be absolutely pivotal as I continue my journey in the computer science world.

SLOG #11 – Naming and Memoization

This week was our final full week of school, and it rounded off a semester that has been challenging but incredibly fun. These were the last lectures in which we learned something new, making them all the more important.

First up, we talked about the scopes of programs and Python namespaces. In short, a “namespace” is simply a mapping from names to objects. We talked about how scopes and namespaces interact, how – based upon that – variables are accessed and altered, and ultimately, how programs are executed. We learnt that Python looks, in a sense, “upwards” when searching for a variable name or a method.

  1. That is, it first analyses the innermost scope (usually the body of a function, often known as the “local” scope) for variable names and executes commands based upon that.
  2. Next, it looks at “nonlocal” scopes, or scopes that enclose the innermost scope.
  3. Third, it looks at the “global” scope, where names are defined at the module level or in __main__.
  4. And finally, Python looks at the built-in names that have been used in the program.

This hierarchical understanding is important because it means programmers don’t have to keep creating new variables with similar names, which can make a program hard to read and understand. With this hierarchy, a programmer can use the variable name “deepest_path” in three different scopes, for three different purposes, holding three different values, just as he/she requires. This removes the need for names such as deepest_path_one, deepest_path_two and so on… (as the sketch below shows).
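A small demonstration I put together of that lookup order (the variable names are mine, not from lecture):

    x = "global"          # defined at the module ("global") level

    def outer():
        x = "nonlocal"    # lives in the enclosing (nonlocal) scope

        def inner():
            x = "local"   # innermost scope: this x shadows the other two
            print(x)      # -> local

        inner()
        print(x)          # -> nonlocal (inner's x never touched this one)

    outer()
    print(x)              # -> global
    print(max(1, 2))      # max isn't defined anywhere above, so Python
                          # finds it in the built-in scope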

We then spoke briefly about tracing through algorithms, and learnt a bit about an intuitive approach to tracing. It’s something I’ve been very used to doing though, so I won’t delve too much into it.

Finally, to round off the semester, we talked about “memoization.” This is a term that was completely new to me, so it was a very intriguing lecture. The term refers to an optimization technique for avoiding redundant, repetitive function calls, a problem that arises primarily in recursive functions. We looked at the concept through the lens of the Fibonacci sequence. Say we have a function fib(n: int) that returns the nth Fibonacci number by recursively calling itself: then calling fib(4) computes fib(2) twice, which is unnecessary and slows down the process (and the duplication snowballs for larger n). With memoization, though, we build up a cached table of the results of each function call; every new function call first checks whether its value is already cached. If it isn’t, then the call takes place; if it is already cached, then we simply use that value! This can increase a program’s efficiency many, many-fold!
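In code, it looks roughly like this (a sketch of my own, not the lecture’s, where fib(n) returns the nth Fibonacci number):

    cache = {}

    def fib(n: int) -> int:
        """Return the nth Fibonacci number, reusing cached results."""
        if n not in cache:
            # Only compute a value the first time it is requested.
            cache[n] = n if n < 2 else fib(n - 1) + fib(n - 2)
        return cache[n]

    print(fib(100))  # instant, where the uncached version would take ages

Python’s standard library even has a decorator, functools.lru_cache, that does this caching automatically.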

Also, while digging through some fellow CSC148 students’ SLOGs, I came across this excellent post on the concepts we learnt this week. The author puts their points across very succinctly and clearly. Check it out! :
http://abrarisland.wordpress.com/2013/11/30/slog-week-ten/

And with that, the theory for CSC148 is over. I can’t believe how incredibly fun (but also difficult, anyone remember A1 and A2?) it has been. In my next post, I will be talking about sorting and efficiency (topics assigned to us by Professor Danny), and in my final post (coming sometime next week), I will have a truly reflective, and wrapping-up-loose-ends sort of post.

Do comment on anything by clicking on “Leave a Comment.”

Exams coming soon!

SLOG #10 – Getting, Setting and Altering

This week’s lectures delved into some very interesting material on information hiding, and on altering built-in functions – in particular, combining the elements of an iterable by ‘reducing’ them to a single value.

For the first half of the week, we talked about information hiding. Coming from a Java background, I know that Java builds this into the language itself – specifically, through its ‘public’ and ‘private’ access modifiers. This is in contrast to Python, where – by default – every part of a program is public and available to any other part of the program. While this can be very useful and make programming “easier”, it can also lead to a lot of purposeful and accidental mishaps.

This is where “getter” and “setter” methods (or “accessor” and “mutator” methods) come in. They allow variables to be accessed and set/altered only through these methods, and not in any other way, making a program more secure and fool-proof. The getter and setter methods we learnt about are very similar to those I learned about in Java; the new part for me was Python’s built-in property(), which must be called in order for a variable to be encapsulated, so that it is accessible and mutable only through its getter and setter methods.
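A minimal sketch of the pattern (my own example; the class and attribute names are made up):

    class Student:
        def __init__(self, grade):
            self._grade = grade        # underscore: "private" by convention

        def get_grade(self):
            return self._grade

        def set_grade(self, value):
            if not 0 <= value <= 100:  # the setter can validate before mutating
                raise ValueError("grade must be between 0 and 100")
            self._grade = value

        # property() routes every access to .grade through the methods above.
        grade = property(get_grade, set_grade)

    s = Student(85)
    s.grade = 92        # goes through set_grade
    print(s.grade)      # goes through get_grade -> 92

It is worth noting that Python still doesn’t truly enforce this: the underlying _grade attribute remains reachable directly, so the encapsulation is by convention rather than by compiler rule, unlike Java’s private.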

For the second half of the week, we talked about how to alter built-in functions, and about the reduce() function. reduce() takes in an iterable argument (for example, a nested list) and applies a function cumulatively to its items, so as to ‘reduce’ them to a single return value. We looked primarily at how reduce() can be used to add the values in a list (as opposed to concatenating them), and also to find the sum of the values in a possibly nested list. This was an area that was completely new to me, making it highly interesting and educational (and the fact that Google uses MapReduce makes it even more interesting!)
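For example (a sketch of my own; in Python 3, reduce() lives in the functools module):

    from functools import reduce

    # Apply the function cumulatively, left to right: ((((0+1)+2)+3)+4) -> 10
    print(reduce(lambda acc, x: acc + x, [1, 2, 3, 4], 0))

    def nested_sum(lst):
        """Return the sum of a possibly nested list of numbers."""
        return reduce(
            lambda acc, x: acc + (nested_sum(x) if isinstance(x, list) else x),
            lst,
            0,
        )

    print(nested_sum([1, [2, [3, 4]], 5]))  # -> 15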

A fellow CSC148 student has also written a very good post about this week’s topics, and was – in some parts – able to discuss things more tersely than I have. Check it out!
http://dhyeysejpal.blogspot.ca/2013/11/reduce-and-information-hiding.html

Only a few more lectures of CSC148 to go (3 or 4, I think) – hope they end on a high!

Hope everyone else is getting ready for the end-of-term exams (and of course, for the post-end-of-term joy)! Comment by clicking on “Leave a Comment”!

SLOG #9 – Some Administrative Things

Today’s very short post is regarding just one or two administrative things.

As I have done throughout the semester, I will continue to link to various other interesting SLOG entries from my CSC148 colleagues (the links will be within the body of my own entries) until the end of the semester. However, I have now also put up a “blogroll” — so to speak — of some of my favourite CSC148 SLOGs. I have read some excellent, insightful posts on these SLOGs and want to acknowledge their authors for not only expressing their own thoughts, but also helping others clarify theirs! I hope my own readers derive as much pleasure from these SLOGs as I have.

The links to these other SLOGs can be seen on the navigation bar on the right, under “Blogroll”.

With that said, this week’s entry will be coming up very shortly! Of course, do comment away on anything of interest by clicking on the “Leave a Comment” link above the title of any SLOG entry. I will reply as best I can!

SLOG #8 – Term Test 2

With Fall Reading Break falling (pun intended) on Monday and Tuesday, we did not have our regular Monday lecture and Tuesday lab. What we had instead, was Term Test 2 for CSC148 on Wednesday morning.

This test was an interesting one simply because the problems themselves weren’t too difficult. Indeed, I had seen similar problems in our exercises and labs, which gladdened my heart no end. However, the nature of the problems was such that every single question demanded the use of recursion; and I’m sure every CSC148 student knows that when it comes to recursion, things sometimes simply go wrong.

I was able to answer each question, but I’m slightly unsure whether my algorithms do exactly what I intend them to do. Especially in question 3 (where we had to create a LinkedList of certain nodes in a BST and return the front node of the LinkedList), I know that I went about answering the question in a reasonable manner (though there are simpler ways, I now realize), but I somehow feel my recursive solution will spit out some wrong values – I certainly hope not!

In light of this, while traversing some other SLOGs, I came across a fellow CSC148 classmate’s post regarding Test 2. I particularly enjoyed the post because he supplemented his solutions with detailed explanations of what each segment of the code does. This, and the fact that his solutions were far terser and more efficient than mine, helped me shore up my understanding of recursion and its implementation – so thanks! A link to this excellent SLOG entry is below:
http://ammarqadri.blogspot.ca/2013/11/midterm-2.html

Aside from the test, there was no real CSC148 work to do, so I thought I’d install the Eclipse IDE and Java on my computer, and begin a bit of individual work on that. I have a strong knowledge of Java from high school, but brushing up on skills will be a great way to get into the mood for next semester’s CSC207. I also went ahead and installed the Android Development Tools plugin for Eclipse, and intend to learn a bit about programming on the Android OS (something I’ve always wanted to do).

Exciting times ahead!

How did the rest of our CSC148 class do for Test 2? Anyone else working on any interesting side-projects of their own? Do comment!

SLOG #7 – All Kinds of Sorts

Building off last week’s lectures on Big-O, it was only natural that we were introduced to some of the various kinds of sorting algorithms available to programmers, and how to implement them.

Primarily, we looked at the ‘selection’ sort, the ‘merge’ sort and the ‘quick’ sort. None of the sorts were new to me, but I quickly found that my prior knowledge of sorting was deficient, and this week’s lectures gave me a far deeper understanding than I previously had.

To me, the selection and merge sorts were fairly straightforward, whereas it took me a little while to wrap my head around the quick sort. Below is a short description of my understanding of each of the sorts, with a code sketch of the quick sort after the list.

  • The selection sort – for each element – finds the minimum value from that element onwards, and switches the two elements (this is for sorting in ascending order, of course).
  • The merge sort, on the other hand, continually halves an unsorted list (recursion’s a great tool!), then sorts those “halves” and then finally combines all of these halves to create a sorted list.
  • The quick sort chooses a “pivot” (from the beginning of the list) and splits the rest of the unsorted list into two parts – the elements smaller than the pivot and the elements larger than it. The pivot is now in its final sorted position, so the process is [recursively] applied to each of the two parts.
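Since the quick sort took me the longest to grasp, here is a sketch that spells out the partitioning step explicitly (an in-place variant using the Lomuto scheme, which pivots on the last element rather than the first; my own code, not the lecture’s):

    def partition(lst, low, high):
        """Move lst[high] (the pivot) into its final sorted position and
        return that position; smaller elements end up to its left."""
        pivot = lst[high]
        i = low                            # next slot for an element <= pivot
        for j in range(low, high):
            if lst[j] <= pivot:
                lst[i], lst[j] = lst[j], lst[i]
                i += 1
        lst[i], lst[high] = lst[high], lst[i]
        return i

    def quick_sort(lst, low=0, high=None):
        """Sort lst[low:high + 1] in place by recursive partitioning."""
        if high is None:
            high = len(lst) - 1
        if low < high:
            p = partition(lst, low, high)
            quick_sort(lst, low, p - 1)    # everything left of the pivot
            quick_sort(lst, p + 1, high)   # everything right of the pivot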

The rest of my post is going to deal with the consequences of the sorts that I found interesting, but before I do that, I would like to share a fellow SLOGger’s entry on searching and sorting. This post (and pretty much every other post on this classmate’s blog) delves much deeper into the mechanics of whatever he is talking about, giving the reader a very thorough understanding of the subject, which I absolutely love! The link to the entry is below:
http://g3sf.x10.mx/blog/?id=6

That said, much more interesting than simply learning the sorts was seeing the performance of the sorts. This was the first time I had seen timed executions of the various sorts, so watching them being compared to each other really helped me reinforce the idea of efficiency and Big-O performance. The fact that we compared these sorts to the ‘count’ sort (one I had never encountered before) and Python’s built-in list.sort() method further shored up my understanding of efficiency.

From the comparisons, it was clear that the recursive sorts (merge and quick) were far faster than the non-recursive selection sort. While this was something I had always known to be true in theory, this week’s lectures finally gave a practical side to that knowledge, which was highly educational for me.

With that said, I look forward to next week’s final CSC148 term test. Practice, practice, practice!