Sunday, March 29, 2015

week 11: Review of object-oriented programming

  I've had some new perspectives on object-oriented programming, and I want to compare it with process-oriented programming to strengthen our intuition.
  Let me give a simple example of how process-oriented programming works. Consider the case where we want to describe the sales state of a shop. Say we have three items: pen, pencil, and others.
Process oriented:
num_pen = 0
num_pencil = 0

def sale():
    global num_pen, num_pencil
    num_pen = num_pen - 1
    num_pencil = num_pencil - 1

def purchase():
    global num_pen, num_pencil
    num_pen = num_pen + 10
    num_pencil = num_pencil + 20
Object oriented:

class Shop:
    def __init__(self):
        self.stock = {'pen': 0, 'pencil': 0, 'others': 0}

    def sale(self, type, num):
        self.stock[type] -= num

    def purchase(self, type, num):
        self.stock[type] += num

Advantages of object-oriented programming:
1. Code reuse.
We can easily see from the example above that object-oriented programming is more human friendly. And through class inheritance, we can derive many subclasses that share similar properties or the same methods (see the small sketch after this list).
2. Encapsulation.
One can use an object-oriented program without knowing anything about its internals. As long as we know the type contract, we can call the program and get what we want in a few lines.
Additionally, we can prevent other programmers from tampering with our data, since the object hides its variables quite well.
3. Design benefits.
Once a program reaches a certain size, it is easier to write it in an object-oriented way than in a non-object-oriented way, because OOP forces the programmer to work with a clearer design intention, so the result is less likely to end up as one huge mess.
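
To make points 1 and 2 concrete, here is a small sketch of my own (not from lecture) that reuses the Shop class above: StationeryShop inherits purchase unchanged, extends sale, and the leading underscore on _members hints that outside code should not touch that attribute directly.

class StationeryShop(Shop):
    def __init__(self):
        Shop.__init__(self)
        self._members = []               # a "hidden" detail of this subclass

    def sale(self, type, num):
        Shop.sale(self, type, num)       # code reuse: the parent does the real work
        print('thank you for shopping!')

A caller only needs the type contract: StationeryShop().purchase('pen', 10) behaves exactly like it does on Shop, without the caller reading either class body.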








Monday, March 23, 2015

week 10: Mutating BST

 There are two situations in which we mutate a BST. One is insertion, but that only adds a new leaf without rearranging the existing nodes, so here I will just talk about deletion.
 
 Consider an example BST; there are three cases we may encounter when deleting a node from it.
  Case 1: Deleting a leaf, i.e. a node whose children are both None. This is simple: we just remove it from the BST, which does not break the structure of the tree.
  Case 2: Deleting a node with only one child. In this case, we delete the node and link its only child to the node's parent. For example, if we delete 72 from the BST, we just make 67 the right child of 54.
  Case 3: This one is trickier, and recursion makes it easier. First we find the node with the minimum value in the right subtree of the node we want to delete (this is just a convention; we could equally take the maximum of the left subtree). Then we replace the target node's value with that minimum and delete the minimum node from its old position. So if we delete 17, we find 19 to be the smallest value among 17's right descendants, replace 17 with 19, delete 19 from its previous position, and we are done.
Here is the code implementing the actual method (note that it uses the mirror convention, taking the maximum of the left subtree):

def delete(node, data):
    '''Delete data from the BST rooted at node and return the resulting root.'''
    return_node = node
    if not node:
        pass
    elif data < node.data:
        node.left = delete(node.left, data)
    elif data > node.data:
        node.right = delete(node.right, data)
    elif not node.left:
        return_node = node.right
    elif not node.right:
        return_node = node.left
    else:
        node.data = _find_max(node.left).data
        node.left = delete(node.left, node.data)
    return return_node
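
The helper _find_max is used above but I did not show it; a minimal version (assuming each node has data, left, and right attributes) would be:

def _find_max(node):
    '''Return the node with the largest data in the subtree rooted at node.'''
    while node.right:
        node = node.right
    return node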

But I noticed this may not work on a BST that has duplicate values. One way to solve this, from what I learned, is to also pass the node's address (its identity) when we want to delete it, since the address is unique for each node even when the values are not.
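
Here is a rough sketch of that idea, purely my own guess (assuming the convention that duplicate values always go into the right subtree on insertion): compare nodes with the is operator instead of comparing values, and reuse delete and _find_max from above for the two-children case.

def delete_node(node, target):
    '''Remove the exact node object target from the BST rooted at node.'''
    if not node:
        return None
    if node is target:                        # identity, not value
        if not node.left:
            return node.right
        if not node.right:
            return node.left
        node.data = _find_max(node.left).data
        node.left = delete(node.left, node.data)
    elif target.data < node.data:
        node.left = delete_node(node.left, target)
    else:
        # equal or larger values (including duplicates) live on the right
        node.right = delete_node(node.right, target)
    return node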




Sunday, March 22, 2015

week 9: Choosing powerful unittest cases

  We sometimes need unittest to help us check whether our code works for all cases (ideally). For simple code, we can check it directly by trying a few examples. But for more complex code, like the minimax game strategy we implemented in our assignment, we cannot test all the cases; nonetheless, it is reasonable to be confident that the code works after it passes some powerful tests.
  Take the minimax strategy as an example. Suppose the current value of a Subtract Square game state is N, and out of the 10 possible next moves, only 2 lead to a score of 1. If our suggest_move returns one of those two moves, there is a fairly low probability that it got the desired result just by chance.
  Furthermore, assume the current values N1 and N2 have n1 and n2 possible independent moves respectively (ignoring the fact that they may be correlated). If every test case gives a random chooser only a low probability of picking a right move, then theoretically we can drive the probability of passing by luck toward zero by adding enough such cases. But to be efficient, we should pick good cases rather than cases that cannot differentiate a random suggest_move from minimax. For instance, if there is a state N3 such that every next move leads to a score of -1, that state cannot help us reject wrong code.
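Here is a small sketch of what such a test might look like. The names (SubtractSquareState, suggest_move) and the move representation (the square you subtract) are my own placeholders, not the actual assignment interface; and if I worked out the game tree correctly, the only winning move from a total of 13 is to subtract 1.

import unittest

class TestSuggestMove(unittest.TestCase):
    def test_state_with_one_winning_move(self):
        # from 13 the legal moves are 1, 4 and 9; only subtracting 1
        # (leaving 12) wins, so a random chooser passes only 1/3 of the time
        state = SubtractSquareState(13)
        self.assertEqual(suggest_move(state), 1)

if __name__ == '__main__':
    unittest.main()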
  In conclusion, we should think carefully and choose powerful test cases that give strong evidence about whether the code is correct.

Sunday, March 8, 2015

week 8: Linked List

A linked list is a kind of degenerate tree, if you consider it a tree in which every node has exactly one child except for the single leaf. We can add a lot of useful methods to a linked list, such as append, prepend, and even built-in methods like __getitem__ and __setitem__. For most of these operations we just need a current_node reference to keep track of where we are, traversing the list node by node until the next node is None.
For example, __getitem__:

def __getitem__(self, index):
    ''' (LinkedList, int|slice) -> object

    Return the value at position index.
    '''
    # don't fuss about slices yet
    if index > self.size - 1:
        raise IndexError('index out of range')
    else:
        current_node = self.front
        for i in range(index):
            current_node = current_node.nxt
        return current_node.value

But some operations are quite complicated and require more than one pointer to keep track of the traversal, such as insert_before:
def insert_before(lnk, v1, v2):
    ''' (LinkedList, object, object) -> NoneType

    Insert a new node with value v1 before the first occurrence
    of a node with value v2.  Do nothing if no node has value
    v2.'''
    if not lnk.front:
        # empty list: no node can have value v2, so do nothing
        pass
    elif v2 == lnk.front.value:
        new_node = LLNode(v1)
        new_node.nxt = lnk.front
        lnk.front = new_node
        lnk.size += 1
    else:
        # pre trails one node behind cur so we can splice in front of cur
        pre = lnk.front
        cur = lnk.front.nxt
        while cur and cur.value != v2:
            pre = cur
            cur = cur.nxt
        if cur:
            new_node = LLNode(v1)
            new_node.nxt = cur
            pre.nxt = new_node
            lnk.size += 1
The additional pointer 'pre' is needed because in a singly linked list there is no way to go back to the predecessor of the current node. So we keep a second reference trailing one step behind; when cur reaches the node that meets the requirement, we can operate on the previous node through pre.
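
A quick usage sketch (assuming the course's LinkedList and LLNode classes, with attributes front, size, value and nxt, and an append method):

lnk = LinkedList()
for v in [1, 2, 4]:
    lnk.append(v)
insert_before(lnk, 3, 4)                    # list becomes 1 -> 2 -> 3 -> 4
print([lnk[i] for i in range(lnk.size)])    # prints [1, 2, 3, 4]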











Sunday, March 1, 2015

Week 7: More on recursion

  In our assignment 2, when we used the minimax strategy, we actually applied recursion to one kind of tree structure, and to be honest, it was an easy one.
  In lab06 we were asked to do some exercises on recursion, which were fairly interesting. For example, the part about implementing parenthesize: it is another form of calculator, compared with the RPN calculator I wrote about in week 5's SLOG.
def parenthesize(b):
    '''Return a fully parenthesized string for the expression tree b.'''
    if isinstance(b.data, float):
        return str(b.data)
    else:
        return "(" + parenthesize(b.left) + b.data + parenthesize(b.right) + ")"
And the following is the code for list_between, which returns a list of the tree's values that lie within a given interval.
def list_between(node, start, end):
    '''Return a list of the values in the BST rooted at node that lie
    between start and end, inclusive.'''
    if node is None:
        return []
    elif start <= node.data <= end and is_leaf(node):
        return [node.data]
    else:
        if node.data < start:
            return list_between(node.right, start, end)
        elif node.data > end:
            return list_between(node.left, start, end)
        else:
            return (list_between(node.left, start, end) + [node.data] +
                    list_between(node.right, start, end))

The idea is that instead of traversing the whole tree ourselves, we construct a base case where we know exactly what to return, and for the rest we just need to think like an idiot who only asks its neighbours what value to return, without looking ahead.
In this case, to avoid unnecessary recursive calls, we add if statements to decide whether we need to look at all of the current node's children. For example, if the current node's data is less than start, we don't need to look at its left child, given that it is a well-ordered binary search tree. Similarly, we don't need to look at its right child if its data is bigger than end. Finally, if the data is inclusively between start and end, we concatenate the values returned from the left child, the node's own data, and the values from the right child, calling list_between recursively on both children.
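
A tiny usage sketch (again assuming the BTNode class above and the lab's is_leaf helper), with a three-node BST:

#        5
#       / \
#      3   8
root = BTNode(5, BTNode(3), BTNode(8))
print(list_between(root, 4, 9))    # prints [5, 8]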