
Wednesday, 13 November 2013

Building a Scalable Network Search

WARNING: This whole discussion assumes that every node is well-behaved, and not malicious. Let's take this stuff one step at a time, ok? 
ALSO: Sorry this is so long. Lot to say.

Not sure how to preamble this so... I'll just dive in. Picture of a kitten.

Plain ol' search

 

Since we already know (more or less) how to think about tree algorithms, let's model our networks as trees, the same way I did in my earlier post.

This makes common-sense search pretty easy: have the searching node send a request to all its subtrees, and have them pass it on. If a node has the desired information, it can call back straight to the origin, and celebrate a job well done with a pint of orange juice.
 

Here's what it looks like: if our tree has a branching factor of two (the network then has a branching factor of three, since each node is also connected to its pseudo-parent), each node checks its own records and passes the request on to its two pseudo-children:

        ---C
       /
   ---B
  /    \
 /      ---D
A
 \      ---F
  \    /
   ---E
       \
        ---G


(fyi I labelled the children in pre-order)


Let's call the time taken for a request to reach a child delta_t. We can assume that actually firing a request takes negligible time, since the parent need not wait for a response before sending the next request. It's like sending five letters from the post office at the same time, instead of sending one, waiting for a reply, then sending the second, and so on.

Taking time steps through the process looks like this:

t = 0:  A fires a request to B and then to E.
t = delta_t:  B fires a request to C then D, and E asks F and G.
t = 2 * delta_t:  C, D, F, and G each send requests to their children, and so on.
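
If you'd like to see that in action, here's a minimal Python sketch of the flood (all names are my own, not part of any real protocol): a breadth-first walk where each hop costs one delta_t.

from collections import deque

def flood_times(tree, root, delta_t=1.0):
    # time at which each node first receives the request
    arrival = {root: 0.0}
    queue = deque([root])
    while queue:
        node = queue.popleft()
        for child in tree.get(node, []):
            if child not in arrival:        # every peer gets hit exactly once
                arrival[child] = arrival[node] + delta_t
                queue.append(child)
    return arrival

tree = {'A': ['B', 'E'], 'B': ['C', 'D'], 'E': ['F', 'G']}
print(flood_times(tree, 'A'))
# {'A': 0.0, 'B': 1.0, 'E': 1.0, 'C': 2.0, 'D': 2.0, 'F': 2.0, 'G': 2.0}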


This tells us that we can reach twice as many peers with each increment of delta_t, so searching the whole network takes logarithmic time - doubling the number of peers adds only a constant delta_t to the total! Changing the branching factor or the magnitude of delta_t only changes the size of that increment.
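
To put a number on "logarithmic", here's a back-of-the-envelope helper (hypothetical, purely for illustration) that computes how many hops the flood needs before the geometric growth 1 + b + b^2 + ... covers N peers:

import math

def hops_to_reach(n_peers, branching=2):
    # smallest k such that 1 + b + b^2 + ... + b^k >= n_peers
    b = branching
    return math.ceil(math.log(n_peers * (b - 1) + 1, b)) - 1

for n in (10**3, 10**6, 10**9):
    print(n, "peers ->", hops_to_reach(n), "hops")
# 1000 -> 9, a million -> 19, a billion -> 29

Each thousand-fold jump in network size costs only about ten more hops at b = 2, which is why the scheme scales so well in time.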

But here's the catch: this type of search hits every peer, in the best and worst case alike. Even if the first child B has the desired information, the remaining nodes will propagate not only down their own subtrees, but back into B's as well! B itself stops propagating, but if C, D, E, etc. are connected to any of B's children, those children still receive the request! sadface.gif!

This sucks. It means that although search time scales well, the amount of work every node does as part of other nodes' searches grows linearly with N. Because of this, large networks will quickly choke on all the bandwidth that searches require. Suppose each node runs, on average, one search per minute; then every node must process N requests per minute. Imagine doing this with a million nodes, with the majority of results turning up negative - it's a complete waste.




Slightly better (maybe) search


So obviously we need to modify our search algorithm. We'll certainly need to sacrifice some things, but let's talk about what we want to maintain:

1)  Full searchability: if any node in the network has the data we're looking for, we want to find it eventually.
2)  (More-or-less) constant node work: a node ought not to do much more work in a network with large N (say a few million nodes) than with small N. We could of course *let* a node do more work if it wants to take it on...
3)  Time of data retrieval related to abundance of data: looking for something common, like cat pictures, should take no more time than something rare, like videos of me doing something stupid at that party one time.

We can add to this list, but it's a good place to start. Here's my solution for now: limit the number of hops a request can make. Each hop increments a 'hop counter' on the request, and when the counter reaches some sustainable limit J, the node holds the request instead of forwarding it. If any node within those J hops replies positively with the information, it should broadcast that the search can be called off. Ideally, the information is found within that radius and propagation ceases. Such a search results in node work that depends on J rather than on N, and keeps a nice search time.
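
Here's a rough Python sketch of that hop-limited search. Everything here is my own modelling - real message passing is collapsed into a recursive call, each node's local records become a dict, and hops_left counts down from the limit J:

def limited_flood(tree, records, node, query, hops_left, results):
    if query in records.get(node, set()):   # node checks its own records first
        results.append(node)                # positive reply; stop forwarding
        return
    if hops_left == 0:                      # the hop counter reached the limit J
        return                              # hold the request here
    for child in tree.get(node, []):
        limited_flood(tree, records, child, query, hops_left - 1, results)

tree = {'A': ['B', 'E'], 'B': ['C', 'D'], 'E': ['F', 'G']}
records = {'D': {'cat.jpg'}, 'G': {'cat.jpg'}}
hits = []
limited_flood(tree, records, 'A', 'cat.jpg', 2, hits)
print(hits)  # ['D', 'G'] -- both hits lie within J = 2 hops of A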

But what if nobody in the radius returns a positive? I'd suggest letting one node on the outer boundary propagate the request onwards after waiting some time T for confirmation that the item has not been found. It may also help for each failed node to broadcast its unique ID; that way, the re-propagated request can skip nodes that have already been hit. To be honest, I'm not sure what the work complexity of this kind of search is, because there are a lot of sub-algorithms going on.
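
Here's how the skip-already-hit idea might bolt onto the sketch above (reusing its tree and records): the broadcast failure IDs become a plain visited set. Again, this is just my own modelling, and the wait time T is left out entirely.

def repropagate(tree, records, node, query, hops_left, visited, results):
    if node in visited:                     # this ID was broadcast as a miss,
        return                              # so the second wave skips it
    visited.add(node)
    if query in records.get(node, set()):
        results.append(node)
        return
    if hops_left == 0:
        return
    for child in tree.get(node, []):
        repropagate(tree, records, child, query, hops_left - 1, visited, results)

visited = {'A', 'B', 'E'}               # a first wave with J = 1 failed here
hits = []
for child in tree.get('E', []):         # boundary node E re-propagates after T
    repropagate(tree, records, child, 'cat.jpg', 2, visited, hits)
print(hits)  # ['G']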

Every time the bounds are exceeded, the search suffers a deficit of T extra time. On the other hand, the more 'common' the search (the fewer extra T's), the less work every node does. To this end, it helps for networks to be clustered into groups with similar interests (if I want cat pictures, I should hang out with people who have a lot of cats). But I think this post is long enough : )


At this point I fell asleep, so...

...That's all for now. If you got this far, wow what the heck! We should hang out!

Or just leave a comment, that would be nice : )

Sunday, 13 October 2013

CSC148: Comments on Modularity and Recursion

Hey look you guys I'm gonna talk about OOP and recursion and why they're important, ok?


Object Oriented Programming

Object-oriented programming (compare with functional programming) means modelling a program as abstracted objects that interact with each other.  Each object consists of attributes (values associated with it, like number_of_legs) and/or methods (functions associated with the object, like defenestrate(target)).  Each object is generated from a programmer-defined class, and both objects and classes can then be used or modified.  Classes can also be extended by inheriting properties from existing classes to make new ones (inheritance).
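
Here's a toy illustration of those pieces - attributes, methods, and inheritance - reusing the examples above (the class names are made up):

class Animal:
    def __init__(self, number_of_legs):
        self.number_of_legs = number_of_legs   # attribute: a value on the object

    def defenestrate(self, target):            # method: a function on the object
        print("Throwing", target, "out the window!")

class Cat(Animal):                             # inheritance: Cat extends Animal
    def __init__(self):
        super().__init__(number_of_legs=4)     # reuse the parent's setup

felix = Cat()
print(felix.number_of_legs)    # 4
felix.defenestrate("the dog")  # Throwing the dog out the window!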

The advantage of OOP is that it creates an excellent organizational structure for the entire program, which improves readability and makes logical sections of code more reusable.  For example, I can easily lift a DatabaseManager class out of one project and reuse it elsewhere.  OOP also provides useful abstractions for high-level programming - if you want a linked list, instead of figuring out how it works and implementing it yourself, you can just create the object and use its methods.

The funny thing is that there's practically nothing you can do with objects that you can't do with plain subroutines.  But objects make code far more reusable, as well as readable.  This means it's no longer necessary to create tightly coupled, monolithic programs (which do have their uses, such as hardware programming).  In the best case, each class can be taken out of context and modified without consequence.  Finally, it's simply easier to reason about the pieces of a program as objects with properties; attributes stay organized, and are therefore less likely to fail due to programmer error.  It's definitely recommended practice whenever possible, for the sake of your code's longevity.

PS >> Here's an interesting, albeit somewhat unrelated article I found while researching:  The difference between imperative and declarative programming styles


Recursion

If you're a functional programmer, this is pretty much the bee's knees.  When a problem can be divided into smaller versions of itself, you can use a recursive algorithm to shorten the code, and often greatly simplify the task (reducing the likelihood of bugs).  Similarly to OOP, recursion lets you simplify a complex algorithm into abstractions.

For example, say you want to search through a nested list (lists containing lists containing lists containing..). You could try iterating over the list and checking cases:

l = [1, [2, 3], [4, [5]]]
target = 3
for item in l:
    if isinstance(item, list):           # check if the item is a list...
        for inner in item:               # ...and iterate over it if it is
            if isinstance(inner, list):  # if that list has a list, loop over
                pass                     # that too; more loops for every level
            elif inner == target:
                print("found it!")
    elif item == target:                 # the item ain't a list: check it directly
        print("found it!")

As you can see, the code starts to nest a whole bunch of loops, and even then it's not very flexible (how do you handle a hundred lists nested within each other?).

Now try thinking recursively: We're looking for a value in this list.  If an item in the list is another list, we want to search it, and so on.  Since the search algorithm is the same in each case, we can just write a general statement:
1) Look through a list for a value.
2) If this list contains another list, look through that list using step 1.

Just like that, a monolithic algorithm full of looping and type checking can be shortened to just a few lines.
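
Here are those few lines, as one possible Python sketch (the function name is my own):

def search(lst, target):
    # step 1: look through a list for a value
    for item in lst:
        if isinstance(item, list):    # step 2: the item is another list...
            if search(item, target):  # ...so look through it using step 1
                return True
        elif item == target:
            return True
    return False

print(search([1, [2, 3], [4, [5]]], 5))  # True, no matter how deeply it's nested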

One big problem, however, is that debugging recursion is difficult, or at least different.  If a recursive function fails 100 calls deep, the traceback will show the function itself, 100 times.  This means you ought to take care to understand every possible case before implementing it (e.g. What if the array index goes out of range? What happens when the recursion bottoms out? Consider all the possible arguments).

All in all, recursion is a powerful tool, but with great power comes great responsibility.  Recursive functions should be thoroughly tested with a range of inputs, and kept as isolated as possible (think of each one as a black box that returns the correct answer), because they're really annoying to debug when they fail.  Furthermore, document such functions thoroughly, because they can be difficult for others to read and understand.  But beyond that, recursion is particularly useful for shrinking code down and simplifying it conceptually.