General Solution to the Josephus Problem
April 29, 2016
Let’s figure out how to generalize a solution for some value of n, where n is the number of people in the circle, for the Josephus problem.
Imagine you have a circle of people and you go around the circle removing every second person until one person is left.
If you have 3 people in the circle, then the 3rd person will be the last one remaining.
If you have 4 people then the 1st person will be the last one remaining.
If you have 11 people then the 7th person will be the one remaining.
If you have N people in the circle, who will be the last one remaining?
Getting a Feel for the Problem
First things first, we need to identify some sort of pattern to come up with a general solution. Okay, so how do we do this? A logical first step would be to start with small inputs (a good rule of thumb when you are trying to discover any discernible pattern and have no idea what you are dealing with).
First, one person.
One person is trivially the last person remaining.
Next, two people.
The second person is removed first and we are left with the first person remaining.
Next, three people.
The second person is removed, then the first person is removed, and the third person is the last person remaining.
Okay, starting to get a little tedious...
Next, four people.
Remove the second person, remove the fourth person, then remove the third person, and the first person is the last person remaining.
What are the results so far? For n = 1, 2, 3, 4 the survivors are persons 1, 1, 3, and 1 respectively.
Okay, it doesn’t look like a simple pattern is going to obviously emerge.
We are going to need more data.
Finding the Pattern
What do we when we face a boring, repetitive task? Automate of course! I wrote a few lines of Python that generates exactly the data we need.
This will print out the solution for each value of n, where n is the number of people in the circle.
Here is the output from the REPL:
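A brute-force simulation along these lines produces that output, printing one (n, survivor) pair per line (the upper bound of 16 is an arbitrary choice):

```python
def josephus(n):
    """1-indexed position of the survivor when every second
    person is removed from a circle of n people."""
    people = list(range(1, n + 1))
    i = 0
    while len(people) > 1:
        i = (i + 1) % len(people)  # step over one survivor...
        people.pop(i)              # ...and remove the next person
    return people[0]

for n in range(1, 17):
    print(n, josephus(n))
```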
This can be easily reorganized so that a simple pattern emerges. Starting a new row each time the solution resets to 1, we get:

n = 1: 1
n = 2 to 3: 1, 3
n = 4 to 7: 1, 3, 5, 7
n = 8 to 15: 1, 3, 5, 7, 9, 11, 13, 15

There is a pattern that repeats every so often: each row begins with a solution of 1, and within a row the solutions climb through the odd numbers, increasing by 2 for each step past the start of the row.
Okay, so now we have a nice general pattern to work with.
The next step will be to formulate a general solution for n.
Generalizing from the Particular
How can we develop a formula from this data in particular?
It is clear that the solution will always be at least 1. And it will always be 1 plus 2 multiplied by some number.
So how do we generalize to find that number for any n?
Let's try rewriting the number of elements in each row as a power of two to see which numbers of people return a solution of 1.
Okay, so values of n that are exact powers of 2 result in a solution of 1. What then determines the multiple of two that is added when n is not exactly a power of two?
We can determine this by expressing n as a surplus over the largest power of two that fits within it. Whatever is left over is the number of times to add two to the solution.
We can easily find the largest power of 2 that fits within n by taking 2^⌊log2(n)⌋.
Thus, writing n = 2^m + l, where 2^m is the largest power of two no greater than n and l is the leftover (0 ≤ l < 2^m), we can write the general solution as W(n) = 2l + 1.
This solution is valid for all n ≥ 1.
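As a sanity check, this closed form translates directly into Python (bit_length gives the floor of the base-2 logarithm):

```python
def josephus_closed_form(n):
    # Largest power of two no greater than n: 2 ** floor(log2(n))
    largest_power = 1 << (n.bit_length() - 1)
    l = n - largest_power  # the "surplus" past that power of two
    return 2 * l + 1

print(josephus_closed_form(11))  # 7, matching the example above
```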
Continuous Integration with Node.js/Express Made Easy
March 23, 2016
I decided to make a post about continuous integration because I couldn’t find any straightforward, simple resources for how to setup code coverage and build checks.
What is continuous integration? Essentially, it is the practice of using automated tools to continually verify software quality via tests and build checks every time code is pushed to a shared repository.
We will use Travis CI for our build checks, Coveralls for our code coverage checks, and David for our node dependency checks.
We will use Mocha for our unit testing framework, Chai for our test assertion library, and Istanbul for our code coverage generation (using Mocha).
To give you a general idea of how this is going to play out, the sequence of events looks something like this:
1. Commit/push to the remote repository.
2. Travis CI is triggered to run a build check.
3. Travis CI runs our custom npm run-script command.
4. Istanbul runs using Mocha to run the unit tests and generate code coverage files.
5. Code coverage files are piped to the Coveralls node module.
6. The Coveralls node module sends data to the remote Coveralls server.
7. Code coverage is reported.
8. All of this data is logged and reported with these services, and optionally on your GitHub repository.
Register with Travis-CI
Travis CI takes care of our build checks. By default, Travis CI runs npm test, but in our example we will configure it to run npm run-script coverage in order to leave npm test free for development testing.
Once you have registered an account at "travis-ci.org":https://travis-ci.org, you can simply add your repository and configure it to your needs. You will need to add a .travis.yml file, written in YAML (a human friendly data serialization standard supported across programming languages).
In this example, our .travis.yml file will end up looking something like this:
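A minimal sketch (the Node.js version shown is an assumption; pin whichever version your project targets):

```yaml
language: node_js
node_js:
  - "4"
script: "npm run-script coverage"
```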
Note that we are calling the custom script npm run-script coverage which we will define once we have Mocha and Istanbul working.
You can create an account at “coveralls.io”:https://coveralls.io and easily link it to your public repository.
Next, you need to install “node-coveralls”:https://github.com/nickmerwin/node-coveralls with your package manager:
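For example, as a development dependency (the exact command may vary with your package manager):

```shell
npm install coveralls --save-dev
```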
Unit Testing with Mocha/Chai
Mocha will be our unit testing framework, and Chai will be our assertion library.
You can easily install both of these via your node package manager:
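For example:

```shell
npm install mocha chai --save-dev
```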
You can go over the documentation for these libraries from there to set them up.
Generating Code Coverage with Istanbul
We will use Istanbul with Mocha to generate code coverage files to pipe to Coveralls.
We will define the commands for this in a custom node command under "scripts" in our package.json file.
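A sketch of the relevant package.json fragment (the reporter flags and the coveralls path follow the common Istanbul/Coveralls recipe and may need adjusting for your project):

```json
{
  "scripts": {
    "test": "mocha",
    "coverage": "istanbul cover _mocha --report lcovonly -- -R spec && cat ./coverage/lcov.info | ./node_modules/coveralls/bin/coveralls.js"
  }
}
```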
Istanbul will now be run using Mocha to generate code coverage (.lcov) files which will be piped to the node Coveralls module.
David for Node Dependencies
Using David is about as easy as it gets to manage your node dependencies.
And that’s it! On your next push to your remote repository, everything will be triggered. Best of all, it is all automated so you don’t have to lift a finger. On your next push you can check your Travis CI to see all the details of what is happening with each build instance.
You can get the SVG badges from Travis CI, Coveralls, and David to reflect your repository health in your main markdown file which will automatically update to reflect your latest push as well.
RESTful Requests with Node.js/Express
March 21, 2016
Making RESTful responses using forms with Node.js/Express is fairly simple.
All HTML links are GET requests, while forms can be GET or POST. So where does this leave PUT and DELETE?
Luckily, Express can work some magic in the backend for us.
Each URL route is defined using either one of the following: .get, .post, .put, .delete.
So how do we make PUT and DELETE requests if the HTML specification only allows us to make GET and POST requests?
The answer lies in the method-override module, which pairs with Express 4 (in Express 4 it ships as a standalone middleware package rather than being bundled into the core framework).
Hidden RESTful Form Requests
As middleware in your application:
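A sketch of the middleware wiring (this assumes the body-parser package is also installed, since the hidden field arrives in the form body):

```javascript
var express = require('express');
var bodyParser = require('body-parser');
var methodOverride = require('method-override');

var app = express();

// Parse urlencoded form bodies so req.body is populated.
app.use(bodyParser.urlencoded({ extended: false }));

// Read the hidden _method field from the form body and use its
// value (e.g. "DELETE") as the HTTP method for the request.
app.use(methodOverride(function (req, res) {
  if (req.body && typeof req.body === 'object' && '_method' in req.body) {
    var method = req.body._method;
    delete req.body._method;
    return method;
  }
}));
```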
Then in your form you simply add a hidden input field:
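For example (the /resource route is a placeholder):

```html
<form method="POST" action="/resource">
  <!-- the middleware reads this field and treats the request as a DELETE -->
  <input type="hidden" name="_method" value="DELETE">
  <button type="submit">Delete</button>
</form>
```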
Explicit RESTful Form/Link Requests
Again, as middleware in your application:
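A sketch of the query-parameter variant ('_method' is the conventional key name):

```javascript
var methodOverride = require('method-override');

// Look for a _method=PUT (or DELETE) query parameter on POST
// requests and use its value as the HTTP method.
app.use(methodOverride('_method'));
```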
And then in your link append the method type as a query parameter using action from a form or as part of a link URL:
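For example, a form whose action carries the override as a query parameter (the /resource route is a placeholder):

```html
<!-- Submitted as a POST, handled by Express as a PUT -->
<form method="POST" action="/resource?_method=PUT">
  <button type="submit">Update</button>
</form>
```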
Although HTML forms cannot natively make PUT and DELETE requests, Express/Node.js makes it easy to simulate this behavior using the method-override module.
And now you have beautiful RESTful responses working in your web application!
Simple Pagination with Node.js/Express/Mongoose/MongoDB
March 20, 2016
The basic pattern for a simple pagination system using Mongoose/MongoDB and Node.js/Express is fairly simple.
Each page will be associated with a base URL and a page number.
In our example we will be referring to a collection named Product to model the pagination of an online commerce store.
Your router may look something like:
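A sketch of such a router (the /products path and module layout are illustrative):

```javascript
var express = require('express');
var router = express.Router();

// :pageNumber is a dynamic wildcard, e.g. /products/page/2
router.get('/products/page/:pageNumber', function (req, res, next) {
  // query and render logic goes here
});

module.exports = router;
```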
We will use the pageNumber wildcard attribute to query against our MongoDB database to retrieve the relevant data and set the link for the next and previous buttons.
Retrieving the Results
Our first step is to query the proper results for that given page. We can achieve this with the aggregate pipeline functionality that MongoDB provides.
Aggregation in MongoDB groups values from multiple documents together and can perform a variety of operations on the returned aggregated results. Aggregation can be achieved through the aggregation pipeline, the map-reduce function, and single purpose aggregation methods. We will use the aggregation pipeline in this case.
The aggregation pipeline is a multi-stage data aggregation framework. At each stage you can filter and perform transformations to the data.
We will use Mongoose to interface with our MongoDB collection. We can use the aggregate method to achieve these results. (Note: MongoDB >= 2.1 is required!)
The aggregate constructor returns an Aggregate object. We can bind an Aggregate object to a Model in Mongoose by using the aggregate method on any given Model. Every bound Aggregate object has an exec method that can be used to execute the pipeline on the currently bound object. When we execute this pipeline, exec has a callback that can be used to manipulate the returned data from the pipeline.
We will use the $match, $sort, $skip, and $limit pipeline operators to filter the database objects for each page.
The basic template looks like this. Each enclosed curly bracket pair is an expression.
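A sketch of that template (the field names and stage values are placeholders):

```javascript
Product.aggregate([
  { $match: { category: 'widgets' } }, // filter stage
  { $sort:  { name: 1 } },             // stable sort for paging
  { $skip:  10 },                      // skip the previous pages
  { $limit: 10 }                       // cap the results for this page
]).exec(function (err, results) {
  // results holds the documents for the requested page
});
```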
Now that you have a good idea of the general structure, here is a more specific example. Note that we make judicious use of req.params and res.locals.
req.params accesses the dynamic :pageNumber value, while res.locals sets variables that can be accessed by our HTML preprocessor so that it can create the appropriate next and previous links.
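A sketch of the full handler, assuming a Product model and ten results per page:

```javascript
router.get('/products/page/:pageNumber', function (req, res, next) {
  var perPage = 10;
  var pageNumber = parseInt(req.params.pageNumber, 10);

  Product.aggregate([
    { $match: {} },                           // no filter: all products
    { $sort:  { name: 1 } },                  // stable sort for paging
    { $skip:  (pageNumber - 1) * perPage },   // skip the previous pages
    { $limit: perPage + 1 }                   // one extra to detect a next page
  ]).exec(function (err, products) {
    if (err) { return next(err); }

    // Expose pagination state to the view/template engine.
    res.locals.next = products.length > perPage;
    res.locals.previous = pageNumber > 1;
    res.locals.nextPage = pageNumber + 1;
    res.locals.previousPage = pageNumber - 1;

    res.render('products', { products: products.slice(0, perPage) });
  });
});
```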
Now in your HTML preprocessor you will have access to the boolean variables next and previous, which you can check before creating the next and previous buttons using the nextPage and previousPage data.
And that is all there is to it!
Problem Solving Smells
November 21, 2015
Problem solving is a skill. It must be developed and honed on a regular basis. Like any other skill, it requires many hours to graduate from apprentice to journeyman.
Becoming a better problem solver is a useful asset. It extends further than being able to solve a variety of pre-contrived word problems. It equips you with the toolset that will help you efficiently learn information and reach your goals.
This post will classify some of the common smells we can use to improve our problem solving ability.
I can recommend the following additional resources for further study: How To Solve It by George Polya, Pragmatic Thinking and Learning by Andy Hunt, and The Art and Craft of Problem Solving by Paul Zeitz.
In coding, there are code smells.
bq. a code smell is a piece of code that indicates a deeper problem. It is a warning sign that something should be done to prevent problems down the line. Code smells guard against bad coding techniques, wasted time, and decreased efficiency.
In problem solving, there are problem solving smells.
bq. a problem solving smell is an indicator that we may be headed in the wrong direction in trying to solve a problem. Not paying attention to these indicators leads to more work in the long run. Problem solving smells guard against bad problem solving techniques, wasted time, and decreased efficiency.
Like code smells, problem solving smells protect against long term losses in efficiency. Identifying them allows us to avoid bad practices that we often believe will save time. Ironically, they often achieve the opposite. In ignoring these smells we may gain time in the short term, but we will pay the price in the end.
The following are generally defined problem solving smells.
Overload by Details
When presented with a problem with many details, we may gain lots of information in the absence of understanding. A narrow focus on these facts may provide a false sense of understanding.
We may jump into a problem, eager to solve it, without stepping back and doing proper due diligence. Our lack of understanding may compound further throughout the question as we move forward.
We should always be able to explain the general premise of the problem to a five year old. If we cannot, we should consider this smell.
The danger of this smell comes from the belief that we can proceed by simply utilizing a great number of details. If we do not first take the time to understand the concepts, the results can be illusory.
Proceeding with a shallow understanding may allow us to solve the problem at hand once or twice. It may even allow us to solve a collection of similar problems via pattern recognition. But we will eventually fail when the condition changes and a deeper understanding is required.
Avoid this smell by forcing yourself to get a good grasp of the central concepts. Then, and only then, should we conceptualize the details. It is only in this context that the details will make sense. We can paradoxically save a lot of time in the short and long term using this strategy.
The Indiscernible Obvious
Many times there are pieces of information in the problem that we become habituated to. We may leave a problem, only to come back later and immediately identify the issue. We then find that it was staring us in the face the entire time. How could we have consistently overlooked something so simple?
When we find ourselves exhausting our mental toolbox two or three times over we should be wary. In these cases we should consider that we may be stuck in our current mental sets.
bq. mental sets represent a form of rigidity in which an individual behaves or believes in a certain way due to prior experience.
Often we create mental sets for different entities within a problem. Stepping back and giving ourselves some time can help us identify these obvious mistakes.
The Wrong Direction
We can spend so much valuable time and effort on a problem only to find ourselves stuck at a wall, with every viable option seemingly exhausted and nowhere left to turn.
If all other reasonable options have been tested then we should look for a larger error. Instead of looking for a local mistake we can begin to approach the problem from a global space. Looking at the problem globally may open our eyes to a larger, more broad issue.
In looking at the problem globally, we should consider novel approaches to the problem. This will allow us to better assess where we may have gone wrong. We must be aware of cognitive biases such as functional fixedness.
bq. functional fixedness is a cognitive bias that limits a person to using an object only in the way it is traditionally used.
Think about this approach when the problem has been exhaustively attempted. Then look back globally and redefine your initial approach to the problem.
The following strategies can help overcome some of these problem solving smells.
The Friendly Duck
Buy yourself a rubber duck for your work desk. Every time you are tempted to ask someone else a question, ask the rubber duck instead. In many situations you will come up with the answer by simply stating the question out loud.
This trick is surprisingly effective.
Take a Walk
Going out into nature, or even just taking a short stroll in the office, can have profound benefits in the domain of problem solving. It allows us to divert resources from our conscious, logical brain to our subconscious, abstract brain. These two sides of the brain are commonly referred to as “L-mode” and “R-mode” respectively.
L-mode works in the foreground on language processing, conscious logic problems, and the like. R-mode works asynchronously on the problems you have been grinding away at. Since R-mode works in the background, we must give it a chance to offer its input.
The goal is to dampen L-mode so that R-mode can derive and show us an insight. R-mode excels at finding abstract connections between concepts rather than following linear thought.
Just Write It
Blog about it, journal about it, or draw it. Whatever form of curation you choose, present it as you would to someone who knows nothing about the subject, and focus on keeping things simple. As Einstein supposedly said, explain the concept as if to a five year old.
Feynman had a similar process which has been coined the Feynman Technique.
This technique forces us to abstract the core concepts and filter the important information. Bringing everything together will do wonders for your understanding and approach to the problem.
This will allow you to process everything related to the question with ease.
Becoming a better problem solver is an important skill. Being aware of these problem solving smells is not a silver bullet, but it can help in the process.
Above all else, spending lots of time practicing solving problems is key. And although it goes without saying, enjoy the process. Problem solving can be fun and exhilarating. _Keep these smells in the back of your mind for your next problem set and figure out your own sets of smells_.
Happy problem solving!
Thinking in Decades
November 17, 2015
Anything worth having takes an extraordinary amount of time to achieve.
It is far too easy to become emotionally involved in daily progress (humans are notoriously bad at working towards goals that can only be attained far in the future; we are emotional beings).
When we focus on short term goals, we may find ourselves disappointed and liable to give up.
Success is the result of working consistently for a sustained period of time. Often changes go unnoticed because they happen so slowly.
This is where long term goals come in.
Functions of Success
Suppose you have a goal.
Let w(i) denote the work required of you on the i-th day. Success is the sum of those parts: S = Σ w(i).
Success can either occur linearly or exponentially. Most of the time it is a mixture of the two.
Regardless, the basic formula is the same.
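A sketch of this accumulation in Python (the particular work functions are placeholders):

```python
def cumulative_success(daily_work, days):
    """Sum the daily work w(i) over a number of days,
    recording the running total as it grows."""
    total = 0.0
    history = []
    for i in range(1, days + 1):
        total += daily_work(i)
        history.append(total)
    return history

linear = cumulative_success(lambda i: 1.0, 10)            # steady daily gains
exponential = cumulative_success(lambda i: 2.0 ** i, 10)  # compounding gains
```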
Let's go over two applications of this equation.
The following examples will take advantage of hyperbole using reductio ad absurdum reasoning. They are intended to illustrate the main point rather than some specific strategy.
Exponential success is seen in high risk, high reward activities (e.g. business, sports, wealth, etc.).
Figure 1. Exponential success in high risk/high reward activities.
Let's take our summation over the function 2^x with a = 0 and b = 10, where a and b are measured in years.
Plotting this will give us a beautiful, exponential curve. From this view we can see the payoff over a period of 10 years.
But for the initial years, progress can seem bleak.
To illustrate this, restrict the sum to only the first year or two of the curve.
This resolves to a minute positive increase. At these points, it is tough to see anything but failure as a possibility.
Yet when most people succeed, we hear only of the end of the integration, as if that is all there ever was.
How many people quit after not seeing results from month 6 to 12? It’s tough to keep putting in nights, weekends, etc. when you aren’t seeing any results.
bq. the ones [who] were successful loved what they did so they could persevere, you know, when it got really tough. And the ones that didn’t love it quit because they’re sane, right? Who would want to put up with this stuff if you don’t love it? –Steve Jobs
This emphasizes the need for long term thinking.
Linear success is involved with lower risk, lower reward activities (e.g. skill acquisition).
Figure 2. Linear success in low risk/lower reward activities.
Success in this category requires work over time. This is the basic formula for success.
Without looking at long term goals, we may overshoot what is realistic.
Thinking short term may cause us to give up and miss out on what is possible.
The Long Game
Regardless of whether success occurs linearly or exponentially, the bottom line is the same. Success takes time.
If we expect results to come quickly, we set ourselves up for failure.
Thinking in decades allows us to keep on pushing when there is no success around us. If you are thinking in days, months, even years, then you will likely quit.
Thinking in decades gives you a steadfast work ethic.
It lets you spend less time analyzing progress and more time working.
It increases your resiliency.
Most importantly, it sets you up for real success.
Super Keyword in Java
November 9, 2015
What is super()?
super() is always called when you instantiate a class. Explicitly calling the super() method is only required in certain scenarios.
During construction there are four main phases which begin at the created class instance.
1. Set fields to default values.
2. Execute implicit/explicit this() or super() calls in the constructor.
3. Initialize blocks and fields.
4. Execute remainder of constructor body.
You can imagine this occurring like a chain, starting from the bottom and working all the way up to the Object superclass.
An explicit super() call is optional whenever the parent class has a no-arg constructor; the compiler inserts the call for you. That said, you may want to state it explicitly to clarify your constructor. This behavior is part of what allows for class extension. If the parent does not define any constructor, a simple no-arg default constructor is created.
When instantiating a new instance of Dog() we see the following in the console:
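A minimal sketch of such a class pair, following the Animal/Dog naming used below:

```java
class Animal {
    Animal() {
        System.out.println("Animal constructor");
    }
}

class Dog extends Animal {
    Dog() {
        // The compiler inserts an implicit super() call here first.
        System.out.println("Dog constructor");
    }
}

// new Dog() prints:
//   Animal constructor
//   Dog constructor
```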
As you can see the super() call is implicit in the Dog() constructor. Also note that super() must be the first line in the constructor when explicitly stated.
Main Use Cases
There are three main use cases for explicitly calling super():
1. Passing arguments to the parent constructor from the child.
2. Accessing overridden parent methods from the child.
3. Accessing hidden parent fields from the child.
We will cover each of these briefly.
Passing Arguments to Parent Constructor
So what happens if we change our Animal class to take a parameter? This allows us to then take advantage of the parent constructor from the child.
Now we get a compile error. As it stands, the parent no longer has a no-arg constructor for the implicit call to use. We must explicitly make a super() call in Dog(), passing a String argument.
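A sketch of the change (the "Rex" argument is just an example value):

```java
class Animal {
    final String name;

    Animal(String name) {
        this.name = name;
        System.out.println("Animal constructor: " + name);
    }
}

class Dog extends Animal {
    Dog() {
        // super(...) must be the first statement in the constructor.
        super("Rex");
        System.out.println("Dog constructor");
    }
}
```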
This will output the following in the console:
Accessing Overridden Parent Methods
An overriding class cannot reach the overridden parent method with a normal call, but achieving this with super is easy. Simply call super.overriddenMethod() to invoke the overridden method of your choice.
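A sketch (speak() is a hypothetical method name):

```java
class Animal {
    String speak() {
        return "generic animal sound";
    }
}

class Dog extends Animal {
    @Override
    String speak() {
        // super.speak() invokes the overridden parent implementation.
        return super.speak() + " ... woof!";
    }
}
```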
Accessing Hidden Parent Fields
Hidden fields cannot be accessed without the super keyword. You can access fields that have been hidden by calling super.hiddenField.
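A sketch (the name field is hypothetical):

```java
class Animal {
    String name = "animal";
}

class Dog extends Animal {
    String name = "dog"; // hides Animal.name

    String ownName()    { return name; }       // the child's field
    String parentName() { return super.name; } // the hidden parent field
}
```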
Achieving Goals with MEDs, LD50s, and Inverted U-Curves
November 6, 2015
You must determine the quickest, most efficient route to achieve your goals. Failure to do so will cost you energy, money, and most importantly, time.
Avoid Diminishing Returns By Aiming For The MED
Your plan should include action steps that have you working at peak efficiency.
The minimum effective dose (MED) is the least amount of effort required to achieve an output. This is the path of least resistance. This will cut the amount of time you spend on marginally effective activities.
How do we determine the MED for a given activity? Plan to start by following the simplest plan that you estimate will be able to take you towards your goal (many prefer to start with the most complex plan on the assumption that it will lead to the best results, but the most complex plan is often the toughest to stick to in the long run, leading to poor long term results). If you are not achieving your goal after a period of time, then increase the complexity and workload. If you are overshooting your goal, then simplify the process. Repeat this process until you are at the minimum effective dose.
Evaluate Negatives In Light Of The LD50
Every plan you devise will come with its own set of flaws.
It’s easy to aim for perfection instead of optimizing for overall expected value (perfection is rare in nature; everything has pros and cons). Avoid trying to create the perfect plan. Instead, test the negatives in relation to their lethal doses (LD50). The higher the LD50 for a given input, the less weight it will carry, and the more effective the plan.
Be Skeptical of Positives By Constructing An Inverted U-Curve
We should be wary of overvaluing the expected value of any given action.
The expected result of an action may be misleading without the proper considerations (more is not always better!). Consider the inverted U-curve of the value you expect to gain from each action you take. Aim to reach the vertex of the expected value curve without overreaching, and then move on to the next action.
Pick Occam's Razor
The plan with the fewest assumptions should be your top choice. The fewer assumptions, the more confident you can be in your plan.
Track, Analyze, Modify
Track the key performance indicators from your plan. Make sure to revisit the effectiveness of your plan at a predetermined set interval. This step is essential if we are to avoid our own cognitive biases.
Python Call by Object
October 31, 2015
The three ways to pass an argument into a function are: call-by-value, call-by-reference, and call-by-object. Python passes arguments by call-by-object, which can be described using a combination of call-by-reference and call-by-value.
Calling by reference assigns the local variable of a function to the argument passed in for the corresponding parameter. As a result both the local variable and the argument reference the same point in memory. This means that the function can change the value of the variable outside of its scope.
Python emulates this behavior for mutable variables such as list, set, dict, etc.
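A minimal illustration with a list:

```python
def append_item(items):
    # items is bound to the very same list object as the caller's
    # variable, so mutating it is visible outside the function.
    items.append(4)

numbers = [1, 2, 3]
append_item(numbers)
print(numbers)  # [1, 2, 3, 4]
```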
As you can see the outside variable changes in tandem with the local variable.
Call-by-value creates a new local variable for the result of the incoming argument. This variable is for use in the local scope of the function. As a result, the variable passed in as an argument is not modifiable from the local scope of a function.
Python emulates this behavior for immutable values such as int, str, tuple, etc. once the variable is modified.
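A minimal illustration with an int:

```python
def increment(x):
    # Rebinding x points it at a brand new int object; the
    # caller's variable still references the original object.
    x += 1
    return x

a = 10
result = increment(a)
print(a)       # 10: the outer variable is untouched
print(result)  # 11
```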
The memory location referenced by a does not change until the value of a is modified. At first, Python behaves according to call-by-reference; the behavior “switches” to call-by-value once the immutable variable is modified.
The combination of these two phenomena is how call-by-object works. There is no need to memorize the previous terms.
Objects are always passed into the function scope by reference.
If we pass an immutable object into the local scope of a function and change it, a new object is created for the local name, since immutable objects cannot be modified in place.
This combination of behaviour is known as call-by-object.
Python Equality Operators
October 23, 2015
You compare objects in Python either by identity or by equality.
So what exactly does this mean?
The ‘==’ Operator
Using the == operator checks for equality. The interpreter is checking that the object values are the same.
The ‘is’ Operator
Using the is operator checks for identity. The interpreter is checking that two objects occupy the same space in memory.
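A quick illustration of the two operators (the list values are arbitrary):

```python
a = [1, 2, 3]
b = [1, 2, 3]

print(a == b)  # True: the values are equal
print(a is b)  # False: two distinct objects in memory
print(a is a)  # True: the very same object
```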
Do these rules always hold? Well, it depends.
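For example, on CPython (this caching is an implementation detail, not a language guarantee):

```python
x = 256
y = 256
print(x is y)  # True on CPython: both names share the cached 256 object
```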
The two variables above are declared separately, but they refer to the same object in memory. This is because Python actually caches all integers between -5 and 256 (yay, Python!).
Observe what happens when we reference integers outside of the cacheable range. The id() values for each object are different.
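For example (again CPython-specific; int("1000") forces the object to be constructed at runtime rather than shared as a compile-time constant):

```python
x = 1000
y = int("1000")  # built at runtime, outside the small-int cache

print(id(x), id(y))  # two different ids on CPython
print(x == y)        # True: equal values
print(x is y)        # False: distinct objects
```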
A similar thing happens with strings, although it's less predictable. Just remember that short, simple strings of the same value may be cached.
Moving Your Todo List To Emacs
October 22, 2015
Emacs + Productivity Management = Beauty
For those looking for a very customizable, simple solution for their todo list needs, I highly recommend org-mode.
Figure 1. org-mode being used with org-agenda in a terminal emacs instance.
Note: for new emacs users, any key commands referenced that are preceded by C refer to the control key. C-c c for example refers to pressing Control + c, then c consecutively.
Either download and install org-mode manually to your init.el or install via a package manager like MELPA.
Create your initial org tasks file (this is where your tasks will be captured, no matter where you are in emacs).
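For example (the ~/org location is just a convention; put the file wherever you like):

```shell
mkdir -p ~/org
touch ~/org/tasks.org
```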
Open up your init.el file with your editor of choice to configure your org-mode installation. This configures your todo capture hotkey to C-c c and your agenda hotkey to C-c a. Accessing the agenda will give you a weekly view of your todos at a glance. If you already have your org-agenda-files set then you should omit the first line.
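A sketch of that configuration (the ~/org/tasks.org path is an assumption; use wherever you created your tasks file):

```elisp
;; Omit this first line if org-agenda-files is already set.
(setq org-agenda-files '("~/org/tasks.org"))

(setq org-default-notes-file "~/org/tasks.org")

;; C-c c captures a new todo; C-c a opens the weekly agenda.
(global-set-key (kbd "C-c c") 'org-capture)
(global-set-key (kbd "C-c a") 'org-agenda)
```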
Create Your Todos
Go ahead and try to create your first todo item by pressing C-c c after you restart emacs. You should see your todo creation option as t (note you can add other capture templates that map to other files as well). The primary customization options you should be aware of for your todos are:
Set priorities between #A, #B, and #C with Shift + Up or Shift + Down (these priorities will be taken into account when you pull up your agenda). This will partly determine their order of appearance in the agenda. Set deadlines for your todos using C-c C-d.
Press C-c C-c once you are done configuring your todo item. This will automatically save the task to your tasks.org file. You can always go back to your tasks.org file to view your tasks, or modify them. Remember that all the markup is simply text that org-mode interprets and that the key-commands are simply shortcuts to speed up the process.
Congratulations! You are now ready to begin moving your todo habits over to emacs! Don’t forget that C-c a followed by the command a will allow you to access your weekly agenda.