Welcome to The Daily Cow (note: Cow, not Cal), your daily source of bull, er, cow-related articles! As you enjoy your visit, ask yourself: who needs The Daily Cal when you have The Daily Cow? Be sure to sign up or sign in and let your voice be heard (or seen on the screen)! You will be surprised how many things there are to learn about cows and how prevalent they are in our society.
For those of you who are not familiar with the website, Intrade is a futures platform for predictions about events like elections. The idea is that, through trade, prices converge on the public's consensus estimate of the likelihood of an event, like Trump winning the 2012 election.
Here are some consensus estimates for the 2012 Presidential election:
The second fact reveals something interesting about the beliefs of people on Intrade: somehow they believe that Trump would have a good chance of winning if he were nominated. That encouraged me to research further and compose the following table:
| Candidate | Pr(GOP Nominee) | Pr(Winner) | Pr(Winner \| GOP Nominee)¹ |
| --- | --- | --- | --- |
Interestingly, we see that the implicit belief is that among the Republican candidates, only Gingrich, Ron Paul and Trump have a good chance of defeating President Obama. Does that fit with your intuition?
If the GOP just wanted a Republican in the White House, it would try to nominate the candidate with the highest posterior probability of winning. Yet those candidates also have relatively low prior probabilities of being nominated, suggesting that the GOP is not doing that (or that the Intrade beliefs are wrong). What do you think?
Another interesting fact that surfaced was that the probabilities of each individual candidate winning sum to 100.6%, suggesting that there's an arbitrage opportunity. To exploit it, you short $x worth of every candidate and invest the proceeds somewhere else (say, treasuries). Since exactly one candidate wins, you gain 0.006x plus the treasury returns (say, 2% on x over 1.5 years of investment) for sure, assuming your transactions do not meaningfully move the price of the "shares".
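To make the arithmetic concrete, here is a minimal sketch of the payoff, assuming the quoted probabilities sum to 100.6% and that shorting $x spread across every candidate locks in the 0.6% overround; the function name and dollar amounts are illustrative, not real Intrade mechanics:

```python
def arbitrage_profit(x, prob_sum=1.006, treasury_rate=0.02):
    """Guaranteed profit from shorting $x spread across every candidate.

    Exactly one candidate wins, so the combined short position pays
    (prob_sum - 1) * x regardless of the outcome; the proceeds sit in
    treasuries in the meantime (assumed 2% over the 1.5-year horizon).
    """
    overround_gain = (prob_sum - 1.0) * x   # the sure 0.006 * x
    treasury_gain = treasury_rate * x       # assumed treasury return
    return overround_gain + treasury_gain

profit = arbitrage_profit(10_000)  # ≈ $260 on a $10,000 position
```

The point of the sketch is just that the profit is outcome-independent: the 0.6% edge exists because the market's implied probabilities are internally inconsistent.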
Has anyone dabbled with Intrade? What do people think of these estimates? Are they pretty good, or do you think there is profit potential from statistical arbitrage?
¹ I calculated the posterior probability using Bayes' rule as follows: Pr(Winner|GOP Nominee) = Pr(Winner and GOP Nominee) / Pr(GOP Nominee) ≈ Pr(Winner) / Pr(GOP Nominee). The approximation follows from the fact that a candidate must be the GOP nominee (or the nominee of another party, which is highly unlikely) to be the winner, so Pr(Winner and GOP Nominee) ≈ Pr(Winner).
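The footnote's calculation can be sketched in a couple of lines; the prices below are made-up stand-ins for Intrade quotes, not real data:

```python
def posterior_win_prob(pr_winner, pr_nominee):
    """Bayes' rule estimate of Pr(Winner | GOP Nominee).

    Uses Pr(Winner and GOP Nominee) ≈ Pr(Winner), which holds because a
    candidate essentially must be the GOP nominee to win.
    """
    return pr_winner / pr_nominee

# e.g. a hypothetical candidate quoted at 3% to win overall
# and 10% to be nominated:
p = posterior_win_prob(0.03, 0.10)  # ≈ 0.30
```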
As of last night, my laptop's wireless is basically unusable. Here's what happens when I try to ping google.com (The same thing happens even if I ping 192.168.1.1 or yahoo.com):
C:\Users\Yan>ping -t google.com

Pinging google.com [184.108.40.206] with 32 bytes of data:
Request timed out.
Request timed out.
Request timed out.
Reply from 220.127.116.11: bytes=32 time=65ms TTL=53
Reply from 18.104.22.168: bytes=32 time=65ms TTL=53
Request timed out.
Request timed out.
Request timed out.
Request timed out.
Reply from 22.214.171.124: bytes=32 time=66ms TTL=53
Request timed out.
Request timed out.
Reply from 126.96.36.199: bytes=32 time=66ms TTL=53
Reply from 188.8.131.52: bytes=32 time=65ms TTL=53
Reply from 184.108.40.206: bytes=32 time=66ms TTL=53
Request timed out.
Request timed out.
Request timed out.
Request timed out.
Reply from 220.127.116.11: bytes=32 time=193ms TTL=53
Reply from 18.104.22.168: bytes=32 time=65ms TTL=53
Reply from 22.214.171.124: bytes=32 time=86ms TTL=53
Reply from 126.96.36.199: bytes=32 time=73ms TTL=53
Request timed out.
Request timed out.
Request timed out.
Reply from 188.8.131.52: bytes=32 time=66ms TTL=53
Reply from 184.108.40.206: bytes=32 time=66ms TTL=53
Reply from 220.127.116.11: bytes=32 time=66ms TTL=53
Reply from 18.104.22.168: bytes=32 time=68ms TTL=53
Reply from 22.214.171.124: bytes=32 time=66ms TTL=53
Request timed out.
Reply from 126.96.36.199: bytes=32 time=65ms TTL=53
Reply from 188.8.131.52: bytes=32 time=66ms TTL=53
Reply from 184.108.40.206: bytes=32 time=65ms TTL=53
Reply from 220.127.116.11: bytes=32 time=84ms TTL=53
Reply from 18.104.22.168: bytes=32 time=67ms TTL=53
Request timed out.
Reply from 22.214.171.124: bytes=32 time=93ms TTL=53
Reply from 126.96.36.199: bytes=32 time=66ms TTL=53
Request timed out.
Request timed out.
Request timed out.
Reply from 188.8.131.52: bytes=32 time=66ms TTL=53
Request timed out.
Request timed out.
Request timed out.
Request timed out.
Request timed out.
Request timed out.
Reply from 184.108.40.206: bytes=32 time=87ms TTL=53
Request timed out.
Reply from 220.127.116.11: bytes=32 time=65ms TTL=53
Reply from 18.104.22.168: bytes=32 time=86ms TTL=53
Request timed out.
Reply from 22.214.171.124: bytes=32 time=66ms TTL=53
As you can see, about half of the ping requests never complete. Normally this would be a sign of congested airspace, but of the five devices connected to the router, mine is the only one having such problems--the others are still buzzing along just fine. As a result, I have to plug in my Ethernet cable.
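For anyone who wants to put a number on it, here's a quick sketch for quantifying the loss from a saved transcript like the one above; the substring counting assumes the Windows ping output format shown (and the sample log below is a tiny made-up excerpt, not my actual data):

```python
def packet_loss(transcript):
    """Fraction of ping attempts that timed out.

    Windows ping prints "Reply from ..." for successes and
    "Request timed out." for drops, so counting both substrings
    recovers the loss rate from a pasted transcript.
    """
    replies = transcript.count("Reply from")
    timeouts = transcript.count("Request timed out.")
    total = replies + timeouts
    return timeouts / total if total else 0.0

log = """Request timed out.
Reply from 10.0.0.1: bytes=32 time=65ms TTL=53
Request timed out."""
loss = packet_loss(log)  # 2 of 3 attempts dropped in this sample
```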
I've been trying to fix this issue since last night, because being forced to sit at most five feet from the router is too uncomfortable. I uninstalled the only thing I installed between when the wireless worked and when it stopped working (software for a print server, which, surprisingly, didn't work either). I restarted numerous times. I disabled and re-enabled my wireless adapter. I disabled my firewall and adjusted its settings.
I don't know what's wrong: I did not install any Windows updates in between, there are no new drivers for my device, and Windows doesn't report any problems (nor does the troubleshooter give any helpful advice). I've wasted numerous hours dealing with WiFi problems both at home and on campus. Why does it have to be such an unreliable system?
Any advice for me? Feel free to vent your WiFi frustrations here too.
EDIT: Figured out the problem. Jack left, and now I can connect just fine wirelessly:
C:\Users\Yan>ping google.com

Pinging google.com [126.96.36.199] with 32 bytes of data:
Reply from 188.8.131.52: bytes=32 time=68ms TTL=53
Reply from 184.108.40.206: bytes=32 time=66ms TTL=53
Reply from 220.127.116.11: bytes=32 time=84ms TTL=53
Reply from 18.104.22.168: bytes=32 time=66ms TTL=53
What was his computer doing that was disabling my internet?
In decentralized markets, public goods are under-provisioned because the private benefit to any individual of taking action is lower than the corresponding social benefit. There are also instances with the inverse dynamic: companies that extract a small amount of money (or anything valuable, like time) from each individual, which collectively amounts to a large sum when spread over millions of people. No subset of the affected individuals has a strong enough incentive to fight back, and therefore such policies persist.
The world is full of examples of such abuses:
What remedies are there for this problem? For one, the threat of class action lawsuits has some effect on dissuading offenders, but only in cases of legal wrongs. Even then, the benefit to the plaintiff (and his lawyers) is often lower than the costs imposed by the incumbent system.
Other remedies are impractical in our system: The problem is with the nature of politics to favor groups. Asking Senators to stop picking favorites is hard. A solution would require structural changes to our legislative body so that no individual has an incentive to favor his own region at the expense of the nation.
One could presumably require members of Congress to prove that a proposal will cost the country less than it benefits it (taking into account all externalities), but how can we rely on the very members of Congress to honestly assess one another's proposals on that merit?
What solutions do you have?
Amid the recent iPad 2 hype, the Wall Street Journal published an article about the Nook Color (from Barnes & Noble) running Google Android, contending that it provides just the right balance of features for $200, much cheaper than the Motorola Xoom (also powered by Android).
This made me think: If B&N can find success in selling tablets that run Android, why can't Amazon, which has a much larger customer base and success with the Kindle? An Android-powered Kindle would be a significant boon to their business. They are working on their own Appstore. They have a huge selection of books. And they will be able to stream those premium movies to Prime members.
It seems like the next logical step for Amazon to take to expand its reach. What do you think? If Amazon launched an Android tablet for under $300, I would be in line for one.
Remember when you had to pay $85 to apply to each college? It allowed them to pay people to pore over every application, to distinguish between seemingly-indistinguishable applicants, to select the few who would best excel in their institution.
Companies like Google, which received 75,000 applications in a week (~4 million a year) don't have the luxury of doing that and thus must resort to crude filters such as GPA, major (that often must match exactly with what they're looking for), or other keywords (such as Scala, Ruby on Rails, or Facebook). This leads to keyword-stuffing by applicants, and imprecise and inaccurate filtering by employers. The end result is a concoction of false information and disappointed job seekers.
Google intends to hire 0.15% of those applicants. Even the most selective colleges accept between 7% and 10% of their applicants. The reason is simple: there is an entry fee to applying to Harvard or Princeton, and as a result, fewer people even submit applications.
Google should decide on an acceptance goal: say, 4% of applicants. To achieve that goal, it could introduce a (small) application fee--say, $25-$100--whatever corresponds to the point at which it receives only 250,000 applications per year. Reducing total applications from over 4 million to 250,000 would allow the recruiting team to be selective on many more dimensions, and hence pick better candidates. In addition to saving lots of recruiting resources, it would also produce anywhere between $6.25 million and $25 million in application revenue.
The optimal thing to do with those revenues would be to return them to successful applicants: say, 4% of the 250,000 applicants get job offers and 60% of those accept, for 6,000 new hires. Then each hire can get anywhere between $1,041 and $4,166 in "bonus bonus" (i.e., a bonus on top of the sign-on bonus) to keep the program revenue-neutral.
The end result is that many of the current 4 million applicants who know they have little chance would put their resources elsewhere, saving Google lots of time and resources and allowing Google to be more thorough in its hiring process. In contrast, the 250,000 most confident applicants would still apply. This system would therefore produce the exact opposite of an adverse-selection problem: candidates would essentially select themselves. Any rational person (one who chooses actions that maximize expected payoff) with sufficient capital would conclude that it is profitable to spend $100 for a 5-10% chance at $4,166, and unprofitable to do so once the chance falls below the break-even point of $100/$4,166, or about 2.4%. Therefore, of the population of all rational people interested in the job, exactly those who are sufficiently confident would apply.
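The proposal's arithmetic can be sketched end to end; the inputs (250,000 applications, a $25-$100 fee, a 4% offer rate, 60% acceptance) are the post's assumptions, not real Google figures:

```python
def bonus_per_hire(applications, fee, offer_rate=0.04, accept_rate=0.60):
    """Revenue-neutral 'bonus bonus' paid to each accepted applicant.

    Fee revenue is split evenly among the applicants who receive and
    accept an offer, so the program returns everything it collects.
    """
    revenue = applications * fee                      # total fee income
    hires = applications * offer_rate * accept_rate   # 6,000 hires here
    return revenue / hires

low = bonus_per_hire(250_000, 25)    # ≈ $1,041 at a $25 fee
high = bonus_per_hire(250_000, 100)  # ≈ $4,166 at a $100 fee

# Break-even confidence for an applicant facing a $100 fee:
# apply only if your chance of an offer exceeds fee / bonus.
break_even = 100 / high  # ≈ 0.024, i.e. about 2.4%
```

The break-even line is the self-selection mechanism in miniature: the fee filters out exactly those applicants whose own estimate of their chances falls below fee divided by payout.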
I have suggested a mechanism that appears efficient and effective in theory but may run into problems in implementation. The goal is to mitigate the information asymmetry in job applications and to reduce the load on recruiting teams so that they can filter more effectively among those who show sufficient confidence or interest.
I'd like to hear your opinions on how reasonable (or unreasonable) this idea is, and whether you think this will work in practice in the present or in the future.
The recent developments regarding Obama's health care legislation raise the question: how was health insurance organized before the modern age? Let me address that question with a personal anecdote.
A few days ago, I pulled a muscle playing Kinect too intensely. I wasn't aware that I had actually injured something (I thought it was a minor, temporary thing), so I exacerbated the problem by stretching it out too much (through various exercises like pull-ups). The problem became worse yesterday due to a day full of classes (8 am to 5 pm), half of which were too full to have seats. By last night, even lifting a cup became unbearably painful.
Hospitals were far away, and it was late. Plus, it wasn't that severe. But I needed help, and I knew it.
Thankfully, Tricia was nearby and helped me out yesterday and today. Walking out to pick up food and purchase a heat pack would have been too painful to bear.
So this is my quick story regarding health care in the pre-modern age: Your immediate circle of friends and family.
It's been a while since my last post here. This semester has kept me fairly busy with a combination of projects, case competitions, job interviews, and other activities. All of them were fairly routine, except for finding the right full-time job. But before we get there, let's step back a bit to get some perspective.
It wasn't always this way. In fact, just before summer I was certain that, after graduation, I would continue my education to get my MS in EECS (through the 5-year program) while simultaneously completing the MoT program. It sounded ambitious, promising and rational, and had the potential to open up a whole world of opportunities. I would get to take many of the graduate level CS courses I could not take in my undergraduate years and I would get to take some MBA courses that typically cost tens of thousands.
In the end, I would come out of school boasting a shiny resume flashing various, diverse skills and experiences, with the breadth necessary to build and cross bridges, and the depth necessary to excel in technology. I would conduct interesting research, spearhead incredible discoveries and spread my name through publications. I would be one step ahead of my peers who came out of college with just a BS, presumably because I would know "more".
My experience this summer shattered those ambitions--in a good way. I had never thought that I would learn so much while working. My project was self-contained and ambitious, required me to explore the whole stack, and was potentially very useful. My coworkers (especially my mentor Brian) were very supportive, and I learned a lot from them. It made me think about what it is in school that I actually learn from--and I realized that much of my learning happens out of the classroom.
I learn through working on projects (like those from my programming languages course), through tackling difficult problem sets (like those from my randomized algorithms course), and through working with other people (case competitions). But school is structured to be easy and straightforward. There are few opportunities to try something radical, see it fail, backtrack, and try again. In school, you follow a script. Out in the world, you write the script--and you have the same learning opportunities (projects, difficult problems, and interaction).
Of course, that raises the question: why not do research through a PhD program? Why work? I tackled this question early on in college and decided that I would like to work on projects that have immediate, real-life applications. The social advertising space is still being carved out; so is the space for location-based services. In academia, time is measured in years; in industry, in months. You can't afford to wait, because someone else will have established themselves. The excitement of the industry is fueled by the ticking clock.
So that is the story of why I have decided to forgo more years of school in favor of joining the workforce. Some days I still wonder whether that's the right decision for me--whether I will always be one step behind someone who stayed for his MS, or whether I will always be a step ahead of him.
My next post will focus on my decision regarding where I begin my full time career, including what opportunities I pursued and ultimately why I chose Facebook over the alternative options.
The NYT ran an article testifying to Germany's successes in managing the downturn, and their consequent pride in getting the "right economic model".
Of course, there are benefits to their "short work" program of reducing workers' hours instead of letting go of workers entirely--mainly that the workers' skills do not wither away. But fiscal austerity--the main element of the economic model being touted--is the wrong thing to attribute Germany's relative success to.
Should every country embark on fiscal austerity, Germany's only source of success--export-led growth--would not be possible. In other words, Germany's relative success is a consequence of the rest of the world's spending: Germany sucked hundreds of billions of dollars in demand from the rest of the world and contributed none of its own.
Don't get me wrong--Germany is a great export economy. It produces goods--machinery, cars, and other feats of engineering--that are demanded everywhere. I just hope that it and everyone else can see that while tightening their purses may have a positive individual impact, on the whole it just makes the downturn worse.
Any fiscal stimulus to counteract a large downturn like we recently experienced needs to be coordinated and have full cooperation among the large economies. Otherwise, the single country that chooses to veer off-course will suck the demand generated by everyone else, and render the plan useless as a whole.
Germany, please do not have the hubris to believe that what you did is "the" solution. It may only work for you because you chose to harm your neighbors for your own benefit. But it is wrong to tell the world that you have found the solution.
Because you have not.
If you aren't aware, a researcher at HP Labs recently published a proof asserting that P != NP. Although I gave the paper no more than a cursory glance, I know (from summaries) that it leverages principles from physics.
I can leverage something unrelated to complexity theory to "prove" that P = NP! Suppose we take some NP-complete problem R. Let's assume that there are a finite number of particles in the universe--upper bounded by something incredibly unreal like 10^50; and let's assume that there is a finite amount of time in the universe--also upper bounded by something unreal like 10^50 years.
Then there exists a constant K such that no instance of R takes more than K time--perhaps the number of years left in the universe. Therefore, R's running time is upper bounded by the constant K. Since all NP-complete problems can be reduced to one another, all NP-complete problems are bounded by K (or some constant factor of it). Likewise, with large enough inputs, any problem in P is bounded by the same "universal constant". Since a single constant bounds both classes of problems, they are equivalent. Therefore, P = NP.
Disclaimer: THIS IS A JOKE.