If you start at the 1950s, you'll see very simple rock songs: your classic three-chord rock songs. As you hit the 1960s you'll see more complexity; The Beatles, for instance, had more harmonic complexity than what had come before, which continued to be imitated into the 1970s. Then what happens? I don't know; by the 1980s you're looking at a lot of very simple music again, though music is becoming more diverse genre-wise so you're probably getting a larger spread. Then by the present day you have a disturbing trend of one-chord or even no-chord music; apart from rap (which contains no singing but seems to have got simpler even in the backing tracks over the years), we now find that even sung songs are completely lacking in harmony or chord progression. A particularly annoying example I noticed the other day would be that song (dunno who it's by) with the lyrics "We found love in a hopeless place", which seems to have a melody of just four notes.
I could continue this discussion going backwards in time from the 1950s and talking about how the ever-growing harmonic sophistication of art music through Beethoven to Wagner eventually led to a complete breakdown of the idea of harmony in art music which led to music that nobody liked which led to the death of art music and the establishment of rock and roll from square one, but that's another discussion.
The first-position C chord on a guitar is easy to hit with no finger twisting required. It's also easy to switch between that C, Am, and G, and you can do it quickly and repeatedly even while drunk, which is how you can imagine many pop songs are written. The first-position F chord requires a little more careful finger placement but is still easy to get to. Sure enough, you hear this over and over in pop songs: some simple sequence of C, F, G, and A chords, repeated again and again.
Not surprising that the complex guitar chords that require six pencil-thin rubber fingers and a degree in music theory to know how to play aren't heard as often.
The main problem in analyzing tonal music is that we mainly listen to relations between chords. For instance, in the following progression in C major, A major functions as a dominant of D (D is the dominant of G and G is the dominant of C):
C A D G C.
E A B E.
The number of repetitions also matters. Tonally, the progressions C | C | C | G | G and C | G | C | G are the same as C | G. Is he eliminating repetitions in the analysis?
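For the weaker sense of "eliminating repetitions" (collapsing only immediate repeats, which turns C | C | C | G | G into C | G but leaves C | G | C | G alone), a sketch:

```python
from itertools import groupby

def collapse(progression):
    """Drop immediate repeats: ['C','C','C','G','G'] becomes ['C','G']."""
    return [chord for chord, _ in groupby(progression)]

print(collapse(["C", "C", "C", "G", "G"]))  # ['C', 'G']
print(collapse(["C", "G", "C", "G"]))       # ['C', 'G', 'C', 'G']
```

Note this does not treat C | G | C | G as equivalent to C | G; that stronger, tonal equivalence would need a different analysis.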
About using A major in C: you can use it as a dominant of D (see my first example) or as a chromatic mediant in C major. Of course, in modern music you can use anything you want, but these two are the most common uses.
And, naturally, the types of chords used will vary according to the music style.
http://en.wikipedia.org/wiki/Tonicization
http://en.wikipedia.org/wiki/Chromatic_mediant
Regardless, I will be keeping tabs on this. Hah, totally didn't intend that pun.
It's a great article, but I think he may have done a lot of work to find out something that is fairly common knowledge lol. Still cool, though, to have the supporting data. (Edit: it would be cool to make this an interactive piece of data presentation to help you write songs. Also, the I, IV and V chords are so popular because they naturally make people feel good, which is why they show up so often in 'pop' music. Minor chords have a more depressive quality to them.)
Also, if you need proof that certain chords show up often in music, just listen to some Nickelback. Here is a fun link (that I THINK works. my speaker only works in the left side ;) ) http://dagobah.net/flash/nickelback.swf
While I do believe popular songs follow some pattern, I think the chord progression is only a subset of it. Someone should look into why Call Me Maybe is so catchy. Seriously though.
We spent a lot of time doing this sort of thing to flesh out harmonic and melodic patterns and meaning in pieces while at music school. To (grossly) simplify, it's essentially a form of reduction analysis, but the final step of the analysis is always a I - V - I chord progression (tonic - dominant - tonic) with the Three Blind Mice melody above it (stepwise descending). I never found the final reduction particularly useful; though he had a point about the prevalence of the tonic-dominant relationship, it was overblown. The reduction steps were very useful for stripping away flourishes, though, in order to see what was happening at a more basic level in a piece (we analyzed a lot of Mahler this way).
Kinda like Map/Reduce in some ways.
Also, the fact that he found D, E and A among the results is probably because of modulations. It's VERY common for pop songs to modulate up a whole step in a chorus near the end. As mentioned, G, F and C are V, IV and I. If we modulate a whole step, from C to D, the V, IV and I become A, G and D. It would be nice to take those modulations into account in the research.
About the key choice, I believe it's irrelevant to the analysis. It depends a lot on what your instrument is (Bb and Eb are easier on brass instruments), your style (lots of metal songs are in the key of E because E is the lowest note on a guitar), your tuning (lots of rock bands downtune their guitars to Eb or D, etc.), your proficiency, and, most important, the vocalist's range.
Otis Redding - My Girl; Celine Dion - Because You Loved Me (actually lots of songs by her); Monty Python/Eric Idle - Always Look On The Bright Side of Life; Talking Heads - Nothing But Flowers. (If you search "whole step key change" you'll get a bunch.)
This may be best thought of as a lexical analysis of 1300 popular novels, e.g. what is the most popular word following the word "it". The key of a tune 'controls' the chords available in a typical chord progression. A song in the key of C most typically has the progression C-F-G, or I-IV-V in roman numerals: 1 for the tonic C, and 4 and 5 for F and G, the fourth and fifth notes of the key's scale.
More interesting might be the most popular chord progressions, e.g. I-IV-V or II-IV-Im. That's what I was expecting to click through to.
A million monkeys can write a hit in how many years, now? And BTW "it was a dark and stormy night" don't you know.
In other words, "C G a F" isn't materially that different from "G D e C" or "F C d Bb". All three are instances of the same progression: "I V vi IV" ...which happens to be the most hackneyed (or "effective", depending on your point of view) chord progression in popular music over the last 30 years.
If you transform each chord progression into its figured representation, then you can pick up more significant trends such as the one above, or blues changes (e.g. "I I I I / IV IV I I / V IV I I"), and then you can start to discern when they rose to popularity and which ones are falling out of favor.
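The transposition is mechanical enough to script. A minimal sketch (the note table is sharps-only, so flat spellings like "Bb" would need mapping to "A#" first, and only plain major/minor triads in a major key are handled):

```python
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_DEGREES = {0: "I", 2: "II", 4: "III", 5: "IV", 7: "V", 9: "VI", 11: "VII"}

def to_figured(key, progression):
    """Convert chords like 'Am' to roman numerals relative to a major key."""
    tonic = NOTES.index(key)
    out = []
    for chord in progression:
        minor = chord.endswith("m")
        root = chord[:-1] if minor else chord
        interval = (NOTES.index(root) - tonic) % 12
        numeral = MAJOR_DEGREES.get(interval, "?")
        out.append(numeral.lower() if minor else numeral)
    return out

print(to_figured("C", ["C", "G", "Am", "F"]))  # ['I', 'V', 'vi', 'IV']
print(to_figured("G", ["G", "D", "Em", "C"]))  # ['I', 'V', 'vi', 'IV']
```

Both progressions come out as the same I V vi IV, which is exactly the equivalence the comment above is describing.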
For example, in the 50s and 60s, I have no doubt "I vi IV V" was more popular than "I V vi IV" but I have no way to prove it currently and would love to find out if I'm right or wrong on that.
Didn't Pandora do the same analysis for its recommendation engine?
Pop music is all about the simple melody, in terms of imprinting a recognizable pattern on the brain. That is why it's popular, and why you see the same melodies repeated over and over, and over, and over.
My sense, as a classical and electric (contemporary rock / blues) guitarist is that you'd observe interesting deviations from the aggregate results described in the study.
Digging a bit I found the following research piece which shares some more thoughts on this topic:
I know Pandora has done some analysis like this for their database, but I thought it was limited to things like major or minor tonality, upbeat tempo, etc. and didn't delve as much into the nitty gritty harmony. One reason for this might be that these patterns are so universal (spanning lots of genres), that it might not be too helpful for determining what types of music people like. I could be wrong about this though.
A lot of "test-taking" training basically consists of saving time by training away from full reasoning, in favor of cheap-and-good-enough heuristics. Furthermore, those heuristics are over-fitted to the particular problem types on standardized tests. I wonder how much of this study is actually measuring their ability to trigger test-taking instincts on problem types they're not designed for.
The psychologist Keith R. Stanovich is quite controversial among other psychologists precisely because he writes about what high-IQ people miss in their thinking, but his studies point to very thought-provoking data and deserve to be grappled with by other psychologists. I have enjoyed his full-length book What Intelligence Tests Miss, which meticulously cites much of the previous literature on human cognitive biases and other gaps in the rationality of human thinking.
And here is the submitted article's link to a description of the Need for Cognition Scale:
Unfortunately everyone seems to be hung up on the "idea" of being smart, as if having a high IQ somehow constitutes an accomplishment.
In a lake, there is a patch of lily pads. Every day, the patch doubles in size. If it takes 48 days for the patch to cover the entire lake, how long would it take for the patch to cover half of the lake?
If a lilypad is 20 square inches (which is probably conservative) and you started with 1 lilypad, after 48 days of doubling it would cover 1.4 million square miles. That is 44 times the surface area of Lake Superior.
I get the point of the question, but if you're trying to play "gotcha" on people, at least ask a reasonable question.
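For what it's worth, the arithmetic above checks out (taking Lake Superior as roughly 31,700 square miles):

```python
# 1 lily pad of 20 sq in, doubling once a day for 48 days.
pads = 2 ** 48                          # pads after 48 doublings
area_sq_in = pads * 20
SQ_IN_PER_SQ_MILE = (5280 * 12) ** 2    # inches per mile, squared
area_sq_miles = area_sq_in / SQ_IN_PER_SQ_MILE

print(round(area_sq_miles / 1e6, 1))    # ≈ 1.4 (million square miles)
print(round(area_sq_miles / 31_700))    # ≈ 44 Lake Superiors
```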
What he's seeing isn't something new; it's something so old that it's part of popular culture: absent-minded professor syndrome. It's the stereotype of the brilliant physicist who forgets what he's supposed to buy at the supermarket because he's thinking about their quantum properties. Analytic people are horrible at things that don't interest them.
Pay the students $50 for each correct answer, and there's not a doubt in my mind that the results will be the complete opposite of what he's seeing now.
This fallacy is at the heart of the matter. Intelligence and resistance against bias are only loosely correlated. Such resistance comes not from intelligence but from careful study and mental exercise, e.g. looking at various important ethical and philosophical arguments and analyzing them.
This is like saying all large people are strong. There is some correlation, but a smaller gym rat can kick a slacker giant's ass. The sad thing is that while it's obvious you have to exercise your body to be healthy and strong, the fact that the same is true for your brain is often overlooked.
I don't know what right is, but I know the way we currently think about intelligence is wrong.
Any of those articles are a good place to start, so don't be intimidated by the amount of stuff there.
When you find it and it's by someone else, it was obviously a stupid, idiotic error that you would never make.
When you find it and it's your own, it was obviously an understandable mistake that anybody could have made.
Particularly if you consider yourself a great coder.
i _just_ watched that talk a couple of days ago because it was posted here: http://news.ycombinator.com/item?id=4082308
A similar experiment where people draw the wrong conclusions is the Milgram experiment. Yes, most people are obedient to authority figures and do what they are told. But not everyone acts that way.
This research likes to sweep the best human beings under the rug, as if being virtuous is not something to try to emulate, but is something to hide. This explains why the majority of people act the way they do. Perhaps if they were taught that their "we're only human" vices are not the ideal to emulate, perhaps if the best that humanity had to offer were put forth as the ideal instead, then these lesser human beings who make up the majority would become what they might be and ought to be.
This article reminds me of pg's reasons to have a co-founder to avoid being delusional. Better be proven wrong on the inside than on the outside.
edit: Although on second thought, I think this bias theory probably extends to organizations as well. Probably that's why big companies sometimes can't see the obvious which a startup does.
I think it comes down to having a value system where you'd rather be wrong and corrected (even if you have to do it yourself) than always project yourself as "perfect". Once you accept you aren't perfect, it's easier to work towards perfecting what you've got.
Also, I just hate these kind of questions - they've always been used to prove that I'm stupid by those who knew the answers, and they're not solving anything useful - I need the problem to solve something I care about in order for my brain to fully focus on it and "do the math"...
bat + ball = 1.10
bat = ball + 1.00
Adding the two equations: 2(bat) + ball = ball + 2.10, so 2(bat) = 2.10, bat = 1.05, ball = 0.05.
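Or, brute-forcing it in cents to confirm the answer is unique:

```python
# Every (bat, ball) price pair in cents satisfying both constraints.
solutions = [
    (ball + 100, ball)            # bat costs exactly $1.00 more than the ball
    for ball in range(0, 111)     # ball price in cents
    if (ball + 100) + ball == 110 # together they cost $1.10
]
print(solutions)  # [(105, 5)] -> bat $1.05, ball $0.05
```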
This is techno-babble on a scale the world has never seen!
"...streaming data API which connects the dyno manifold to the routing mesh."
Give me a break!
The player characters in their post-apocalyptic world come across a abandoned series of tunnels they can't explain, but which are described for the game master in the notes:
The system once spanned the North American continent and was used primarily as a method of high-speed transportation of freight. The sub-train system is something like a 20th-century subway system, in that it consists of a self-propelled train moving through an underground tunnel. Unlike the 20th-century system, however, the "trains" moved through a vacuum while being supported on super-conducting magnetic rails at very high speeds.
In a quick Google Scholar search, I turn up a 1974 article on "Surface-guided transport systems of the future" (http://dx.doi.org/10.1049/piee.1974.0277, unfortunately not open access), where evacuated-tube transport gets a mention, but under this less-than-enthusiastic banner:
A brief mention is given of other less likely transport systems, such as travel in an evacuated tube beneath or above the ground.
I'd love someone to actually do a credible cost analysis of an evacuated tunnel train. Unfortunately I don't think Oster did.
Also, easy terrorist target.
More at http://en.wikipedia.org/wiki/Transatlantic_tunnel, including reference to Goddard patents on the subject.
Haha! No chance. We might not even see the new link between London and Birmingham (HS2) before 2035, never mind one to New York!
I saw it when I was a kid and I thought it was awesome!
The problem with Civ4 is that the AI is so hyper-aggressive that long-term stability is all but impossible, unless I become a dominant superpower and take on the role of "world police," intervening in every war of aggression on the side of the underdog. But after a while, there's really no fun in that. So I have tried to cultivate a game in which a few superpowers are at least my equals, if not my superiors. Since I'm always going to war to defend whoever's about to get wiped out, unfortunately, I'm switching sides constantly, and diplomacy is basically out of the question. (None of the AI players will even return my calls, so to speak. We've all nuked each other so many times over that we won't even speak to each other now.)
The other problem is that the AI fights total wars by default. It will never engage in a limited conflict. No, when it declares war, it won't cease until either it's beaten or it totally annihilates its enemy. It's like a Terminator. It becomes quickly apparent that the Nash Equilibrium in a game of Civ4 is one nation standing, while all others have ceased to exist. The game drives ineluctably toward this conclusion, unless the human player puts aside his own nation's interests in pursuit of global stability and game longevity. (And, ironically, being the sole force for stability renders him a political pariah among all the other nations). It's sort of like trying to play one sport, when all the other players in the game have been programmed to play another.
Sometimes I wish the AI were more sophisticated, and/or that it could be incentivized to prefer economic growth and interests over nonstop warmaking. Or that one possible victory condition in a game of Civilization would be to maximize a global human development index of some kind (i.e., "Global Victory," instead of just one nation's domination of all others by X or Y measure, or else its complete extirpation of all other peoples on the planet). I realize that's not the game that 99.99% of Civ players want to play, but it's refreshing to hear that I'm not the only one.
If you haven't read 1984, it's a startlingly bleak view of a potential future (from a historical perspective, but still applicable today, I think) particularly through technology and a loss of privacy. It's the origin of terms like "big brother" and "doublethink" -- worth a read.
One of the most interesting excerpts from this piece IMO: "I wanted to stay a democracy, but the Senate would always over-rule me when I wanted to declare war... ...Anyway, I was forced to do away with democracy roughly a thousand years ago because it was endangering my empire."
Although I don't necessarily think it will be because of war, I can see a potential future where people/persons decide democracy is a less effective system because it's holding back the decision making process -- democratic process being (more or less) committee-based decision making, which proxies votes through individuals based on what is essentially a popularity contest. That's particularly true here in Australia at the moment (amidst a minority government with a lot of political sniping on both sides and seemingly very little real progress) despite the fact that we have a comparatively strong economy, low inflation, low unemployment and generally nothing really significant (again, comparatively) to complain about.
I love Civ and still go back and play Civ II at times. I spent a lot of time with Civ III and IV as well (and a little with V), but it's nice to go back to my first experience with the genre (Civ 1 was before my time, sadly).
There is no way a game of Civ II isn't eminently beatable - militarily or otherwise.
For one thing, there's no excuse to not operate as a Fundamentalism (0 population unrest) late game. Virtually all other forms of government, particularly Democracy, are an annoying cavalcade of civil unrest late game.
I enjoyed the read. It really brought me back in time to playing this game with fervent addiction in middle school.
Or maybe it would be a depressing result if the AIs would not nuke around without a human in the mix?
This one Reddit post will probably waste thousands of what could be productive hours... I am sorely tempted to play Civ3 or GalCivII:TotA today.
I like the idea of a new subreddit spawned by this: http://www.reddit.com/r/theeternalwar. It could be interesting to see if anybody finds a way to save the world from hell.
Maybe it's time to relive good old memories of time spent with old Civ games. :)
I'm not saying there are any lessons to be had from it. It's only a game.
From the script of "Gosford Park":
"What gift do you think a good servant has that separates them from the others? It's the gift of anticipation. And I'm a good servant. I'm better than good. I'm the best. I'm the perfect servant. I know when they'll be hungry and the food is ready. I know when they'll be tired and the bed is turned down. I know it before they know it themselves."
This is a perfect example of how you can turn a negative into a positive. From my own experience, I've always noticed that if you make a mistake on a customer job but correct it really quickly, you actually form a bond with the customer that is much stronger than if you'd never made the mistake in the first place. You actually gain because of the error, as long as the customer isn't in really bad shape as a result of it. (Of course, you'd get diminishing returns if you had to correct more than a small number of mistakes in any given time period.)
The best customer service I ever got was when I was buying an engagement ring (about 12 years ago, from Blue Nile's website). It took weeks to resolve, but every step of the way the rep (Sean P -- made a big enough impression that I remember his name) communicated clearly what the problem was on their end, what solution he was looking into, and how long it would take before he had an answer for me. He was also laser-focused on making sure the solution would work for me -- right style of ring, equal or better quality to what I'd ordered, in time for me to propose.
Excellent customer service consists of being dedicated to fixing the problem, making sure that your fix will work for the customer, and being so responsive and persistent in communication that they know those things.
I agree 100% with the first and disagree slightly with the second. You can be prompt and efficient at resolving your customers' problems and providing them with a quality gateway to the organization, but you can't do this if your support staff is inundated by the constant pressure of answering phones. Things will come to a head where hearing the phone ring becomes a genuine fear for support reps, and CSRs start avoiding calls just to catch up.
This comes from experience, having worked for an established software company that sold a great product but had very unrealistic and unsustainable philosophies about support, not to mention a severely undersized team (four support reps and about 600 clients across four time zones, plus one in the South Pacific).
Don't take this the wrong way, I am not saying you should be shirking your customers, or trying to find ways to build barriers to accessing that first line of defense. However I am saying you also shouldn't just assume that because you have support personnel, any opportunity for self-help and self-education should be on the back-burner. What I mean is, if your organization already has tools to help customers find the answer they need, that should be on the forefront, in the customer's face and easily accessible.
Then, and only then, if your learning resources have failed, are too vague, or just don't answer the question in a way the customer can digest, door number two opens up and it's time to contact the organization. And from there, I'm with you: be a shining beacon, be a smiling face and a welcoming gatekeeper. You can learn a lot about your customer base, as well as the quality of your documentation, by following this strategy.
If you want a qualitative and effective team, don't toss them into the middle of a category 5 hurricane, trying to answer phones and create tickets at the same time. This will erode quality AND effectiveness.
Great post, otherwise!
I regularly take calls from people asking for features that we don't offer (and probably never will). They normally start out annoyed that we don't have what they want, but after I explain why we can't offer the feature, they normally understand completely and it doesn't seem to bother them at all.
Most customers don't have much perspective about your business. They know what they want, and they're not really thinking about how it might affect the overall experience. If you just say, "no, we don't have an iPhone app" they'll think you're brushing them off. Instead say, "I totally understand why you want an iPhone app, but if we made an iPhone app we'd also have to make apps for Android and Blackberry, which would mean we couldn't spend nearly as much time focusing on making the core product better, which is why we have a mobile website that works on all platforms". The customer wasn't thinking about that when they requested/demanded the feature. By explaining your reasoning, you're telling them that you really are listening to them and considering their ideas, but there are good reasons why you can't give them exactly what they want.
And the other type of people--those who want the cheapest--are generally not worth having as customers.
All the search data is used to create article stubs and reports that help us create better articles and documentation.
(this is totally unsolicited - I just genuinely like the startup and Emil has been awesome to us)
An often quoted statistic in marketing and sales is that it is 10 times more costly to acquire a new customer than to do the things necessary to retain an existing one.
Yes yes, I know about burning cash (err, growing) as fast as possible to lock down winner-take-all markets, but travel booking doesn't seem like one of those. There doesn't seem to be any switching cost other than typing a different url in the browser. I don't see any network effects. Am I missing something?
This is a business with revenue streams and cash flow. They've been around for two years. They've already raised more than $5mm. When I hear "we raised another venture round" my gut reaction is concern, not congratulations.
As far as the product goes: the UX is nice. But every time I tried Hipmunk, the prices were higher than what I found elsewhere, so I stopped using it.
I can't believe how much the site & company has grown since - it's a testament to the fabulous product and team. Onward & upward!
I also love what Priceline offers: their price negotiator!!! It always hits well below what the airlines currently offer. Any plans to add this to Hipmunk in the future?
In my case I search using Kayak, but never really book the ticket from Kayak (don't follow their link to the airline). I go directly to the airline site and book it there. Many people I know do the same. I despise using Orbitz/Priceline other than just a cursory glance to see if there are cheaper tickets. I know that sometimes an itinerary involves multiple airlines and that is when there might be an advantage booking through, say Orbitz, but even in those cases I just book separate tickets.
Why is this free search/booking industry still unsaturated and is there really a demand for such capital infusion?
Update: My question is specific to flight search, because I think hotel search can be profitable model as sometimes these sites provide better rates than the hotel's site.
update: would be nice to know when you guys closed this...someone says it was a while ago?
But I think the real story here is that Steam just launched a movie on their platform...
Interviewer (paraphrased): What if you don't finish your game? Phil Fish: I will kill myself. That's my incentive to finish it.
The movie goes through the process of starting something from scratch, with its (high) ups and (extremely low) downs, and most people on HN will likely relate in one form or another.
I'm sure it's on other platforms as well.
I'm from Europe and the price on Steam is 9.99€. I never really understood why they don't lower the price when it's in euros.
Loved it. Highly recommended.
For example, I had a battery become unusable because Linux often failed to sleep when the lid was closed, because some dialog box was blocking. It would run in the bag with no ventilation, and I wouldn't realize it until the battery had drained, it had failed to shut down until the hardware fail-safes took over, and my backpack was too hot to hold. This happened a few times, and after the last time the battery wouldn't charge anymore. After getting a new battery, I became very conscientious about making sure it was actually asleep before I put it in the bag.
I have had this happen in Windows before as well, in one case it would wake up if I forgot to turn off my Bluetooth mouse when I put it away. Since it was already closed, there was no trigger to go back to sleep so it would run itself dead in the bag and eventually the plastic near a hot component melted. Turns out there is an option in the Windows device manager to tell it not to wake on Bluetooth that prevents this.
However, a defect in the factory-installed operating system that causes failure is something you have to warranty; a defect in a user-installed operating system is not. That said, I have no idea how they could trace the problem to the operating system, or how they would ever know that Linux was installed. Any good Linux user would wipe the hard disk before returning a computer to the manufacturer for repair :)
I'm a big fan of Newegg and hope they continue to bring competition to Amazon. I'm hoping this is just a small oversight and that it will be corrected shortly.
One was DoA and I returned it with an RMA - they sent it straight back to me because I had returned it in the wrong box. The serial # on the machine didn't match the barcode on the box!
Do they seriously expect you to keep the individual box for every unit? Or is it just a scam to stop you from ever returning anything?
Anyway - haven't bought anything from them since.
This is precisely why you should always pay with a credit card online.
I wonder if there is some shared panel manufacturer who has been dropping the ball lately?
Note: when returning a laptop, desktop, smartphone or anything else, I always tend to be anal-retentive and put everything back in place, including the software and OS, so there's no arguing over whether the problem is software or hardware (obviously I only send back hardware-defective devices).
Anyway, the good news is that once I got in touch with someone higher up the chain they were like, "That never should have happened" and worked to make it right. It just took a ton of kicking and screaming to get there, which really wasn't worth it on my part, but in the end, they did the right thing.
One thing I'm seeing more and more is that companies are holding back their lower-tier support employees from actually being helpful. For example, have you ever had one of those "Instant Online Chat Support" things help you out? No, because they are always so unempowered it's not even funny.
Dell Ideastorm Multiboot Linux: http://www.ideastorm.com/idea2ReadIdea?id=0877000000006ixAAA...
Dell Linux & Windows on all Laptops/Desktops: http://www.ideastorm.com/idea2ReadIdea?Id=087700000008iglAAA...
Dell Sputnik Ubuntu Laptop: http://www.ideastorm.com/Idea2SessionIdea?v=1339521444920...
I think all companies have a life cycle:
1. New company, so the customer is treated well.
2. Company grows and becomes successful.
3. Company needs to show ever-increasing profits, starts taking shortcuts to save money, and starts to ignore what made them great in the first place.
"The following conditions are not acceptable for return, and will result in the merchandise being returned to you: Any desktop PC, notebook or tablet PC that has been opened"
> "Agent.BTZ did something like this already in 2008. Flame is lame."
Flame's approach is different and more impressive. Agent.BTZ copied itself and used an easy-to-discover autorun.inf file in the root directory of attached disks or network shares. Flame exports its database by encrypting it and then writing it to the USB disk as a file called '.' (just a period, meaning 'current directory').
When you run a directory listing you can't see it. You can't open it. The Windows API doesn't allow you to create a file with that name; Flame accomplishes this by opening the disk as a raw device and writing directly to the FAT partition. Impressive, right?
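You can see the filename restriction from ordinary file APIs. A quick illustration in Python (POSIX refuses '.' as a file name just as the Windows API does, since it already names the current directory):

```python
# Attempt to create a file literally named '.' through the normal API.
try:
    open(".", "w")
    created = True
except OSError:  # IsADirectoryError on POSIX, PermissionError on Windows
    created = False

print(created)  # False: only raw writes to the filesystem can plant it
```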
While many of these individual features are not impressive on their own, the sum of the parts, combined with the collision attack on the certificate signature, is very impressive.
As for the main point of Mikko's post, I have never understood why so many folks in the netsec industry are arrogantly pessimistic about the innovation of others. I found Flame jaw-droppingly amazing.
Nobody knew about it for years, yet it was derided when discovered and documented.
It doesn't really matter whether the nation state in question is Iran or the United States. Do not pick fights with people who can respond to a hacking incident by writing a check for $5 million to a defense contractor and consider that low-intensity conflict resolution. It will not end well.
So what cryptanalytical capabilities do they have which are considered too sensitive to expose via malware?
I had the same reaction; then I thought they did this on purpose, to downplay how impressive Flame really is. I imagine the people writing these blogs are actually thinking "Holy S%$&!" behind closed doors or within other security circles.
Unfortunately, the release of Django 1.4/django-registration 0.8 (https://bitbucket.org/ubernostrum/django-registration/src/27...) complicates matters a bit, and I'm torn between figuring out a way to keep my TFA or just roll it back altogether and implement d-r, and see if it has support for TFA.
If you're using Django 1.3 with something other than the default password hashing, you should check out django-TSA.
That's basically how the auth works with my online bank. I get a small calculator-sized device that reads my debit card. I have to enter the card PIN, a challenge code from the online transaction, and the amount, which then gives me a code to authorise the online transaction.
(The downside is that the devices are all identical - so anybody with one + a cloned card + my stolen login info can auth transactions - hey ho...)
NIST recommends PBKDF2.
In short, it appears that advances in hardware have made it possible to compute bcrypt hashes efficiently.
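For what it's worth, the PBKDF2 that NIST recommends is available directly in Python's standard library. A minimal sketch (the iteration count here is illustrative; tune it so one derivation takes tens of milliseconds on your own hardware):

```python
import hashlib
import os

password = b"correct horse battery staple"
salt = os.urandom(16)  # fresh random salt per password

# PBKDF2-HMAC-SHA256; the higher the iteration count, the more
# expensive each guess becomes for an attacker.
derived = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)
print(len(derived), derived.hex())  # 32-byte key; store salt + count + hash
```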
Two-factor authentication using phone numbers is a huge privacy breach, especially when you're dealing with websites that have no business knowing your phone number.
And rolling-code tokens aren't feasible for anything except some really high-security applications. Even there, I doubt they are really much more secure than a USB stick with your passphrase-protected private key. Sure, you can't copy the token, but that doesn't just add to security, it detracts from usability.
At some point, though, shouldn't phone companies notice and beef up their end a little bit? Maybe we need another large phone hacking scandal to really lock down answering machine security. http://en.wikipedia.org/wiki/News_International_phone_hackin...
The only way to do this properly is one-time-password 2-factor like Google's Authenticator app.
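Google's Authenticator app implements TOTP (RFC 6238): an HMAC-SHA1 over a 30-second time counter, dynamically truncated to six digits. A minimal sketch, verified against the RFC's test vector:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, step=30, digits=6):
    """RFC 6238 TOTP: HMAC-SHA1 over the number of `step`-second
    intervals since the Unix epoch, truncated to a short code."""
    if for_time is None:
        for_time = time.time()
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(for_time) // step)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation offset
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII secret "12345678901234567890" at T=59s
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, for_time=59))  # -> 287082
```

No phone number is involved: the server and the app share only the secret, exchanged once at enrollment.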
I hope some kid installs this on their parents' phone.
Edit: This just proves why parents should know what's going on: http://www.channel4.com/news/should-you-let-your-child-play-...
Does anyone know what the future business model is? Pro version? OEM pre-load?
Last time I checked, 3-year-olds didn't have any problem playing games on anything they could grasp.
Yet someone with cataracts can't even answer a call on a modern phone.
Take a cross section of any society and you're going to get people of all kinds, including people like Sonja.
But I can see how an American/international audience wouldn't get the joke.
Btw, her profile on the project's page is upfront about her agenda (and her character, as well): "I'm gonna tweet about my thoughts and being me, about having children and living my life and what not."
There may be a day when we become so politically correct about everything that we can't speak our minds anymore.
Either way, anything like this was a time bomb waiting to go off if there wasn't any vetting process at all.
Pity the kids. What a mom.
They have 3,000 employees, even though their business model revolves largely around cloning[a] games that are simple enough for very small teams to create. They're losing money; their EPS is -1.30. Just because they have cash from investors doesn't mean that the business model will make significant money in the long run, and it's made more complicated by the fact that they have a very heavy dependence on Facebook.
They're also dependent on casual gamers (who don't have much loyalty or will to pay) and current trends.[b] Apparently, only 2% of their customers pay for their games. Their stock market valuation seems largely mapped to their active user count (and also Facebook's share prices) rather than their financials.[c] In fact, I can't even say that they're overvalued because that involves looking at the P/E and with a negative EPS, I can't really do an apples-to-apples P/E comparison of Zynga against companies that are actually listed as profitable. Is a -3.8 P/E overvalued? It certainly is risky!
[a] They also buy some companies behind popular games too, like OMGPOP. This isn't necessarily a good idea.
[b] The use of the term "game craze" in the title of the parent article implies that part of the reason ZNGA has a valuation in the billions is because of the current trendiness of Facebook games. The problem with relying on trendiness for investments is that when there's something even trendier (i.e. mobile apps), all the investor money that chases trendy stuff could simply go there instead.
[c] It was a popular dot-com bubble plan to focus on market share with a free product at a sustained financial loss. Of course, it's too early to tell if ZNGA will sustain its losses in the long run because its stock is too young. Still, it's very risky.
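The -3.8 figure above follows directly from the definition of P/E; with EPS of -1.30 it back-implies a share price of about $4.94 (a derived number for illustration, not an official quote):

```python
eps = -1.30    # Zynga's reported earnings per share, as quoted above
pe_ratio = -3.8  # the trailing "P/E" in question

# P/E = price / EPS, so the implied price is P/E * EPS.
implied_price = pe_ratio * eps
print(f"implied share price: ${implied_price:.2f}")  # -> $4.94
```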
ZNGA's current market cap is $3.5B. But it has about $1.2B in cash, and little debt. Regardless of how you feel about the games, they do have a real business, so if Instagram is worth $1B to Facebook, ZNGA must be worth somewhere north of $2.5B, its current enterprise value.
I think that the way Zynga acquired its customers/users is exacerbating the situation. If your friends are all there you feel compelled to join too, but if some leave it's socially acceptable to follow them elsewhere.
Since we're talking about "social" gaming, I also have to wonder whether part of the drop isn't seasonal ... at least where I live, it's now nice enough to spend a lot of time outside socializing (picnics, frisbee, t-ball, etc) so perhaps there's just not the boredom to drive as much traffic?
Some nice charts...
Check out http://emberjs.com/ or http://emberjs.tumblr.com/ for more info.
And if you look at the charts, that seems to be exactly what happens. At low-but-nonzero packet loss rates, the maximum delay is never more than one or two RTTs. With high packet loss rates, you start to see a long tail of longer times due to double-loss events, I suspect.
"By default" because it can be turned off. SACK can be used as a DoS vector: by pretending it "lost" a packet, an attacker can force the sender of a large transmission to buffer and reprocess essentially all of it repeatedly.
Apropos of Apple's shift to OpenStreetMap, I feel both excited and apprehensive: for instance, someone pointed out to me that the new maps won't display transit information (or don't currently; we'll have to see what happens after iOS 6 is launched), but at the same time I'm happy to see OSM gaining traction.
Edit, here's what dig tells us:
    ; <<>> DiG 9.7.3-P3 <<>> maps.apple.com
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 742
    ;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 0

    ;; QUESTION SECTION:
    ;maps.apple.com.                 IN      A

    ;; ANSWER SECTION:
    maps.apple.com.          2918    IN      CNAME   gsps28.ls.apple.com.
    gsps28.ls.apple.com.     219     IN      CNAME   gsps28.isg-apple.com.akadns.net.
    gsps28.isg-apple.com.akadns.net. 272 IN  A       188.8.131.52
Also, the HTC One S has been available on T-Mo for a few days now; it's running ICS and should thus support IPv6. Wouldn't that make it the first branded IPv6-capable phone?
Edit: As noted below, the software on the One S indeed does not support IPv6 as of now.
Many of my colleagues agree. A slower CPU with 512GB at ~$2500 would be just about perfect.
While the upgrade to Mountain Lion will be free, I have a funny feeling that Adobe will charge you (or at least will try to) some extra money for an updated version of Photoshop for the Retina Display MacBook Pro :).
Any self respecting company or individual developer will probably provide a free upgrade for the UI of their applications soon in order to support the new resolution.
Reading through the article, it seems that they've either gone and broken the word 'resolution', or Anand is very confused.
At 1440 x 900 you don't get any increase in desktop resolution compared to a standard 15-inch MacBook Pro, but everything is ridiculously crisp.
I have read this sentence three times and it still makes no sense unless the word 'resolution' has been completely mauled by marketing idiots.
I'm curious whether Apple and Adobe have worked together to allow image documents to display at true resolution within a scaled output.
Does that make sense? I suppose what I'm saying is: working on a canvas at 100% zoom (instead of, say, a 50% scale) while all system elements are scaled.
EDIT: This might not even matter, from a practical perspective.
Not sure I see the applications of this, but more high-level error messaging is generally a good thing, so I guess that should hold for the web too.
FYI, 755 AUC is 2 AD, so the example is referring to the time of Jesus.
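The conversion is a fixed offset; a quick sketch to check it (assuming the conventional epoch of 753 BC for Rome's founding):

```python
def auc_to_ad(auc):
    # Rome's traditional founding year is 753 BC, so for years after
    # that: AD year = AUC year - 753 (AUC 754 == AD 1; there is no year 0).
    return auc - 753

print(auc_to_ad(755))  # -> 2, i.e. AD 2
```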
If something didn't exist at all, why would I send a 451?
> The 451 status code is optional; clients cannot rely upon its use
So... everybody can ignore 451?
This better fits into the 300 series as a permanent addition.
The solution is simple, copy a tarball of the source to CD and throw it inside the casing of the missile. Add rocket fuel to compensate for the added weight.
That was an intentionally stupid answer.
Assuming that they are not selling them to someone rather than firing them at someone, of course.
Whether this is the case or not depends on whether or not the Navy is making copies, or just passing along copies they received from a contractor.
I wrote my first lock-free code in 2004 based on reading some papers by Maged Michael from IBM. I wrote a lock-free FIFO in PowerPC assembly, and was convinced it was safe and robust. When I emailed Maged about it, he pointed out that if a thread was suspended on one specific instruction and some specific memory was unmapped before it could run again, the program could crash. I was amazed; I had thought hard about this algorithm, but had completely missed that possibility.
Some other specific notes about the article:
> Basically, if some part of your program satisfies the following conditions, then that part can rightfully be considered lock-free.
There are actually several levels of lock-freedom defined in the literature: lock-freedom, wait-freedom, and obstruction-freedom. For more info see: http://en.wikipedia.org/wiki/Non-blocking_algorithm
> Processors such as PowerPC and ARM expose load-link/store-conditional instructions, which effectively allow you to implement your own RMW primitive at a low level, though this is not often done.
One benefit of load-linked/store-conditional (often abbreviated LL/SC) is that it avoids the ABA problem (http://en.wikipedia.org/wiki/ABA_problem). In practice this doesn't matter that much since x86 doesn't support LL/SC, but I just think it's an interesting factoid to know.
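The ABA problem is easy to show in miniature. Python has no hardware CAS, so the sketch below simulates one with a lock purely to illustrate the semantics; the point is that a CAS compares values, not histories:

```python
import threading

class SimulatedCAS:
    """Compare-and-swap simulated with a lock, only to demonstrate
    the semantics; real lock-free code uses a hardware CAS instruction."""
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()

    def load(self):
        return self._value

    def compare_and_swap(self, expected, new):
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

# Thread 1 reads the head of a hypothetical lock-free stack...
head = SimulatedCAS("A")
snapshot = head.load()

# ...then thread 2 pops "A", pushes "B", and pushes a new node that
# happens to compare equal to "A" again:
head.compare_and_swap("A", "B")
head.compare_and_swap("B", "A")

# Thread 1's CAS still succeeds even though the structure changed
# underneath it -- that silent success is the ABA problem.
print(head.compare_and_swap(snapshot, "C"))  # -> True
```

LL/SC avoids this because the store-conditional fails after *any* intervening write to the location, regardless of value.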
> For instance, PowerPC and ARM processors can change the order of memory stores relative to the instructions themselves, but the x86/64 family of processors from Intel and AMD cannot.
(I've edited my reply here since my original assertion was incorrect). It's true that x86/64 won't reorder stores (see http://en.wikipedia.org/wiki/Memory_ordering for details) but it will reorder loads, so memory barriers are still required in some situations. However I believe that the atomic instructions ("lock cmpxchg", and "lock xadd") imply full barriers on x86.
On a more practical side, check out http://www.liblfds.org/ -- it is a library of lock free data structures in C. (I am not the author). I have successfully used this library in some realtime projects.
Please make sure to navigate all the way down before navigating to the right. Press the space bar to see the full layout of the presentation.