hacker news with inline top comments    13 Aug 2012
1
Robotic Airplane Flies In Tight Spaces mashable.com
57 points by gagan2020  3 hours ago   19 comments top 11
1
makmanalp 2 hours ago 3 replies      
What's cool here other than the fact that it seems to react really quickly is that it has no universal conception of location, like GPS.

My limited understanding of the topic is thus:

For localizing, they use a particle filter, which is basically a method that helps you figure out latent variables (in this case location) based on multiple observations.

Using the data, it creates a model of how the aircraft moves. In each "tick", you make a prediction about where the aircraft will be in the future (say, based on how fast you know the motor goes and which way the rudders are tilted, etc.). Then you compare that prediction to the data you got from your sensors (in this case a laser rangefinder) and update your estimate. Thus, your estimate of where the aircraft is gets better.

The more traditional formulation of this is the Kalman Filter, which is everywhere in classical controls systems. I think the particle filter is just simpler for large numbers of variables whereas for Kalman filters, complexity increases exponentially.
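
To make the predict/compare/update loop above concrete, here is a minimal sketch of one "tick" of a 1-D particle filter. Everything in it is a made-up placeholder for illustration (the motion model, the noise levels, the expected_range() map lookup), not anything from the actual MIT system:

    // Toy particle filter tick: predict with a noisy motion model, then
    // weight and resample particles against a (fake) rangefinder reading.
    #include <cmath>
    #include <cstddef>
    #include <random>
    #include <vector>

    struct Particle { double x; double weight; };

    // Placeholder "map": what the rangefinder would read at position x.
    double expected_range(double x) { return 10.0 - x; }

    void tick(std::vector<Particle>& ps, double speed, double dt,
              double measured_range, std::mt19937& rng) {
        std::normal_distribution<double> motion_noise(0.0, 0.1);
        const double sensor_sigma = 0.3;

        // 1. Predict: move every particle by the motion model plus noise.
        for (auto& p : ps)
            p.x += speed * dt + motion_noise(rng);

        // 2. Update: weight each particle by how well it explains the reading.
        double total = 0.0;
        for (auto& p : ps) {
            double err = measured_range - expected_range(p.x);
            p.weight = std::exp(-err * err / (2 * sensor_sigma * sensor_sigma));
            total += p.weight;
        }

        // 3. Resample: keep particles in proportion to their weights.
        std::vector<Particle> next;
        std::uniform_real_distribution<double> pick(0.0, total);
        for (std::size_t i = 0; i < ps.size(); ++i) {
            double r = pick(rng), acc = 0.0;
            for (auto& p : ps) {
                acc += p.weight;
                if (acc >= r) { next.push_back({p.x, 1.0}); break; }
            }
        }
        ps = next;
    }

The particles that survive resampling cluster around positions that are consistent with the sensor readings, which is the "figure out the latent variable" part.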

edit:

Another way to look at it is that this is how robots deal with the "real world" where sensors are noisy and slightly off, actuators are unreliable and can't produce smooth and constant output etc. Instead of trying to guess all these factors, it automagically accounts for these on the fly by looking at how the robot behaves and how you expected it to behave.

edit2:

Corrections abound! Read replies below!

2
Matt_Cutts 2 hours ago 1 reply      
3
rogerbinns 4 minutes ago 0 replies      
What happens when it hits a dead end? At least a helicopter can stop and return on exactly the track it came in on.

Presumably one of the reasons for the prebuilt map is so that it doesn't enter a dead end, which some parking garages have.

4
confluence 2 hours ago 0 replies      
Some relevant videos on autonomous catastrophic flight recovery by Rockwell Collins:

Continuous AI flight after blowing off one wing:

http://www.youtube.com/watch?v=xN9f9ycWkOY

Continuous AI flight after catastrophic wing loss (showing manual/AI difference):

http://www.youtube.com/watch?v=dGiPNV1TR5k

5
chaz 2 hours ago 1 reply      
Original story from MIT News Office w/ video: http://web.mit.edu/newsoffice/2012/autonomous-robotic-plane-...
6
Schwolop 1 hour ago 1 reply      
The big "what stopping this from being mainstream" part of this is the need for a prior map. It's absolutely analogous to Google's autonomous vehicles needing to be manually driven through the area in which they're planning to later operate autonomously.

This is great work, but it's not onboard SLAM, only onboard localisation. All up, it's great to see more of the autonomous ground vehicle work becoming small and lightweight enough to go on aerial vehicles. Traditionally the low payload capacity has been a showstopper for UAVs, and laser range finders are often heavy, hence why so many UAVs have used vision-only localisation techniques.

7
sliverstorm 58 minutes ago 0 replies      
I am always so happy to see a navigational demonstration that doesn't rely on external positioning, e.g. GPS. Having tried both sides (dead reckoning/sensing and external positioning), I have come to feel like so many GPS-based projects are essentially trivial GPS demos.
9
stcredzero 2 hours ago 0 replies      
That plane thinks it's Han Solo! (Never tell me the odds!)
10
eps 2 hours ago 0 replies      
Cool, but I'd trim "genius" from the title.
11
fluorescentLAMP 1 hour ago 0 replies      
If we can put a rover the size of a VW beetle on Mars, we can definitely put a couple of these on it too.
3
What is life like for PhDs in computer science who go into industry? vivekhaldar.com
51 points by gandalfgeek  4 hours ago   24 comments top 10
1
duked 1 minute ago 0 replies      
I have a PhD and I have worked for several industry research labs. So when the OP mentions "he went 'oh well, if you must make me code…' That pretty much made me go 'no hire'", he is clearly either being dishonest or taking shortcuts. PhDs want a job that is interesting; if it were just to piss code all day, I would have stopped at my Master's degree. Pissing code, as much as many would like to think, is not exactly challenging if you have a PhD (I'm not saying that writing highly optimized, readable code isn't, but getting something that works is not that hard), so if the whole interview process is centered around how well you can write something in language A/B/C/D, then yes, I understand the interviewee.

One funny thing: "Most places will actually pair you up with a mentor who is not your boss" - this has to be a joke. I worked for HP-Labs and other big labs and never ever had a mentor. Usually, if you have an issue, you are pretty much welcome to discuss it with your colleagues and have them review your proposed solution, but boy, you are on your own when it comes to actually implementing it.

Again, the OP has no clue what he is talking about, and it's sad because his research interests are security-related:
"You are much more likely to land an interesting job at the coasts. West more than East."
I can easily prove that if you have a PhD in security or something closely related (formal methods), you have a better chance of landing a job on the East Coast: DARPA, IARPA, MITRE, DoD - these agencies fund 80% of the security research in the country! Easier if you are a citizen, indeed, but I'm a foreigner on an H1B and I still work on a DARPA-funded project ...
One more joke:
"Most industrial positions will not care about your publications" - seriously, this guy is either ridiculous or just unable to think outside his Google circle. Most labs are indeed more than interested in your publications, because for them publications mean potential patent applications, more contacts at conferences and, in the end, a better reputation and an easier time building a consortium to respond to calls for proposals or BAAs (http://www.arl.army.mil/www/default.cfm?page=8). But coming from someone who does not have many publications, I understand; for reference, here is his publication list: http://www.informatik.uni-trier.de/~ley/db/indices/a-tree/h/...

Anyway, I'll stop commenting on the rest of his post. It's not all bad, but it's based on his personal experience, and he generalizes from that limited experience.

2
tensor 2 hours ago 7 replies      
The only exception I take with this is the part about coding in an interview. My complaints there are not specific to having a PhD though.

I understand the need for practical tests like these, but whiteboard coding is not a good way to do it. People don't typically program on whiteboards; they do it alone with a compiler, a reference manual, and the ability to test. Put them in a room with a terminal and ask them to write a straightforward program in 30 minutes. If you have to practice for the interview, what exactly are you determining in the interview? Their ability to program? Or their ability to practice interview tricks?

Immediately discounting a candidate because they are annoyed at your question to whiteboard code also seems rash. Asking someone experienced to write the "print a string in reverse" algorithm will naturally insult them. Especially so if the candidate has a CS PhD and spent time grading students on more complicated programs, or especially if their CV includes links to public repositories containing thousands of lines of code they wrote. On the other hand, asking a candidate to sketch a genuinely complicated algorithm on the board is a reasonable question.

3
flatline 2 hours ago 1 reply      
> there is no job in the modern tech industry that involves working alone.

Positions with aspects of management or sales typically involve actually working with people, but this can be hard to achieve for many others. The high-tech field can be great for people who want to work in isolation, and fairly lonely for people who do not. I have never worked in a place that effectively implemented things like mentoring programs, for PhDs or anyone else. I am on the East Coast; I suspect that this trend is less prevalent out West. You may form bonds with people through a work environment, but the majority of your daily (and weekly, and monthly) effort is a solitary endeavor. I have heard this complaint from people working on everything from PhD-level industrial research, to coding, to website design. Getting in with a company that recognizes and values real teamwork, assuming you are able to communicate effectively with others, will make nearly any job a much more pleasant experience.

4
mattdeboard 3 hours ago 0 replies      
At my previous employer, we hired on a guy who was very close to earning his PhD in CS from a midwestern university known for a prestigious engineering program. I was excited because I thought that at the very least he would be able to learn quickly and start tackling problems relatively quickly.

I was not disappointed. He was very assertive about not having his hand held wrt figuring out problems, and was proactive in learning new things. When I left to take another position, I felt comfortable that I was leaving the codebase in good hands.

5
emmapersky 3 hours ago 0 replies      
I have a Bachelor's in Engineering (BEng) in Computing from Imperial College, not a PhD, but I work in a group at Google where a significant percentage of my peers have one. I only know this because from time to time people mention the kind of research they did, either because it is relevant to what we are doing or because it comes up naturally in discussion.

I don't see any difference between what I do and what others in my group do; we are all in it together, but took different paths to get here. Our group is more research-focused (though we are Software Engineers) than many at Google, and this has given me a great opportunity to learn from those with an (even) more academic background.

TL;DR - I don't see any differences between PhDs and otherwise at Google, even in 'researchy' teams.

6
azakai 2 hours ago 0 replies      
Good points. As a PhD in industry, this all aligns pretty well with my own experience.

The only comment I have is regarding

>> Do people usually work alone, or with others?

> Again, the answer to this question does not depend on what degree you have. The simple truth is there is no job in the modern tech industry that involves working alone.

Agreed that this probably does not depend on what degree you have. But there is a lot of variability in terms of how many people you work with. I worked almost alone for a while, and in large teams at other times.

I suppose there is no job where you work 100% alone, with no contact at all with anyone, ever, but I doubt that was the original question.

7
bearmf 1 hour ago 0 replies      
> Do you work 9 to 5, or do you take your work home with you?
>
> This, again, is completely independent of what degree you have. How you structure your priorities and your work to get it done within sane hours, and while maintaining some sort of “work/life balance” (I hate that term, but that is a whole other story) is entirely up to you.

This is indeed independent of degree, but highly dependent on the company you are working for. There are developers working 10-12 hours a day who cannot structure their work, because they are simply expected to stay in the office for that long.

8
malloc47 3 hours ago 1 reply      
One very good point this article brings to light is the emphasis academia places on the tenure-track job as the end-all be-all of getting a Ph.D. I can't even begin to count how many times I've heard how getting a professorship is the only worthy job, and how there's no point to a Ph.D. if you don't get such a job.

It's nearly to the point of brainwashing (a sweeping generalization, of course, which likely varies greatly from department to department). As a soon-to-be Ph.D. graduate, it's refreshing to know that industry is not only a viable option, but doesn't have the same [superhuman][1] expectations.

[1]: http://www4.ncsu.edu/unity/lockers/users/f/felder/public/Pap...

9
ltcoleman 1 hour ago 0 replies      
In my experience, the degree has not been a factor at all when it comes to value in industry. I have seen people who didn't have a degree produce more maintainable code in a more productive manner. I have also seen coworkers with bachelor's degrees outshine others with master's. I have climbed the corporate ladder at my company, and now, with the power to influence hires, I rarely take the level of degree into consideration.

Strangely, I have seen the attitude problem with new grads who possess a master's or doctorate. I even had one tell my manager that he had a master's, that this meant he shouldn't ever have to do support work, and that he was smarter than my manager. He was a fresh 23-year-old grad and eventually left for a larger company because he believed they would appreciate his master's. To add salt to the wound, we found that almost every bit of his code either broke our current projects or just flat out wasn't done correctly, because he knew better than everybody, so there was no reason for him to ask a senior dev or even a BA any questions about what he was doing.

All that to say: no matter the shop, no matter the degree, business is about money, and we, as coders, produce value, which is why our salaries are high. A coder who is seen as valuable will always do better in his career, and most of the time his coworkers will see his value and respect him for that.

10
mathattack 2 hours ago 3 replies      
The incentive system in academia is set up to produce people who cite their mentors in formal research. That's it. Because universities are non-profits, they go for indirect incentives. This trumps any altruistic ideals. In their defense, most PhDs in computer science are fully funded, so it's not transactional like an undergrad degree.

I've worked with several PhDs with backgrounds in Stats and Computer Science. I found all of them to have a "Good Background" - they knew a lot about their fields. Some were great at working with others, but some not so much. It wasn't a magical degree. Many regretted not getting the 5 years of work experience instead. Relative to the All But Dissertation crowd, they were a little better at Getting Things Done, versus just talking about things. (Very small sample size, so don't read too much into it.)

4
Microsoft's Build 2012 developer conference sells out in an hour zdnet.com
13 points by marcieoum  1 hour ago   6 comments top 2
1
hdivider 25 minutes ago 0 replies      
I bet a lot of people just wanted to come along in case they get a free Surface or Windows Phone 8 at the end. ;)

As for the success of Windows 8, I think it's simply not possible to estimate how well it'll be received by the average Win7 user. Microsoft is releasing so much new stuff this year - most or all of it designed to work together - that I don't think it's valid to look at just one thing (eg the 'schizophrenic'-style UI of Windows 8/RT) and claim that because it's not awesome, the whole new direction that Microsoft is taking will invariably set them on a downhill path.

The data simply aren't in yet. It'll be an interesting 6-ish months before we'll see how this multi-billion dollar experiment will work out.

2
DigitalSea 1 hour ago 4 replies      
So much for lack of interest in Windows 8. I'm starting to think Windows 8 is going to be a successful operating system once people get used to the Metro interface.
5
Dalton Caldwell: We Did It daltoncaldwell.com
137 points by J-H  9 hours ago   42 comments top 12
1
therealarmen 9 hours ago 0 replies      
Congratulations to Dalton and the entire team at App.net! They deserve every penny.

Dalton's steely resolve through this entire process has been an inspiration to me; it takes a lot of guts to go out on a limb and ignore all the haters. Even if App.net as a platform doesn't take off I still consider this project a success.

2
jenius 3 hours ago 1 reply      
I would like to publicly admit my total wrongness and congratulate Dalton on his success with this project. You have won my backing and I'll be there 'apping' with the rest of you guys.

This was the somewhat popular but incorrect comment I made previously: http://hackerne.ws/item?id=4278378

Much love. Props.

3
julian37 7 hours ago 3 replies      
Congratulations from me, too! A very impressive result in such a short timeframe.

In danger of asking a silly question, I do wonder about this bit though:

In the very near future I will ask an impartial 3rd party take a look at our data (while preserving all privacy of our backers) and publicly verify that the join.app.net was operated in an honest manner.

I might not be seeing the forest for the trees here, but how would anybody actually go about doing that? If you don't release identifying information (which I assume would include names, credit card numbers, and so on) how would anybody be able to verify?

I'm not, in the slightest, implying there was any wrongdoing, I have no reason to believe that App.net is inflating any numbers or isn't "operated in an honest manner". I'm just genuinely curious to know how it would be possible to independently vet that all transactions were legit (or whatever it is you're trying to prove).

Am I correct to assume that the best anybody could do would be to say that "the numbers look right"? Or maybe something like "the amount of money transferred via Stripe to App.net is in the right ballpark"? If people can do better, how so? Again, genuine question.

4
dangrossman 9 hours ago 3 replies      
Did Svbtle change its font recently? The text looks terrible. It's blurry, poorly aliased and some of the letters have full-out gaps in the strokes.

When viewed in Windows, of course.

5
rorrr 9 hours ago 1 reply      
I watched the video, and I still don't know what app.net is. "Social platform" is all I got, that's extremely vague.
6
ch 2 hours ago 0 replies      
Well I admit it. I drank the Kool-Aid. Partly to grab my twitter handle, not that I think it was in any danger of being scooped, and partly to see just what is happening on the inside of the walled garden.

I can't comment any further, my password I signed up with doesn't seem to work, and the password reset feature also appears broken :) I will reserve judgement for now.

7
kirillzubovsky 3 hours ago 0 replies      
Congrats. Looking forward to joining the alpha. Interestingly enough, I was skeptical about App.net before I paid the $$, but now I am all excited about it. Maybe you've got something going there :D
8
RandallBrown 8 hours ago 2 replies      
I'm glad they're going to have someone verify it. It seemed really strange to me that they rolled their own crowd funding platform for one use.

Why didn't they go with a third party like Kickstarter in the first place?

9
xwowsersx 3 hours ago 1 reply      
Just completed the sign-up, but still get "We don't recognize that username. This is probably because you haven't been invited to our alpha yet. To request an invitation, please email join@app.net." when I try logging in.
10
guscost 9 hours ago 0 replies      
Congratulations!! Incredibly excited, still waiting for that killer app.

EDIT: I purchased it weeks ago, but you get the idea...

11
briandear 7 hours ago 1 reply      
>>There has been zero manipulation of numbers, or “stuffing of the ballot box” by App.net.

Why would he even suggest that? I wouldn't have expected such behavior.

12
bslatkin 9 hours ago 0 replies      
Bets on how soon someone will imitate this approach to the platform business in other areas?
6
Why Explore Space? A 1970 Letter to a Nun in Africa. launiusr.wordpress.com
255 points by mike_esspe  13 hours ago   113 comments top 16
1
jxcole 12 hours ago  replies      
I don't know if this makes me cruel, but whenever people talk about donating money to starving children in Africa, I always imagine the following: If I were to donate some amount of money to starving children in an impoverished nation every year I could, theoretically, bring some of them out of starvation. However, these children would then grow into adults, and then these adults would have children of their own. The number of these new children would almost certainly be higher than the number I originally helped bring out of famine, so at that point there would be just as many if not more starving children than we had to begin with. So in my mind the question really goes the other way, how does donating money to buy food for starving children in Africa improve Africa's condition in the long term? What problems caused these nations to produce more children than food and what is being done to eliminate the source of these problems, rather than just the symptoms?
2
patio11 12 hours ago 2 replies      
It certainly reads better than "We need to funnel some money to the guys who build the rockets so that, if the Russians get frisky, we can credibly threaten to end the world."
3
MarkMc 9 hours ago 2 replies      
I don't buy the argument. A $100b space mission is going to have a bigger benefit to the desperately poor than a $100b medical research program? No way.

But then, why does the space program need to be defended like that? People prefer buying big TVs, big cars, big houses instead of giving the money to starving Africans. So why not view the space program as just an extension of that?

4
confluence 2 hours ago 0 replies      
A relevant quote:

> When he [Michael Faraday] demonstrated his apparatus [the dynamo] to His Majesty's Government, the prime minister, Sir Robert Peel, asked, "Of what use is it?" To which Faraday replied: "I don't know, but I'll wager that some day you'll tax it."

- Michael Faraday

Source:

http://en.wikiquote.org/wiki/A_Race_on_the_Edge_of_Time

http://en.wikipedia.org/wiki/Dynamo

We will tax space in good time my fellow skeptics.

All in good time.

5
vacri 10 hours ago 1 reply      
A more curt, but more direct response would be: As a Christian Nun, you wouldn't even be in Zambia if it weren't for explorers increasing the bounds of our knowledge. Apart from the Copts in Egypt, there's not a lot of 'native' Christianity in Africa.
6
slowpoke 10 hours ago 0 replies      
I did not know the SU actually turned off all radio transmissions and sent out ships to assist in the recovery of Apollo 13. That is an impressive display of human compassion, even amidst the Cold War (though the cynic in me assumes there were ulterior motives as well).
7
mseebach 12 hours ago 2 replies      
The argument that research for the sake of research is worthwhile is perfectly sound, but the condescending "let me explain to you how a budget works" and "it's not my decision to spend the money" parts certainly rub me the wrong way.
8
joering2 10 hours ago 6 replies      
Ok, so the story with the microscope was a good one. I was initially shortsighted. But coming back to the recent Mars mission, does anyone have any ideas, more or less accurate/detailed, of how this particular mission will or could benefit our civilization? This is a serious question.
9
tmoertel 10 hours ago 0 replies      
Does anyone know how this letter first came to be published?

Update: Google Books answered the question for me:

> Dr. Stuhlinger responded to the sister in a letter that was published by NASA/George C. Marshall Space Center in 1970 titled "Why Explore Space?"

[1] http://books.google.com/books?id=qXuLydSqzDQC&lpg=PA55&#...

10
malkia 4 hours ago 1 reply      
"You may ask now whether I personally would be in favor of such a move by our government. My answer is an emphatic yes. Indeed, I would not mind at all if my annual taxes were increased by a number of dollars for the purpose of feeding hungry children, wherever they may live."

Amen to that.

11
PakG1 11 hours ago 0 replies      
This is also a perfect parallel to CEOs who have to justify to their shareholders why they spend so much money on R&D.
12
pippy 6 hours ago 0 replies      
I'd love to compare $billions in expenditure / angry letters from conservative nuns, for both NASA and the Department of Defense.

Even better would be to add the estimated lives saved by NASA technologies and DoD bombs.

13
nathan_f77 11 hours ago 3 replies      
This letter is timeless, and provides such brilliant perspective. It's a fantastic answer to questions I've also been thinking about.

As an aside, I wonder if something so convincing could be written about military spending.

14
repoman 10 hours ago 0 replies      
Well, the US produces lots of surplus every year. We don't give it away. Instead, we burn it. Really now?
15
ninguem2 12 hours ago 4 replies      
>He was a member of the German rocket development team at Peenemünde

Just the guy to be answering ethical questions...

16
fragsworth 8 hours ago 2 replies      
This guy has incredible tact, and knows his audience well.

> Ever since this picture was first published, voices have become louder and louder warning of the grave problems that confront man in our times: pollution, hunger, poverty, urban living, food production, water control, overpopulation.

He even took care not to mention climate change, which I assume was in case the reader has a strong bias against it.

7
Terms of Service; Didn't Read tos-dr.info
68 points by ryanio  7 hours ago   10 comments top 5
1
bpierre 5 hours ago 1 reply      
Posted 5 days ago, comments: http://news.ycombinator.com/item?id=4350907
2
rradu 4 hours ago 2 replies      
All the ones I really care about--Twitter, Facebook, Amazon, Apple, Google--don't have a class yet (what does that even mean?).

Regardless, this is great for a quick summary of what you're agreeing to (or already agreed to).

3
DanBC 3 hours ago 0 replies      
I welcome this and I hope more people use it.

Unfortunately, companies pay lawyers to come up with incomprehensible AUP/TOS/etc, and those lawyers are unlikely to want to allow the company to use a "generic" document, even if it is very high quality.

There are some ridiculous terms and conditions in some documents, and I'm not sure if any of them have ever been tested in courts.

I'm keen to see how this team can overcome international differences in law to create a simple but effective ToS.

4
arkitaip 3 hours ago 0 replies      
There's a Swedish project called CommonTerms that also tackles user agreements from a usability POV
http://www.commonterms.net/
5
SudarshanP 2 hours ago 0 replies      
Do any of the ToS websites use http://hypothes.is/ for annotation?
9
Cling: Running C++ in an interpreter coldflake.com
84 points by coldgrnd  8 hours ago   29 comments top 13
1
sedachv 2 hours ago 2 replies      
To provide more precedents and a little history:

The first C "interpreters" I know of were for Lisp machines: Symbolics' C compiler (http://www.bitsavers.org/pdf/symbolics/software/genera_8/Use...) and Scott Burson's (hn user ScottBurson) ZetaC for TI Explorers/LMIs and Symbolics 3600s (now available under the public domain: http://www.bitsavers.org/bits/TI/Explorer/zeta-c/). Neither of them are interpreters, just "interactive" compilers like Lisp ones are.

I am writing a C to Common Lisp translator right now (https://github.com/vsedach/Vacietis). This is surprisingly easy because C is largely a small subset of Common Lisp. Pointers are trivial to implement with closures (Oleg explains how: http://okmij.org/ftp/Scheme/pointer-as-closure.txt but I discovered the technique independently around 2004). The only problem is how to deal with casting arrays of integers (or whatever) to arrays of bytes. But that's a problem for portable C software anyway. I think I'll also need a little source fudging magic for setjmp/longjmp. Otherwise the project is now where you can compile-file/load a C file just like you do a Lisp file by setting the readtable. There's a few things I need to finish with #includes, enums, stdlib and the variable-length struct hack, but that should be done in the next few weeks.
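
For anyone wondering what "pointers as closures" looks like in practice, here is a rough sketch of the idea, written with C++ lambdas purely for illustration (Vacietis itself is Common Lisp, and all the names here are invented): a "pointer" is just a pair of closures that read and write the pointed-to place.

    // A C-style pointer represented as a getter/setter closure pair.
    #include <functional>
    #include <iostream>

    template <typename T>
    struct ClosurePtr {
        std::function<T()>     get;  // read through the pointer:  *p
        std::function<void(T)> set;  // write through the pointer: *p = v
    };

    // "Address-of": capture the variable's storage in two closures.
    template <typename T>
    ClosurePtr<T> address_of(T& place) {
        return { [&place]      { return place; },
                 [&place](T v) { place = v; } };
    }

    int main() {
        int x = 41;
        ClosurePtr<int> p = address_of(x);  // like:  int *p = &x;
        p.set(p.get() + 1);                 // like:  *p = *p + 1;
        std::cout << x << "\n";             // prints 42
    }

The nice property is that the "pointer" no longer cares whether the place it refers to is a variable, an array slot or a struct field, since the closures hide that behind get/set.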

This should also extend to "compiling" C to other languages like JavaScript, without having to go through the whole "emulate LLVM or MIPS" garbage that other projects like that do. I think I figured out how to do gotos in JavaScript by using a trampoline with local CPS-rewriting, which is IMO the largest challenge for an interoperable C->JS translator.

As to how to do this for C++, don't ask me. According to the CERN people, CINT has "slightly less than 400,000 lines of code." (http://root.cern.ch/drupal/content/cint). What a joke.

2
joebo 5 hours ago 1 reply      
I've used libtcc from tcc to do something similar on a prototype listening on a socket to do queries over a 2 gig memory mapped file. I'd pass over the query as a string of C that would be dynamically compiled and executed by libtcc. It worked really well but ultimately didn't go anywhere other than research. Here's an example from the distribution (first google result for the file): http://www.koders.com/c/fidC76C8B834DFF05F1D0BD61220AC19E246.... TCC can be found here: http://bellard.org/tcc/
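
For anyone who wants to try the same trick, a minimal libtcc sketch looks roughly like this (error checking omitted, the query string and the run() symbol are invented for the example, and the exact tcc_relocate() call differs a little between tcc versions):

    // Compile a C string at runtime with libtcc and call into it.
    #include <libtcc.h>
    #include <cstdio>

    int main() {
        const char *query =
            "int run(const char *data) { return data[0] == 'x'; }";

        TCCState *s = tcc_new();
        tcc_set_output_type(s, TCC_OUTPUT_MEMORY);  // compile straight to memory
        tcc_compile_string(s, query);               // compile the C source string
        tcc_relocate(s, TCC_RELOCATE_AUTO);         // lay out the code so it can run

        // look up the generated function and call it
        int (*run)(const char *) =
            (int (*)(const char *))tcc_get_symbol(s, "run");
        std::printf("%d\n", run("xyz"));

        tcc_delete(s);
        return 0;
    }
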
3
codedivine 8 hours ago 1 reply      
Well, for anyone interested in Cling, check out this Google tech talk: http://www.youtube.com/watch?v=f9Xfh8pv3Fs

This is a CERN project and it uses Clang from the LLVM project. The idea is simple: use Clang to generate LLVM IR and then use the LLVM just-in-time compiler.

4
tree_of_item 1 hour ago 0 replies      
The LLVM infrastructure gets more amazing every day; Emscripten and Cling are very exciting projects. C++ gets a lot of flak but it's still got a huge amount of life in it.
5
mgurlitz 2 hours ago 0 replies      
To be clear, those empty #include's are typos, not Cling inferring desired header files. Both are:

    #include <iostream>

6
jlarocco 5 hours ago 1 reply      
It's a cool idea, but his reason for creating it is kinda dumb.

Creating an entire "project" just to check a code snippet is just silly.

Just create a "testing" directory and throw your one off test files into it and compile/run them there.

I start mine with a comment explaining what I'm testing, why I'm testing it, and what special compilation flags are required, if any. I even have an Emacs macro that fills in the boilerplate includes and main function. The overhead involved is probably less than 15 seconds.

It has the advantages that I can test multiple compilers and I keep a history of the things I've tried.

7
fferen 5 hours ago 2 replies      
I just keep a fixed .cpp file with a bunch of common includes and directives, and alias the g++ command to link to common libraries, so my workflow goes:

> vim temp.cpp

> g++ temp.cpp

> ./a.out

Very similar to my process with Python, actually. Every time I use the REPL for some experimenting, the code ends up outgrowing it and I have to stuff it in a file anyway, so I may as well cut out the middleman to begin with. YMMV.

8
clobber 1 hour ago 0 replies      
For small programs, something like CodeRunner is nice too: http://krillapps.com/coderunner/
9
Mon_Ouie 7 hours ago 0 replies      
Although getting an interactive environment definitely is a nice thing, I'd argue you don't have to create a whole project directory, etc. to play with an idea; I usually write it in a single file.
10
freepipi 4 hours ago 0 replies      
I think it is very useful if you want to know more about C++, especially some feature you're not very sure about.
Interactive interpretation makes it very intuitive, and it will save you time because you don't need to compile the code.
One limitation is that it doesn't currently support templates, so you still have to write a source file if you want to use them.
11
deckiedan 7 hours ago 0 replies      
Could you not just use a project directory template?
All you need really is a .c(xx|pp|whatever) file, a Makefile, and then the workflow for a new idea is:

cd ~/src;

cp -R c-idea-template foobar;

cd foobar;

$EDITOR test.c*

and in your editor (say vim) just run :make

or write a .sh file with all of the above in it so it's just one step. No complex install procedure, you get all your normal tools and stuff.

Alternatively, create a 'test projects' git(hub) project, with the files you want in it, and create a new branch for each idea. That way you get backups as well.

12
shadyabhi 8 hours ago 1 reply      
So, what are its limitations, if any?
13
zenogaisis 8 hours ago 2 replies      
Why would I do that :S
10
Debian to use Xfce as its standard desktop h-online.com
32 points by googletron  3 hours ago   7 comments top 5
1
jmillikin 2 hours ago 0 replies      
2
w1ntermute 2 hours ago 1 reply      
I'm very glad to see this happen, for a coincidental reason. Xfce is a great DE and a solid implementation of the traditional desktop UI paradigm that many of us ("us" being power users) are comfortable with and productive in. While the major players (Apple, Canonical, Microsoft) chase after the latest UI fads in an attempt to appease casual users, we're the ones getting the shaft.
3
bootload 18 minutes ago 0 replies      
"... Hess explained that the measures ensure that the standard desktop will fit on the first installation CD, which GNOME currently does not ... Unfortunately, Debian does not have a well-defined procedure for making such choices, So, I've decided to be bold ..."

And that's the only reason? This is a big step - no evaluation, no asking users? Is the only reason GNOME isn't being adopted that it won't fit on the install CD?

4
factorialboy 47 minutes ago 0 replies      
I'm an Ubuntu user rather than a Debian user, but XFCE is awesome. It's my default environment after trying out Gnome 2, Gnome 3 (Shell), Unity, Cinnamon, KDE and whatever else I could get my hands on.
5
byroot 46 minutes ago 1 reply      
Using a better compression algorithm (LZMA2) is also planned by the Debian maintainers to make GNOME fit on the CD,
so this commit will probably be reverted before the final release.
11
The Myth of the Super Programmer simpleprogrammer.com
24 points by jsonmez  3 hours ago   29 comments top 18
1
crazygringo 18 minutes ago 1 reply      
The author blows his own argument out of the water when he says:

> I would venture to say that really good programmers are about 10 to 20 times as effective as just average ones.

Aren't those the super ones? Or at least on that track?

But disregarding that, I'd personally say the super programmers are the ones capable of inventing git, or bitcoin. I don't care what you say, 99.9% of programmers simply couldn't do that. Saying super programmers don't exist is like saying Einstein's theory of general relativity is decomposable into easy parts.

Sorry, I don't think so.

2
spacemanaki 2 hours ago 2 replies      
I think this post has good intentions. There's some truth to it and there's definitely value in plumping yourself up from time to time. No one should be worrying about whether they're a "Real Programmer" or not. I agree that there's no value in beating yourself up or beating up your work.

That said, I think the post takes it too far, and veers off into self-congratulatory back-patting that I think is not valuable, at all.

I also think the author is wrong about hard problems, or at least managed to come up with a tautology that's not very interesting. Sure, many (and perhaps all) hard problems can be decomposed into smaller, simpler problems. But that's often not trivial, and the decomposition step often requires a lot of hard work. Even if hard problems can be broken down into smaller pieces, that doesn't always make the problems easier, even if it makes them more manageable. They still might require tremendous effort.

The example given isn't a very good one; that sum just isn't that advanced mathematics. Now that's not to say it's not Greek to some programmers, and not to pooh-pooh people who have a hard time with math; I have struggled with it a lot. But there's a lot of math that really is hard to grasp, and isn't just a question of learning the notation, which is what the sum example is really about.

The challenge sets up a straw man, IMHO. "Tell us that complex problem that is too difficult for the rest of us to possibly understand." It's asking the reader to be an arrogant ass.

"I've just never seen a problem that couldn't be broken down into simple to understand components, yet."

Again, I think this just isn't a useful criterion. Being broken down into simple-to-understand components doesn't make things easy. Here's an example, from my own dabblings: I am absolutely fascinated by programming language theory and compilers. Recently, I've been trying to get at the heart of two sub-topics: parsers and types. I've been struggling to find presentations of parsers and parsing that really peel back the magic and show how everything works, without just punting and letting Lex and Yacc do the dirty work (btw if you know of a compiler text that doesn't do this, let me know!). Yes, you can break parsing into a lexing step built up from regular expressions. Yes, you can translate regular expressions into NFAs and convert those into DFAs... and so on. But putting this together into a lexer generator is non-trivial. It's not easy, even if the components are simple. It's the same with type theory. You can write simple syntax-directed type checkers and they might fit in your hand, but if you really want to understand type theory, it turns out there's a lot of math and proofs to get through. Sure, you can decompose that into topics from set theory and so on, but this doesn't make it easier.

More and more, I think his core argument is just really flawed. There's some connection here with Rich Hickey's "Simple made easy" talk, where he argues that simplicity can be hard to come by, and how easy and simple are different... but I've written too much and will leave that as an exercise :)

3
pmb 1 hour ago 0 replies      
It's like math. You don't believe the difference between the best and the rest until you meet the best. It is apropos that someone linked to a Terry Tao math post in this discussion, because right now, in math, Terry Tao is definitely in the "best" category. Unless we have chased the people like him away from programming, they must certainly exist in our field as well. However, our works are largely anonymous, which means that it is much harder to tell who is the best.

But Rob Pike probably is up there.

4
btilly 1 hour ago 1 reply      
Nice thesis.

He's wrong.

The super-programmer that I'll hold up is Jeff Dean (see http://research.google.com/people/jeff/ for some of what he has done) and the task that I'll hold up is coming up with new standards and abstractions for creating scalable software.

Sure, it is possible to break any one of the things he has done down into simple components. As a random example look at any of the many blog posts that discuss how MapReduce works and how to implement it yourself. (Heck, he had to do so when he turned his ideas into running software.) But could you have come up with those ways to think about problems? I am sure that I could not.

5
tytso 16 minutes ago 0 replies      
Taking complex things and making them simple is not the mark of a "super programmer". Or, at least, it's not enough. A highly skilled programmer needs to be able to deeply understand multiple layers of abstraction at the same time. He or she needs to be able to know what an abstraction layer promises, and what its performance characteristics are --- and also be able to grok the layers above and below that abstraction layer to a similar deep level.

I call it "abstraction surfing", and it's not something that all people can do well. I believe having this skill tends to correlate very well with high productivity programmers --- and it's not a skill that all people have.

6
noonespecial 41 minutes ago 0 replies      
"Superstars" seem different to me not because they can or can't decompose problems into steps, but because they seem to have a 6th sense for what is possible.

Everything is easier once someone has done it before. You don't even have to know how it was done; just knowing it's possible is enough. Superstars can substitute some sort of magic pixie dust(1) for this foreknowledge and proceed like they already know it's been done and they're just rediscovering the minutiae of making it happen.

It's humbling to me when I meet them, because they think thoughts and have insights that I know I would not have had even if I worked on the same problem for 100 years.

(1) I strongly suspect that it's made almost, but not quite, entirely of crazy.

7
simonsarris 2 hours ago 0 replies      
> But I do have a challenge for you. Surely you know a “super programmer.” You might even be one. If so, let's hear from you or them. Tell us that complex problem that is too difficult for the rest of us to possibly understand.

"There are only two hard problems in Computer Science: cache invalidation and naming things." -Phil Karlton

I'm a little hesitant but inclined to agree.

I hesitate because like most discussions on these matters, the terms are defined differently for everyone I talk to and that makes agreeing a little difficult without qualifying a lot of statements.

I'm not sure I've ever come across something that was simply too difficult to understand, per se, just things that take different amounts of time and different amounts of diligence to understand or implement.[1]

A lot of content in programming (and math) is what I like to call "NP-understanding complete": problems that are difficult to find solutions for, but that are very easy to comprehend once you are shown a solution. Coming up with novel solutions to these problems in the first place is admirable, and I would be inclined to call those who do "super", but I don't know if there are very many "super" people who consistently do this. It may well be that finding novel solutions is merely a product of spending a lot of time at the frontier of something, and in fact it's not only not a gift, but difficult to see them coming (the solutions) until you have them!

[1] When looking at superhuman accomplishments or worrying that I might not understand some subject (something I was completely frightened of as a child) I'm always reminded of The Ladder of Saint Augustine by Longfellow.

    The mighty pyramids of stone
That wedge-like cleave the desert airs,
When nearer seen, and better known,
Are but gigantic flights of stairs.

The distant mountains, that uprear
Their solid bastions to the skies,
Are crossed by pathways, that appear
As we to higher levels rise.

(The full text: http://www.poetryfoundation.org/poem/173902)

8
TamDenholm 2 hours ago 0 replies      
I think this kinda goes along with the four stages of competence. When you are at the level of unconscious incompetence in any subject matter, it all seems like black magic to you: you've no idea how it works and it seems like the most difficult thing in the world. But just a little insight into how something works can open your mind enough to see that, like most things, it just takes time and effort to learn just about anything.
9
pcestrada 1 hour ago 1 reply      
Sure you might be able to understand something given enough time and guidance. But to me a super programmer is someone who is able to make that leap and create something novel, useful, and valuable. For example, John Carmack, who many would describe as a super programmer, showed the rest of the game industry what could be done with 2.5D in Doom, and later commodity 3D hardware with Quake. You can certainly review his code since it's open source and understand it, but how many people could have independently done what he did when he did it? Not many I would wager.
10
dustingetz 39 minutes ago 0 replies      
Rich Hickey and his collaborators on Clojure and Datomic are doing things that few programmers are capable of.

Anyway, author sees people who are too humble. I see people who overestimate their skills. Maybe these are the same thing but at least humility is a virtue.

11
slurgfest 34 minutes ago 0 replies      
Any sufficiently advanced craft is, from the perspective of people who do not have that craft, indistinguishable from magic.

Doesn't mean it is not actually ordinary and accessible, though...

I question the premise that there is some qualitative difference between the "super" people and the rest of us dunderheads.

Rather, there is a hard line between what you understand and feel confident to reproduce, and what seems magical to you.

12
kenmazy 14 minutes ago 0 replies      
From my experience with super programmers:

The super programmer converts complex ideas into code coherent to the normal programmer.

The super programmer makes difficult decisions where the normal programmer sees only one option.

13
j45 1 hour ago 1 reply      
One quality I find with above-average programmers is they type so much faster than the average programmers.

100-140 WPM on average.

What's the importance? They code quicker, hit walls quicker, debug quicker, fix quicker, ship quicker. They also have to be able to think quicker once the steps ahead of them are clear.

Getting through this cycle quicker I think makes a programmer much more effective at building lots more software, and through it, finding the lessons from the experiences a lot quicker.

14
dsymonds 2 hours ago 0 replies      
This is nonsense. His key argument is that hard tasks are reducible to simple tasks, and therefore they are actually not all that hard.

Ask anyone who's implemented, say, Paxos, and ask them whether because the individual steps were easy the whole system is therefore easy. Or indeed any other distributed system. Just because something can eventually be decomposed into simpler things does not mean that it is a fundamentally simple construct, nor does it mean that a programmer who can manage the simpler constituents is competent to handle the overall system.

15
Arelius 2 hours ago 0 replies      
> But I do have a challenge for you. Surely you know a “super programmer.” You might even be one. If so, let's hear from you or them. Tell us that complex problem that is too difficult for the rest of us to possibly understand.

I generally tend to agree, and for the majority of computation I think this is generally true. Compilers, for instance, are one thing that I think the majority of programmers imagine to be simply above them, while in reality they are much like the rest of the software we write.

However, as a counter-example, the hardest things I've ever written include a rather complicated lock-free data structure, specifically optimized for 3D effects rendering and simulation. The amount of mental capacity that took was pretty astounding, and what I took from the experience is that I'd try to never have to do something of such complexity again. I am also, however, unsure if I could complete a similar task again. It did give me an appreciation that some things are just simply harder.

16
papaver 56 minutes ago 0 replies      
it's not really about programming.

it's either you are a superstar problem solver or a simple problem solver. the programming is just a means to an end. a superstar problem solver has the ability to take most problems, break them down, and find the means to solve them, whether they are related to programming or math or most anything else.

superstar problem solvers can jump into most fields of software and learn how to cope in their new environments. every job i've had has forced me to learn a different language and work on a completely different problem. the problems may change, but how one attacks and breaks them down rarely does.

17
mathgladiator 2 hours ago 0 replies      
I think this could be tied to Terry Tao's recent piece:

http://terrytao.wordpress.com/career-advice/does-one-have-to...

http://news.ycombinator.com/item?id=4370338

- - -

The problem that I have with "programmer" is that there are a lot of clueless people who call themselves "programmer", and I think there is a clear boundary. Given how large the population of clueless programmers is, I'd define the non-clueless programmers as super any day.

18
dman 2 hours ago 0 replies      
The super programmer does not necessarily solve a different category of problems. She sees maintainable and performant abstractions when she encounters new problems. She sees order where others see chaos.
12
What Ouya Isn't hazzens.com
44 points by hazzen  5 hours ago   24 comments top 12
1
SCdF 2 hours ago 1 reply      
Initially I was excited for Ouya. Not because I was ever going to own one (console/tv gaming sucks for my lifestyle) but because it would mean there would be better games on Android.

Except now I'm not so sure.

Making a game work on a touchscreen 30cm from your face is a completely different proposition from making a game on a controller 4m in front of your face. It's nice that the underlying OS is the same, but it's not that nice.

It's not a case of just adding controller support-- your entire game changes. Fruit Ninja works on touchscreens, it doesn't work on controller. Street Fighter works on controllers, it doesn't work on touch screens. And FPS works well on neither (go go mouse + keyboard).

It's not just the controller either: playing games on a couch in your living room has different motivations to playing a game on a smartphone. Shallow 'toilet games' make sense on smartphones, they do not make sense on consoles. Deep 1hr+ strategy games, or games with consistent network access etc, make sense on consoles, they don't make sense on phones.

So I think one of two things will happen: games will either heavily target one platform or the other, and have either no support or horribly crippled support for other control schemes and mechanics, or games will genericise to the point where your controls and 'motive' are less important.

I'm not a big fan of either result.

Note: Console vs. PC is a good case study. They've had years to get this right, and there are still lots of horrible console ports. I'm not talking about bugs or graphic quality here either, but stuff like the controls on PC being awful, the UI being targeted toward consoles (play Skyrim or Oblivion to see what I mean), hilarious console-focused messages about not turning off my computer while the game is saving, etc. If Bethesda can't spend the money getting two UIs right, I can't imagine an indie dev being able to.

2
vibrunazo 5 hours ago 2 replies      
The key he is missing is Android. The huge advantage of the OUYA is that it leverages the Android ecosystem. As a game developer myself, I can say we were already developing OUYA-ready games before the OUYA existed. Many (most?) games in the Play Store right now are already ready for the OUYA (gamepad support, freemium, 10-foot experience) or close to it. These are good practices for developers to be following on Android anyway, and now the OUYA is an additional encouragement.

I agree with the author that the OUYA is not the future of gaming. Android is the future of gaming; the OUYA is just a tiny piece of the whole puzzle. There will be plenty of competition in Android gaming-focused set-top boxes in the next few years. The OUYA is just the first one. Maybe the OUYA itself will fail and lose ground to the competition. I don't know, nor do I care. It doesn't matter. What matters is that at the end of the day it will have helped drive the Android ecosystem forward. The OUYA sends a clear message to the incumbents. Even if the OUYA dies because it couldn't outsell the new Sony/Samsung/whatever Android gaming boxes, it will still have succeeded from a game developer's point of view, simply because the OUYA ever existed. More developers will be making their games compatible with consoles. More OEMs will be building Android gaming devices of different formats. Both game developers and gamers win.

He's right, the OUYA is just a so-so box without much that's special about it by itself. But it doesn't make sense to look at the OUYA by itself. Almost no one would've backed the OUYA if it were just a new platform never seen before. But it's not. The OUYA is just a detail that makes a clear trend more obvious (though many of us have been saying this for years): Android is the gaming platform of the future.

3
kevingadd 1 hour ago 0 replies      
I like the post overall, but the tech analysis is really weak. It makes it look like you spent a few minutes reading specs on wikipedia and decided that was enough to compare and contrast the hardware. It's not, and that comparison doesn't really add anything to the post.

To properly compare the processors you need to note that they are using different instruction sets and that the 360's processors are in-order with hyperthreading. Are the ouya's cpu cores in-order or out of order? do they have hyperthreading? What's the memory latency like? How big are the caches?

To properly compare the GPUs you need to understand the major differences in architecture. The 360 didn't have '512MB of memory and 10MB of video memory'; it had 512MB of memory that was shared between CPU and GPU (which means extremely cheap direct access to memory used by the GPU - something with no analog in modern PC architectures) and then 10MB of extremely, extremely high-speed framebuffer EDRAM on the GPU. These two unusual design decisions meant that overdraw was nearly free on the 360 (because the framebuffer memory was so fast) and that you could use the GPU to help out with CPU computations or have the CPU help out with rendering because both could freely access each other's memory. The Ouya could have double the clock speed and memory of the 360 and still fail to run 360 games if it has no equivalent for those features, because if you have a GPU/CPU memory split, you can end up needing two copies (system memory and GPU memory) of data, and it becomes much more difficult to have the GPU and CPU assist each other.

Someone who's done development for the 360 with the native dev kit could probably provide more detail here, I've only used the XNA dev tools (so GPU access, but no native CPU access) - IIRC there are some other perks the 360 has like a custom vector instruction set that might also give it an advantage over similarly-clocked competitors.

4
malkia 4 hours ago 2 replies      
On one side I want it to succeed (the liberal hobbyist in me), on the other side I see some problems (the conservative console game developer in me)

I don't see how AAA titles would be delivered to this device. And without AAA titles, the device can't be primarily about games.

What used to fit on a CD-ROM in the PSX days, and then on a DVD for the PS2/Xbox, now needs bigger and more storage. With the recent download caps from internet companies, that becomes even harder. It's one thing to stream a 2-3 hr movie; it's another thing entirely to get the assets there on time, even to places where bandwidth is not that great.

TRC - the Technical Requirements Certification process - this is the gate to quality. It's a much harder and more complete process than Apple's or Android's (if Android even has one).

Security - the hardest part to get right. You can't succeed here; it's a goalie position. But if you can hold out long enough, you'll be good. Yes, piracy is what makes video games unsellable in China (so far micro-payments seem to work there).

Original titles - without them, or much-improved ports of something else, there is no direct incentive to buy it.

Second nature - the device does not serve some other purpose to be used for besides games. When I bought my PS2, there were not many PS2 games, but it was (and still is) a pretty good and cheap DVD player.

5
jeffool 1 hour ago 0 replies      
What Ouya is, is a free emulator for older games. And an OnLive box. And an XBMC box. And by default of running Android, a Netflix box.

There are 3600 people willing to pay to reserve their usernames. Almost 60k people have bought the hope of having one. At worst, it will get a retail push and be a BlackBerry PlayBook level failure. Or it may not, and it could be a Raspberry Pi like success (in terms of buzz). Or anything in between.

What I don't get, is the immense hate it gets. I don't see anyone calling it "the future of gaming", except maybe the press people behind the product? And that's kind of their job. The derision it gets from people who seem to think they're on a Crusade to teach the unenlightened that they're being bamboozled is weird. It's a game console, primarily aimed at a tech-savvy audience that doesn't mind hacking their toys, and even soldering them. And yet I haven't heard that audience reach even the levels of annoyance of an iOS/Android argument. Much less one that needs to be told (repeatedly) that their toy WILL NOT change gaming! ... Who pigeonholed the OUYA as a flagship in the revolution anyway? Marketing? So air your screeds at marketing people. Not those who enjoy tinkering with computers.

6
stcredzero 2 hours ago 0 replies      
> Ouya is a lot of promises and fluffy ideals in an industry that doesn't give two shits about them. An open machine plugged into my TV sounds cool, but what would I use it for that one of the myriad non-open options doesn't already do?

Not very friendly sounding, but this last paragraph is the very crux of the matter. If it's as open as we hope it will be, then there could be a huge number of things that it does that non-open options don't do.

7
lukifer 1 hour ago 0 replies      
There is a case to be made that mindshare and user culture matters more to long-term success than the actual product, assuming the product meets a minimum threshold of quality. (See: the Mac user-base from around 1995-2005).

If the OUYA succeeds, which admittedly looks like a long shot for all the reasons described here and elsewhere, it will be due to capturing hearts and minds of devs and power users for the years it will take to gestate into a profitable platform, and not because it's the best value prop for either users or devs on day 1.

8
makmanalp 4 hours ago 0 replies      
I think what he's missing is that a lot of people are expressing interest in building games and apps for it specifically and maybe exclusively.

This addresses the weak point of Android and also of newly released consoles. New consoles (the Wii comes to mind) tend to have a lack of games, which in turn tends to make them less attractive. Android phones are pretty damn good these days, but the apps are worse than iOS ones in general, which gives me pause when considering it.

Good devices without content suck, and vice versa (maybe a bit less so). So it's all about packaging good content with your good device. I think that's what they are aiming at.

9
thechut 4 hours ago 2 replies      
You make it sound like developers need to develop specifically for OUYA, but many Android games can be very easily ported to the OUYA platform. By using Android, OUYA is also tapping into the giant pool of existing Android and Java software engineering talent. They are attempting to bridge the gap between smart phone games and console games.

No, it's probably not the future of set top entertainment, but for $99 I would take an OUYA over an Apple TV, Roku, or whatever else any day of the week. Just because it isn't revolutionary doesn't mean it won't be cool/useful/successful.

10
heretohelp 5 hours ago 2 replies      
Finally, some rationality.

It's usually the people who aren't actually gamers or otherwise familiar with the industry that will breathlessly praise Ouya. Usually people who are wantrepreneurs dreaming of the day when they can live off selling digital snake oil.

That the Ouya would be experienced by them as an apotheosis of that is not surprising.

I've been gaming since I was 2 years old (NES). The Kickstarter backers wasted their money. We already have open platforms, we're just ignoring them.

11
pinchduck 24 minutes ago 0 replies      
Fine, Mr. Crankypants, don't buy one.
12
stewie2 3 hours ago 1 reply      
I hope it can be a tegra 4 console with a discrete gpu, something like geforce 660.
13
H - The surprising truth about heroin and addiction reason.com
31 points by dangeur  2 hours ago   7 comments top 4
1
robbiep 1 hour ago 1 reply      
As someone in the medical field, I think this article does a very good job of restoring the balance and correcting some preconceptions regarding substance dependence and addiction - i.e. that not all substances are going to lead to lifelong dependence and addiction, and in fact that only occurs in a small percentage of users.
This is established medical fact and is taught in all medical schools now.
However, I worry that it might make people consider heroin as something that is okay to have a crack at... There are very real psychosocial dangers of heroin should you end up being dependent and addicted - and the depths of despair that users end up in should not be ignored. No-one can predict ahead of time whether you will be okay on it, or whether you will follow the stereotypical pattern with which we are familiar from popular culture.

The article also fails to mention that once addicted, and then having returned their lives to some base level where they are able to seek help (assuming they have not died of an overdose), 90% of patients who start on the methadone program are still on it 10 years later - in Australia the methadone program grows at about 4-6% per year, representing new people coming on and no-one really leaving.
Not cool, and not a good lifestyle!

2
gee_totes 44 minutes ago 2 replies      
According to this article[0] the street price of a dose of heroin is $10-$25. However, that street price reflects 40-50 times inflation[1]. If heroin were relieved of that price inflation, a dose would cost between 20 cents and 63 cents.

Compare that to cigarettes. I once heard from a foreign cigarette manufacturer that the cost of manufacture for a pack of cigarettes is 50 cents. A pack of cigarettes contains 20 doses of nicotine and costs around 14 dollars (in New York City, American Spirit brand). Each dose (cigarette) costs 70 cents inflated and took only two cents to produce. Nicotine suffers from a 35 times price inflation due to legalization.
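
(For illustration, the arithmetic above as a quick Python sanity check; the figures are simply the ones quoted, not new data:)

  # Rough check of the quoted figures (illustrative only):
  heroin_street = (10, 25)                # dollars per dose, as quoted
  inflation = (50, 40)                    # quoted 40-50x markup
  print(heroin_street[0] / inflation[0],  # ~$0.20
        heroin_street[1] / inflation[1])  # ~$0.63
  pack_price, pack_cost, doses = 14.0, 0.50, 20
  print(pack_price / doses,   # ~$0.70 per cigarette at street price
        pack_cost / doses)    # ~$0.025 to produce (roughly the "two cents" quoted)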

These look like two industries that are ripe for disruption.

[0]http://heroin.net/about/how-much-does-heroin-cost/

[1]http://reason.com/archives/2003/06/01/h/1

3
papaver 1 hour ago 0 replies      
i think this quote really sums it up, "I try very hard not to use when I'm miserable, because that's what gets me into trouble."

it all depends on your personality. if you can get high and be responsible you can probably cope with using in moderation. unfortunately, i imagine a majority of people are not like that. i know several friends that have very addictive personalities and getting into junk would have ruined their lives.

a lot of it depends on how it's taken as well. mainlining vs snorting vs smoking are all significantly different experiences, as is the amount taken. one can function when it's only a small amount taken; increase the dose and you'll be lying around not being able or wanting to move.

i've tried everything under the moon though and i consider heroin pretty dangerous. i would not advise anyone to try it unless they have a very strong head. it is also one of the most amazing experiences i have ever had. the singer of sublime thought he could jump in and get out when he wanted, he overdosed, quite a sad story.

4
nikatwork 24 minutes ago 0 replies      
If you'd like a deeper look into this issue across a wider variety of substances, I highly recommend the book "Saying Yes":

http://www.amazon.com/Saying-Yes-Defense-Drug-Use/dp/1585422...

14
Show HN: Prune Unwanted Stories from HN Front Page swapped.cc
23 points by latitude  4 hours ago   13 comments top 5
1
vlad 55 minutes ago 0 replies      
Interesting idea. Last month I released a Chrome extension for Hacker News that lets you ban stories by domain name (simply choose the domain names in the options, so you don't have to click Hide every time like you do here). It also shows user profiles in their own popup, including twitter button and photo integration. Again, specific features can be configured or turned off in the options panel. New features coming soon!

http://vlad.github.com/autobahn/

2
eugenes 42 minutes ago 0 replies      
Thank you for making it. Chrome now insists that you install extensions and userscripts from their "Web Store" so the extension doesn't work.

The bookmarklet works great but can't retain state over page refreshes. This could be super useful otherwise.

3
latitude 4 hours ago 0 replies      
I've been using this tweak for a month now, and I've converged to hiding maybe a story or two per day. It's a great stress reliever if nothing else, especially during crunch times :-)
4
dfc 3 hours ago 2 replies      
I would love something like this for the "new stories" page that only displayed links with 2 or more votes.
5
stbullard 2 hours ago 2 replies      
Cool. What I'd really like would be a way to hide stories across multiple devices!
15
Midnight project: Password manager without a password manager github.com
20 points by jaseg  4 hours ago   14 comments top 8
1
antimora 1 hour ago 1 reply      
This approach is cryptographically WEAKER than a password manager!

Because password[0], password[1], ... password[n] are all related through a common salt and master password string (and a known domain name), whereas passwords stored in a password manager are independent.

Therefore, in theory, if I know a few of your passwords (let's say I own 10 top domains and you've got accounts with me), I can crack your salt and password file, or at least generate probable passwords for other domains.
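
To make the relationship concrete, here is a minimal Python sketch of this kind of derivation scheme (this is not the submission's code; the function and parameter names are invented for illustration):

  import base64
  import hashlib

  def derive_password(master, salt, domain, length=16):
      # Hypothetical helper: every per-site password is a pure function
      # of the same master password and salt, plus the (public) domain.
      digest = hashlib.pbkdf2_hmac('sha256',
                                   (master + domain).encode(),
                                   salt.encode(),
                                   100000)
      return base64.b64encode(digest).decode()[:length]

  pw_a = derive_password('master-secret', 'my-salt', 'example.com')
  pw_b = derive_password('master-secret', 'my-salt', 'news.ycombinator.com')
  # Both passwords collapse to guessing ('master-secret', 'my-salt'),
  # which is the shared-secret relationship described above; passwords
  # stored independently in a manager have no such relation.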

2
rlpb 3 hours ago 1 reply      
You receive an email from a sheepish website owner admitting that your password has been compromised and asking you to change it. Now what do you do?
3
cypherpunks01 1 hour ago 0 replies      
I trust Stanford PwdHash (https://www.pwdhash.com) a lot more than this, and it basically does the same thing with some nice features and browser plugins.
4
greenyoda 2 hours ago 1 reply      
One major problem with this scheme is that if someone steals (or subpoenas) your computer they can discover all your passwords just by looking at your shell history. A password manager that's secured by a strong password doesn't have that vulnerability.

Edit: As jroes pointed out below, this is not a problem.

5
StavrosK 2 hours ago 0 replies      
This sounds similar to SuperGenPass, which I've used extensively since I heard about it.
6
bigbird 1 hour ago 0 replies      
Here's a little script that does something similar with some features that make it more useful (for hackers at any rate):

https://github.com/jtmaher/Passwords

7
jayfuerstenberg 2 hours ago 1 reply      
This is fine for hackers but for mere mortals it looks cumbersome.
8
jhawthorn 2 hours ago 1 reply      
16
Colour printing reaches its ultimate resolution nature.com
48 points by ananyob  8 hours ago   10 comments top 5
1
lini 7 hours ago 3 replies      
It's funny they still use the Lenna image for testing. It's from a Playboy centerfold in 1972[1].

[1]https://secure.wikimedia.org/wikipedia/en/wiki/Lenna

2
DenisM 7 hours ago 0 replies      
Salient point: the diffraction limit sets in when the distance between two objects is equal to half the wavelength of the light used for imaging. The wavelength in the middle of the colour spectrum is about 500 nanometres. That means the pixels in a printed image can't be spaced any closer together than about 250 nanometres without looking smudged. Yang's images pack the pixels at just this distance.

Hence, "ultimate resolution".

3
geuis 8 hours ago 1 reply      
It's 100,000 dpi, not 10k.
4
Cushman 6 hours ago 1 reply      
Could we theoretically control the resonance of these nano-posts electronically, and use this technology as the basis for a next-generation reflective display?
5
malkia 6 hours ago 0 replies      
Beat that Retina :).... Just kidding... Retina is just fine for our poor human eyes.

But this is incredible nonetheless. Just Enhance!

17
Erlang programmer's view on Curiosity Rover software jlouisramblings.blogspot.com
153 points by deno  16 hours ago   63 comments top 15
1
pron 7 hours ago 3 replies      
I absolutely love Erlang and think that, along with Clojure, it provides a complete ideology for developing modern software.

But the article implies (and more than once) that the rover's architecture borrows from Erlang, while the opposite is true. Erlang adopted common best practices from fault-tolerant, mission-critical software, and packaged them in a language and runtime that make deviating from those principles difficult.

The rover's software shows Erlang's roots, not its legacy.

2
1gor 10 hours ago 1 reply      

   Any _robust_ C program contains an ad-hoc,
   informally-specified, bug-ridden, slow
   implementation of half of Erlang...

http://c2.com/cgi/wiki?GreenspunsTenthRuleOfProgramming

3
Tloewald 8 hours ago 1 reply      
Back in the 90s there was a software engineering fad (unfair term but it was faddish at the time) called the process maturity index, and JPL was one of two software development sites that qualified for the highest rank (5) which involves continuous improvement, measuring everything, and going from rigorous spec to code via mathematical proof.

This process (which Ed Yourdon neatly eviscerated when applied to business software) produces software that is as reliable as the specification and underlying hardware.

4
rubyrescue 13 hours ago 0 replies      
Great article. The only thing he left out is the parallel to Erlang Supervisor Trees, which give the ability to restart parts of the system that have failed in some way without affecting the overall system.
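
For readers unfamiliar with the idea, here is a toy Python sketch of a supervisor loop (this is neither OTP nor anything from the rover; the names are made up and the restart policy is deliberately simplistic):

  import multiprocessing
  import time

  def supervise(child_targets):
      # child_targets: {name: callable} -- each callable is a long-running task.
      children = {name: multiprocessing.Process(target=fn)
                  for name, fn in child_targets.items()}
      for proc in children.values():
          proc.start()
      while True:
          for name, proc in list(children.items()):
              if not proc.is_alive():
                  # A child died: restart just that child, leaving the
                  # rest of the system untouched.
                  children[name] = multiprocessing.Process(
                      target=child_targets[name])
                  children[name].start()
          time.sleep(1)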
5
donpdonp 7 hours ago 0 replies      
"Recursion is shunned upon for instance,...message passing is the preferred way of communicating between subsystems....isolation is part of the coding guidelines... The Erlang programmer nods at the practices."

Best "Smug Programmer" line ever.

6
matthavener 12 hours ago 2 replies      
The biggest difference from Erlang is VxWorks's inability to isolate task faults or a runaway high-priority process. (Tasks are analogous to Processes in Erlang.) VxWorks 6.0 supports isolation to some degree, but it was released in '04, after the design work on the rover started. Without total isolation, a lot of the supervisor benefits of VxWorks go away.
7
vbtemp 12 hours ago 4 replies      
The motivation for writing the software in C is this: code reuse. NASA and its associated labs have produced some rock solid software in C. In space missions the RAD750 is commonly used (with its non-hardened version, the MCP), along with the Leon family of processors. Test beds and other ground hardware are often little-endian Intel processors. VxWorks is commonly used on many missions and ground systems, but so are QNX, Linux, RTEMS, etc... The only common thing the diverse set of hardware, operating systems, and compiler tool chains all support is ANSI C. This means that nifty languages like Erlang or whatever - though there may be a solid case for using them - are not practical in this circumstance.

I know some clever folks in the business have done interesting work on ML-to-C compilers, but it's still in the early R&D phase at this point - the compiler itself would have to be thoroughly vetted.

8
DanielBMarkham 12 hours ago 2 replies      
Message-passing better than STM? Wonder why?
9
pgeorgi 8 hours ago 2 replies      
"We know that most of the code is written in C and that it comprises 2.5 Megalines of code, roughly[1]. One may wonder why it is possible to write such a complex system and have it work. This is the Erlang programmers view."

Contrast this with https://www.ohloh.net/p/erlang: "In a Nutshell, Erlang has had 7,332 commits made by 162 contributors representing 2,346,438 lines of code"

I'm not sure if those roughly 154kloc really make a difference...

10
davidw 8 hours ago 0 replies      
Great article and comparison, and a nice way of highlighting one of Erlang's strengths.

However: I'm dubious that it's a strength many people here need. No, the article did not say anything about that, but I am. A few minutes of downtime, now and then, for a web site that's small and iterating rapidly to find a good market fit, is not the biggest problem. And while Erlang isn't bad at that, I don't think it's as fast to code in as something like Rails, which has all kinds of stuff ready to go out of the box.

That said, I'd still recommend learning the language, just because it's so cool how it works under the hood, and because sooner or later, something will take its best ideas and get popular, so having an idea how that kind of thing works will still be beneficial.

11
sausagefeet 11 hours ago 2 replies      
Does anyone have any knowledge of why Ada isn't used over C? Specifically, it seems like Ada gives you a lot better tools when it comes to numerical overflows/underflows.

Also, what compiler does NASA use? Something like CompCert? What kind of compiler flags? Do they run it through an optimizer at all?

12
jeremiep 13 hours ago 1 reply      
Great article! I'd like to add that the D programming language also offers a lot of features to create robust code with multiple paradigms, although the syntax is heavily C oriented rather than functional.

'immutable' and 'shared' are added to the familiar C 'const' qualifier: 'immutable' is for data that will never change (as opposed to merely not changing in the declaring scope), 'shared' is for data shared across threads, and for everything else you are encouraged to use message passing via the std.concurrency module.

Pure functional code can be enforced by the compiler using the 'pure' qualifier. There is even compile-time function evaluation for functions called with constant arguments, which is awesome when combined with its type-safe generics and meta-programming.

There's unit tests, contracts, invariants and documentation support right in the language. Plus the compiler does profiling and code coverage.

I'd be curious to test D against Erlang for such a system. (Not saying Erlang shouldn't be used, it's the next language on my to-learn list, just that the switch to functional might be too radical for most developers used to imperative and OO and D provides the best of both worlds.)

13
thepumpkin1979 11 hours ago 4 replies      
deno, I was wondering: if it's so similar to Erlang, why not use Erlang instead of C? What is the major drawback - footprint?
14
deno 12 hours ago 1 reply      
The figure is just mentioned to say that the code base is quite complex. And what are you talking about? It's not a rant on anything. Certainly not the 2.5 MLOC.
15
ricardobeat 12 hours ago 4 replies      
So a Mars Rover is much closer to a browser/backbone/node.js app than I could ever imagine. The basic structure is surprisingly similar to javascript apps these days: isolated modules, message passing/event loop, fault tolerance.
18
Biological impacts of the Fukushima accident on the pale grass blue butterfly nature.com
22 points by bootload  5 hours ago   7 comments top 4
1
niels_olson 1 hour ago 0 replies      
> However, precise information on exactly what occurred and on what is still ongoing is yet to be established

This is why, if you are trying to publish in another language, you need not only a translator, but a technical proofreader who knows what you're talking about. I do this for a Japanese company (ThinkSCIENCE), and I feel fairly certain they didn't have a proof reader with expertise. I'm starting up a pathology research collaboration in San Diego, and when I proof pathology papers, I still feel like there are things I need to look up or try a couple of different ways before I'm sure I understand. But this is basic grammar!

2
ghshephard 27 minutes ago 0 replies      
And so, we read about genetically altered Lepidopterans, said alterations brought about by nuclear waste release from a Japanese power accident. This isn't at all ironic.
3
corporalagumbo 1 hour ago 2 replies      
Why worry? Natural selection will iron out any weird genetic kinks within a few generations.
4
superprime 3 hours ago 1 reply      
this may have a wider impact, what with "a butterfly flapping its wings in Japan"
19
A comparison of C++11 language support in VS2012, g++ 4.7 and Clang 3.1 cpprocks.com
57 points by AndreyKarpov  10 hours ago   6 comments top 4
1
danieldk 9 hours ago 2 replies      
A detailed list that is regularly updated, with exact version numbers, has been around for years on the stdcxx Wiki:

http://wiki.apache.org/stdcxx/C++0xCompilerSupport

2
nikic 9 hours ago 0 replies      
This is missing the support list for the Intel compiler (which to my knowledge has pretty good C++11 coverage).
3
cpeterso 4 hours ago 0 replies      
GNU tracks C++11 support in recent gcc versions here:

http://gcc.gnu.org/projects/cxx0x.html

4
mparlane 5 hours ago 0 replies      
This is desperately missing color coded cells :(

So is the linked wiki at apache.org.

21
Show HN: Painless, productive views on iOS with Formotion for RubyMotion github.com
35 points by 10char  8 hours ago   11 comments top 4
1
_frog 6 hours ago 1 reply      
This is one of those things that makes RubyMotion an increasingly attractive prospect for me as an iOS developer. I wouldn't be ready to move off my familiar Objective-C stack quite yet, but if more projects like this one come along I might just consider it.

One thing I'm curious about is the support for TDD when using RubyMotion; there are a lot of RSpec clones out there for Objective-C, but they all feel a bit hacky due to the language's less than stellar support for DSLs. On top of that, testing doesn't seem to be all that big a deal in the iOS world. I'm wondering if a healthy dose of the Ruby ethos changes this.

2
poblano 2 hours ago 0 replies      
How hard would it be to adapt this for Ruby written for iOS using RhoMobile? I'm looking into whether to start learning RubyMotion vs. RhoMobile, and don't know much about either yet.
3
tjarmain 7 hours ago 0 replies      
This is insanely awesome, thanks a lot!
4
DenisM 7 hours ago 1 reply      
I looked at the page and could not figure out what it is. Is this a server-side HTML generator? An AJAX library? A native iOS control library? I guess I will never know.
22
Manufacturing is returning to America post-gazette.com
7 points by ph0rque  3 hours ago   discuss
24
Rootbeer GPU Compiler Lets Almost Any Java Code Run On the GPU github.com
203 points by doublextremevil  23 hours ago   74 comments top 16
1
kevingadd 22 hours ago  replies      
I had forgotten just how much I hate Java namespaces.

import edu.syr.pcpratts.rootbeer.testcases.rootbeertest.serialization.MMult;

This seems like a pretty amazing project if the claims are true, though - I wasn't aware that CUDA was able to express so many of the concepts used to implement Java applications. The performance data in the slides is certainly compelling!

2
wmf 22 hours ago 3 replies      
I wonder if this is as big a win as it sounds. Regardless of what language you're using, you have to "think GPU" to get any performance from GPUs. The additional overhead of using CUDA/OpenCL syntax seems pretty small in comparison.
3
tmurray 21 hours ago 0 replies      
this sort of thing is why NVIDIA is supporting LLVM:

http://nvidianews.nvidia.com/Releases/NVIDIA-Contributes-CUD...

in other words, with that and the right frontend, you can take Language X, compile to LLVM IR, and run it through the PTX backend to get CUDA.

however, in the grand scheme of things, this probably doesn't make GPU programming significantly easier to your average developer (as you still have to deal with big complicated parallel machines); what it really does is ease integration into various codebases in those different languages.

4
sillysaurus 22 hours ago 0 replies      
It would be good to post the code for the performance tests in the slides. https://raw.github.com/pcpratts/rootbeer1/master/doc/hpcc_ro...
5
mjs 15 hours ago 1 reply      
"Rootbeer was created using Test Driven Development and testing is essentially important in Rootbeer"

I'm not sure what the "essentially" means here, but this is the first "big" program I'm aware of that name-checks TDD, and a counter-example to my theory that programs where much of the programmer effort goes into algorithms and data structures are not suited to TDD.

Was the TDD approach "pure"? (Only one feature implemented at a time, with absolutely no design thought given to what language features might need to be implemented in the future.)

6
pjmlp 22 hours ago 1 reply      
A comparison with AMD's Java offering, Aparapi (http://developer.amd.com/zones/java/aparapi/pages/default.as...), would be interesting.
7
ChuckMcM 21 hours ago 1 reply      
It's pretty cool. Of course you are probably using that GPU in a desktop, but an on-die GPU in a server class machine? Something to consider.
8
AnthonBerg 16 hours ago 0 replies      
I prefer programming in CUDA over programming in Java. However, I have a lot of respect for the Java runtime.

If Rootbeer or something similar allows me to program CUDA stuff in Clojure, then I am impressed and excited.

9
DaNmarner 10 hours ago 1 reply      
The title captured the true character of Java: Write once, almost run almost everywhere.
10
rbanffy 13 hours ago 0 replies      
> The license is currently GPL, but I am planning on changing to a more permissive license. My overall goal is to get as many people using Rootbeer as possible.

It would be bad to compromise the freedoms of the users in order to be able to limit the freedoms of more of them.

Any reason why the GPLv3 would be considered unsuitable? How about the LGPLv3?

11
damncabbage 17 hours ago 1 reply      

  4. sleeping while inside a monitor.

... Can someone clarify what this is?

12
winter_blue 21 hours ago 1 reply      
Does this simply run your Java code on the GPU, or does it parallelize your code automatically? The latter would be really cool.
13
skardan 14 hours ago 1 reply      
Has anybody used Rootbeer with language which compiles to Java bytecode (like Clojure)?

It would be interesting to see how functional languages designed for parallelism perform on gpu.

14
stephen272 22 hours ago 1 reply      
Seems much nicer than coding directly for CUDA or OpenCL
15
pron 22 hours ago 0 replies      
At first glance this seems very impressive.
16
algad 15 hours ago 1 reply      
Any security implications? Malware running on the GPU?
25
The making of the Sony PlayStation pushsquare.com
75 points by CrazedGeek  13 hours ago   3 comments top 3
1
samlittlewood 3 hours ago 0 replies      
A big plus was Sony's attitude to developers, my (edited) comment from when the Edge Online article was posted:

The landscape into which it arrived:

Games for the successful cartridge based machines were selected and scheduled by Nintendo & SEGA. There were a limited number of slots per season - competition within genres was avoided, often in favour of in-house games or those from allied publishers.

There was a wave of CD based machines at various levels of development. These were characterised by wince inducing hyperbole, a lack of attention to any of the media of which they were supposed to represent the convergence, and an inability to engage with the developers who had any chance of creating titles that would sell.

I remember the first 3DO developer conference - big hotel bash - hot swag (why embroider an off-the-shelf shoulder bag when you can have one made exactly to your specs), Incomprehensible eulogies delivered by new-media 'visionaries', and, um, the chocolate CD. Meanwhile, the experienced games developers who were calling out the inadequacies of the hardware and OS were being told that they were irrelevant, and to shut up.

This was not a happy place for games developers - stuck between the politics and uncertainty of the cartridge machines, and the approaching new-media desert.

Into this arrived Sony:

They had bought a UK game developer - Psygnosis - who had (IIRC, courtesy of SN Systems) sorted out a good set of PC-based development tools with English documentation.
Once they had something to show, they invited ~100 UK developers to Great Marlborough Street for a chat. Other than knowledge, the giveaways ran to a cup of coffee & a couple of biscuits (1).
The tech. demos using a slow prototype (so T-Rex's head only) were fascinating, but other things were more significant: The attendance list demonstrated to those within it that Sony really 'got it' - this was a peer group of people who had made, and were making, games in the UK, and having assembled that audience, the Psygnosis staff (a part of that peer-group), explained how they wanted to help us make games - IIRC, not much persuading was needed.

A memorable moment for me that captured that attitude was the opening of the first Devcon in London: several hundred developers in a huge conference room - Phil Harrison (IIRC) walks onto the stage and casually asks if it is anyone's birthday today. A few hands go up. "Happy Birthday - here, have a Playstation" and indeed, those bodies got machines (at that time, rarer than hen's teeth).

(1) There may also have been sandwiches.

2
incision 10 hours ago 0 replies      
Same topic, more comprehensive article from a 2009 issue of EDGE magazine.

http://www.edge-online.com/features/making-playstation

3
kitsune_ 10 hours ago 0 replies      
I remember feeling excited about the CD add-on for the SNES... Ahh, tempus fugit.
26
Wristwatch in only HTML/CSS codepen.io
67 points by Flam  12 hours ago   28 comments top 12
1
DigitalSea 1 hour ago 0 replies      
A little disappointed to see JavaScript being used, even if it's only for getting a user's local time. Seems like a trivial thing to me when just having a CSS wristwatch is awesome enough without having to make it accurate. Besides that, this is awesome. Funny to see that a lot of the CSS is in fact browser-prefixed CSS, and that if all browsers supported the same properties it would be much smaller.

Aside: the CSS animations absolutely decimate my Core i7 PC and almost make Chrome unresponsive. The CPU usage is through the roof; can't wait until CSS animations are stable and more widely supported.

2
timmyd 12 hours ago 0 replies      
I love how the HTML is 30 lines and the CSS is like 300 or so. It's an awesome watch.

Keyframes are a beautiful thing in CSS, but boy oh boy I wish jQuery implemented them to reduce CSS size. It's becoming insane to have so many keyframes with 0%, 40%, 80%, 100% and so on.

3
rurounijones 1 hour ago 1 reply      
Is it generally accepted that "only HTML/CSS" really means "HTML/CSS/JS"?

To me the title is inaccurate but no one else seems to have mentioned it so...

4
nnnnni 10 hours ago 1 reply      
Replace line 58 with a base64 encoding of the image to make it pure HTML/CSS!
5
brianshumate 10 hours ago 1 reply      
Here's a zero image version: http://codepen.io/anon/pen/rEfKk

Using a CSS texture from http://lea.verou.me/css3patterns/

6
bkyan 2 hours ago 0 replies      
The exposed CSS source code seems to be missing the following:

#glass #center #smallHand, #glass #center #midHand, #glass #center #bigHand { -webkit-transform-origin: 0% 50%; }

It's working in your demo, so I'm guessing you simply forgot to copy it over to the exposed CSS source code? Awesome work, by the way! :)

7
j15e 10 hours ago 1 reply      
Very cool, but I am still amazed by how CSS animations are killing modern desktop CPUs. I guess this is because my Chrome browser is not using GPU acceleration for CSS?

If I duplicate the watch 10 times, my CPU hits 100%.

8
alexquez 9 hours ago 2 replies      
Yea, CSS animations are insanely CPU heavy. CodePen has to kill them after 5 seconds to keep the site responsive.
9
rthprog 11 hours ago 2 replies      
Very cool! Though the title isn't quite right - there definitely are images in the source, so it's not actually 'only HTML/CSS'
10
bvdbijl 8 hours ago 1 reply      
Shouldn't lines 217-221 have 12*3600 = 43200s instead of 86400? Because now it's a 24-hour watch. Fixed: http://codepen.io/anon/full/nieIh
11
martin-adams 8 hours ago 1 reply      
ummm, anyone else notice the time is wrong ... :)
12
dmitriy_q 12 hours ago 0 replies      
Cool thing!
27
What “Worse is Better vs The Right Thing” is really about yosefk.com
114 points by m_for_monkey  17 hours ago   41 comments top 15
1
cs702 12 hours ago 2 replies      
Great essay -- I agree with its main point: "worse" products triumph over "the right thing" when they are a better fit for the evolutionary and economic constraints imposed by an evolving competitive landscape.

Some examples:

* In the case of the rise of Unix, the market of the 1960's and 1970's valued simplicity and portability over "correct" design.

* In the case of the rise of the x86 architecture over the past three decades, the market valued compatibility and economies of scale over the simplicity and elegance of competing RISC architectures.

* In the case of the current rise of ARM architectures for mobile devices, today's market values simplicity and low-power consumption over compatibility with legacy x86 architectures.

2
j-g-faustus 1 hour ago 1 reply      
This actually reminds me of the Plato/Aristotle difference. Plato held that there was an ideal, perfect version of everything in a sort of Idea Heaven, and the goal of the philosopher was to get ever closer to understanding that ideal.

Aristotle, on the other hand, thought that Heaven was too remote, and held that we could learn more by measuring what we see in this world - as opposed to the presumably ideal, but inaccessible, concepts in Heaven.

The medieval church loved Plato; the scientific revolution loved Aristotle.

My point is that the difference between these two frameworks for interpreting the world seems to be fundamental. Fundamental in the sense that the distinction has been with us for at least a couple of millennia, and we are apparently not likely to agree on a single answer anytime soon.

3
loup-vaillant 13 hours ago 4 replies      
Linus' and Alan's quotes aren't incompatible. Actually, I think they're both true. Yes, massively parallel trial-and-error works wonders, but if you favour the first solutions, you'll often miss the best ones. Actually, effects such as being first to market, backward compatibility, or network effects often trump intrinsic quality by a wide margin. (Hence x86's dominance on the desktop.)

Yes, Worse is better than Dead. But the Right Thing dies because Worse is Better eats its lunch. Even when Worse actually becomes Better, that's because it has more resources to correct itself. Which is wasteful.

The only solution I can think of to solve this comes from the STEPS project, at http://vpri.org : extremely late binding. That is, postpone decisions as much as you can. When you uncover your early mistakes, you stand a chance at correcting them, and deploying the corrections.

Taking Wintel as an example, that could be done by abstracting away the hardware. Require programs to be shipped as some high-level bytecode that your OS can then compile, JIT, or whatever, depending on the best current solution. That makes your programs dependent on the OS, not on the hardware. Port the compiling stack of your OS, and you're done. If this were done, Intel wouldn't have wasted so many resources on its x86 architecture. It would have at least stripped the CISC compatibility layer over its underlying RISC design.

But of course, relying on programmers to hand-craft low-level assembly would (and did) make you ship faster systems, sooner.

4
gruseom 11 hours ago 5 replies      
I never really got the "Worse is Better" essay. It obviously doesn't mean what everyone says it means and what it does mean isn't clear. This post points some of that out. For example, Worse in the essay was associated with simplicity. But the classic examples of Worse triumphing in the marketplace (the OP cites x86 as an example) are anything but simple: they are hypercomplex. Not only that, their complexity is largely what makes them Worse. Simplicity is rather obviously Better, not Worse. Smalltalk (which the OP cites as Better) is far simpler than its more successful peers. The more you look at the original essay, the more its conceptual oppositions seem muddled and at odds with history.
I've concluded that it boils down to exactly one thing: its title. "Worse is Better" is a catchy label that touches on something important about technology and markets and means different things to different people.
5
jeffdavis 8 hours ago 1 reply      
I often think about software development in similar terms -- evolution versus intelligent design.

The weakness of evolution is that it takes millions of years, it's heavily dependent on initial conditions, there's lots of collateral damage, and most lines die out.

The weakness of intelligent design is that we're only so intelligent, which places a pretty low limit on the possible achievement. (And intelligence is generally regarded as close to a normal distribution, meaning that the smartest people can only handle a small multiple of the complexity of the average person).

Obviously, evolution and design need to be combined somewhat. The question is: how much of each, and at what times during a project? Do you spend 10% of the time quietly planning, 10% arguing with a small group of designers, and 80% trying things and trying to get feedback? Or is it more like 40%, 40%, and 20%? And how do you mix trying things with the designing things?

6
ScottBurson 9 hours ago 1 reply      
Interesting to note, in this connection, the rising popularity of Haskell, which is way off at the "Right Thing" end of the spectrum.

Maybe it is really possible to come up with the Right Thing eventually -- it just takes a lot of research.

7
sedachv 1 hour ago 0 replies      
Thank you Yossi for writing this piece. It's about time the Worse is Better argument was debunked. Worse isn't better; portable, free (libre, gratis, or at least really cheap) is better.

What many people forget is that during the time frame Worse is Better talks about, Lisp machines cost as much as two or more houses. You couldn't get a decent Lisp system on affordable hardware until MCL, and then you still needed a fairly high-end Mac to run it on.

OTOH, Unix and C-based software ran on a bunch of different machines, which you either already had or could acquire inexpensively. The software was easy to get and inexpensive as well. Then 4.3BSD and Linux came along, and you couldn't beat that on price.

8
drblast 12 hours ago 1 reply      
It's not too instructive to look back on things that occurred mostly due to happenstance and try to assign reasoning to it.

And it's a bit of a stretch to associate Linux with "Worse is Better." A major reason for using Linux in the early days was that it was the best alternative to Windows 95 because it got process isolation on x86 right.

9
DenisM 8 hours ago 1 reply      
My angle on the problem is the concept of "engineering debt": a well-designed product is the state of being "debt-free", and each deviation from good design adds a unit of "engineering debt". That debt will have to be serviced in the form of contortions you have to make to work around the design flaws, and then eventually paid down in the form of a rewrite, or discharged in an engineering bankruptcy (such as abandoning the product).

Engineering debt, much like financial debt, is an instrument one can use to trade some present expenditure for a larger future expenditure. Where one makes sense, so often does the other.

Sadly, engineering debt is much harder to account for. Old companies carry huge amounts of debt and are oftentimes oblivious to it.

I think we could advance the state of the art if we were to find a way to quantify engineering debt. As a starting point I suggest the ratio of line changes aimed at servicing the debt vs. line changes aimed at creating new features. If 100 lines of new functionality require 10 lines of base code changes, the debt is low. If the opposite is true, the debt is high. I believe such a metric could speak to both business managers and engineers, so it provides a good common ground for the two groups to reach consensus and prioritize work.
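
As a rough illustration of that starting point, here is a tiny Python sketch (the labels and numbers are invented; real data would have to come from categorising actual commits):

  # Each entry: lines changed in a commit, labelled by whether the work
  # created a new feature or merely serviced existing design debt.
  commits = [
      {'kind': 'feature',   'lines': 100},
      {'kind': 'servicing', 'lines': 10},
      {'kind': 'feature',   'lines': 40},
      {'kind': 'servicing', 'lines': 80},
  ]

  servicing = sum(c['lines'] for c in commits if c['kind'] == 'servicing')
  feature = sum(c['lines'] for c in commits if c['kind'] == 'feature')
  print(servicing / feature)  # low ratio -> low engineering debt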

10
gajomi 4 hours ago 0 replies      
An enjoyable and stimulating read. The original essay, by virtue of a few semantic ambiguities (what is "simple" anyway?), is apt to invite this sort of commentary. If I have read this correctly, the author eventually agrees that worse really is better, with the clarification of what this means outlined in the first part of the essay.

However, I was hoping to see a deeper analysis of how the nature of the evolutionary pressure in his domain contributed to the worse-is-better effect (I am an evolutionary biologist, so I find this kind of thing interesting). For example, if the "product" in question were a mathematical concept of interest to professional mathematicians, there would almost certainly be a niche space in which versions of the concept exhibiting "consistency, completeness, correctness" would dominate over the competition. For mathematicians, consistency and correctness are strongly selected for (completeness, broadly defined, is usually much harder to obtain). For the average iPhone app, these things still matter, but in a very indirect sense. They get convolved (or low-passed, as Alan Kay describes) with other concerns about shipping dates and usability and so on. I would be interested to see a classification of different domains in which "worse is better" and "the right thing" philosophies dominate, and those in which they are represented in roughly equal proportions.

11
PaulHoule 15 hours ago 0 replies      
I try really hard to not take a left vs right view in software design.

I sometimes build systems that are overengineered and I sometimes build systems that are underengineered.

I do believe that every line of code, every function, every class, every comment, every test, everything, is like a puppy that you have to take care of.

If a team adds a "dependency injection framework" that adds a parameter to each and every constructor in a system that has 800 classes, that's a real cost that's going to make doing anything with that system harder.

I'm a big believer in "cogs bad" because I've seen large teams live the lesson.

From my viewpoint the perfect system is as simple as possible, but well engineered.

12
kemiller 8 hours ago 0 replies      
Another way to put this is that myopically optimizing for perfection along one axis may fatally de-optimize another.
13
charlieflowers 4 hours ago 0 replies      
I think the lesson is, "whatever is available tends to propagate, even if it is shit. Especially if it is for a large mass of humans, who tend to act stupidly en masse."

You can see it all over the place. How great of an Internet provider was AOL, for example?

14
direllama 11 hours ago 2 replies      
I just don't think "simple" is quite the right word for Worse is Better.

"... It's called Accessibility, and it's the most important thing in the computing world.

The. Most. Important. Thing.

..."

https://plus.google.com/112678702228711889851/posts/eVeouesv...

15
dreamdu5t 12 hours ago 1 reply      
There are so many generalizations in this article I don't know where to begin...
28
Does it make you win? kippt.com
32 points by enra  9 hours ago   8 comments top 5
1
csallen 8 hours ago 1 reply      
Another way to think of this is in terms of opportunity costs. Every day you spend working on A is a day you're not working on B, C, D, etc. So, assuming you have a list of ideas that includes some game winners, you'd be remiss to spend time on other things. Even if you don't have any game winners, you have to measure the cost of working on mediocre ideas vs brainstorming for good ideas.
2
yessql 1 hour ago 0 replies      
I wish I had read this and thought about it a week ago, before I decided to teach myself golang, which ate up my free time. Is learning an experimental unpopular language going to help me win? Fat chance.
3
Swizec 8 hours ago 1 reply      
That's a solid and simple piece of advice.

And coincidentally I just discovered Kippt because of this and it looks interesting. Will be checking it out.

4
nanijoe 4 hours ago 0 replies      
It all sounds good in theory, but if we all KNEW what makes us win, we would all just do it and be super successful, right?
So maybe someone first needs to come up with a formula for how to know what makes you win.
5
tarr11 6 hours ago 1 reply      
Working solely on what makes you win (always) seems to be at odds with 20% time, or "slack" (where you focus on whatever you want, regardless of priority)

Curious as to how you reconcile those things.

29
Why do successful tech companies fail so often? thestar.com
11 points by jamesbritt  2 hours ago   12 comments top 8
1
fleitz 54 minutes ago 0 replies      
Because tech companies don't get bailed out to the tune of trillions of dollars, or really any other populist rhetoric.

This article isn't data, it's anecdotes; there's no serious comparison of tech companies vs. everyone else.

Also, it focuses exclusively on public companies. Software companies aren't steel manufacturers; they don't need billions in public money to create profitable businesses. Software is for the most part a cottage industry, e.g. Instagram (yes, it's $1 billion, but it's 11 people).

If you look at any of the players involved in the article and examine that case in depth, it has a lot more to do with obvious mismanagement than with being a tech company.

Q: What killed HP? Carly Fiorina.

Q: What killed RIM? Two CEOs and three CFOs.

Facebook is hardly dead - it's got $10 billion in the bank - and the largest company in the world is a tech company.

2
SatvikBeri 1 hour ago 1 reply      
The ideas in this article are weakly argued and poorly researched. The theory of disruptive innovations in The Innovator's Dilemma by Clayton Christensen provides a much better explanation of why successful companies fail. We see this happen more rapidly in the tech industry than in others, but it happens everywhere - just at different speeds.

Looking at the graphs at the top - Hewlett-Packard, Nokia, and RIM all took heavy hits from not recognizing disruptive innovations until it was too late, and lost a huge chunk of their market share as a result.

3
Gustomaximus 1 hour ago 1 reply      
I read a great book on this topic: "In Search of Stupidity: Over 20 Years of High-Tech Marketing Disasters"

While there are many lessons in there, one theme stood out for me. Companies grow by listening to their users and providing what they want. They then fail when managers start to believe the product's success is due to their own brilliance, so they stop listening and start telling the customers what they want.

4
dennisgorelik 1 hour ago 0 replies      
Beautiful ending:
“Growth for its own sake is the logic of the cancer cell.”
5
molecule 33 minutes ago 0 replies      
Tech is an industry where success is based on disruption, and eventually the successful become the disrupted.
6
madrona 36 minutes ago 0 replies      
I don't know why this article says that Microsoft was 5 years late to the web search party. Microsoft had a search engine in the late 90's, MSN Search.
7
sliverstorm 1 hour ago 2 replies      
My pet theory is simply that the technology sector moves much faster than anything in the past. Historically, companies rise and fall all the time- so then, it shouldn't be surprising if companies rise and fall quickly in tech.
8
chasm 39 minutes ago 0 replies      
Because they lose their entrepreneurial edge, hire managers vs. leaders, and focus mostly on generating installed-base revenues vs. competing for and winning new business.
30
Working at big software companies sacaluta.com
57 points by rdcastro  11 hours ago   16 comments top 7
1
bradly 10 hours ago 1 reply      
After working for various start-ups and freelancing for most of my career, I took a full-time contract at Intuit (TurboTax/Quicken) about a year ago. I was very surprised how much I enjoyed the work there. The start-ups I worked at were never huge successes, so getting the chance to work on a Rails app that gets hundreds of thousands of requests per minute (it's embedded in TurboTax) was something new for me and provided a different set of challenges.

I've also found that large companies (or at least Intuit) are very open to progression and innovation. When I started, our team was on Ruby 1.8.7 and Rails 2.3.5, in a physical data center, used Perforce, and a bunch of other "enterprise" software suites. Now we are cranking on the latest versions of Ruby and Rails, GitHub, AWS, and Campfire.

It has been a much more rewarding experience than I would have thought. Oh and you have no idea how good it feels to get real, publicly traded stock :)

2
dotmanish 10 hours ago 3 replies      
The "You're Not Working With The Owner" aspect of a big company is usually the most underrated reason why some companies succeed in hiring and retaining top performers, and some don't.

The immediate manager is a major reason, from what I've seen, for why people leave or love their jobs. The manager may not even share the same ethics that need to flow down in the company from the founders/owners, and may even work towards ensuring her/his own appraisal success. This is why maintaining the big company's culture is such a tough task, and one bad hire at a senior level can slowly destroy major chunks of it.

3
btilly 8 hours ago 0 replies      
Another phenomenon that exists in all companies, but that you really see in big ones, is secretiveness about how people are actually being rewarded. This has two purposes. The first is that it allows the company to reward certain employees (particularly top managers) while heading off jealousy among others. The second is that it allows them to lie to people about how well they are compensated versus others.
4
stcredzero 9 hours ago 1 reply      
> Sometimes some compromises will be made in order to keep the team "stable", so the business can keep going.

The thing to remember about big companies is that they've already figured out some formula to approximately "print money." (Not really in all cases; what I mean more seriously and generally is just that they already know how to make a profit.) So not rocking the boat will be seen by many there as the rational thing to do.

5
paganel 9 hours ago 2 replies      
I wouldn't want to work for an entity that implements "decision processes"; I'd much rather work for a (small) company that takes decisions. It's that simple.

Also, even though I'm already in my early 30s, the word "career" scares me. I'd rather build things.

6
sseveran 48 minutes ago 0 replies      
I worked at a big software company and will never go back. Aspiring to mediocrity is the order of the day. I should never have gone to a Fortune 500, and I never will again. The culture is corrosive.
7
neil_s 3 hours ago 1 reply      
Interesting viewpoint, and timely, as I am interning at a big bank that has a tech division and trying to decide whether this is the kind of firm I want to work for.

Also, @rdcastro, you're looking for segue, not Segway.

       cached 13 August 2012 04:02:01 GMT