Hacker News with inline top comments - 30 May 2011
NVIDIA's quad-core Kal-El demos next-gen mobile graphics, blows minds (w/ video) engadget.com
51 points by evangineer  2 hours ago   12 comments top 5
gvb 1 hour ago 1 reply      
That means the simulations we're watching require a full quartet of processing cores on top of the 12-core GPU NVIDIA has in Kal-El. Mind-boggling stuff.

Mind boggling indeed, but totally absent in the text and demo was any mention of power usage. Four cores running at 75% utilization plus 12 GPU cores is going to suck down a tablet battery in a hurry.

thret 41 minutes ago 0 replies      
In case anyone is wondering, Kal-El is Superman's Kryptonian name.
UtestMe 1 hour ago 1 reply      
I bet Google will come up with Chrome for phones and let Nvidia fully run on its new OS. I can foresee Android killing more than 2 cores out of Kal-El's 4...
nagnatron 47 minutes ago 0 replies      
Does it run Crysis?
nightlifelover 2 hours ago 1 reply      
Why did Apple break up with Nvidia..? Now they have to face the competition.
Visualisation of Machine Learning Algorithms epfl.ch
12 points by maurits  1 hour ago   1 comment top
vinyl 24 minutes ago 0 replies      
Wow. This could prove an invaluable tool for teaching. Great UI (love the way you can build the dataset with a paintbrush), top-quality visualization, wide choice of algorithms... Much better than R scripts if you want to show basic algorithms at work to a bunch of students. Many thanks
Asus Unveils Padfone Hybrid Android Smartphone-Tablet hothardware.com
8 points by bigwophh  49 minutes ago   1 comment top
borism 7 minutes ago 0 replies      
Phones are merging more and more with PCs.

I've made a promise to myself that HDMI-out is a must for my next Android phone (currently I have an HTC EVO 4G, which has one, but it is unusable as a phone since I'm in Europe).

This is even better, at least on paper.

The Architecture of Open Source Applications: LLVM aosabook.org
128 points by ryannielsen  8 hours ago   11 comments top 5
yuvadam 2 hours ago 0 replies      
I've been hearing about LLVM for quite some time, but never fully grasped what it is. This article tackles everything you should know about LLVM. I'm finally starting to get it.


obiterdictum 3 hours ago 2 replies      
I've never built a compiler, but how well does having a common optimizer for all platforms work in practice? (11.5.1) Does it cause a "lowest common denominator" effect, making the code take less advantage of the target architecture than an architecture- or vendor-specific compiler would?

I'd love someone with the experience of building compilers to chime in.

nhaehnle 4 minutes ago 0 replies      
To anybody curious about how LLVM looks on the inside, I recommend reading their excellent tutorial on building a front-end, which you can find here: http://llvm.org/docs/tutorial/
forgottenpaswrd 5 hours ago 0 replies      

Very good article. I started liking LLVM some time ago, and now I will use it in my programs; it seems really easy to do. I agree so much about the gcc design flaws (which forced me to create my own parsers and code generators).

henning 3 hours ago 0 replies      
As this essay discusses, GCC has serious technical flaws. LLVM is one of the best examples I've ever seen of "the best way to complain is to make things." That and the iPhone.
Germany to be nuclear-free by 2022 reuters.com
46 points by extension  2 hours ago   67 comments top 13
henrikschroder 1 hour ago 7 replies      
Sweden held a referendum in 1980 about nuclear power, and the only choice was about how fast our nuclear power plants should be shut down.

It took twenty years until a reactor was closed, and five more years until one whole plant was closed.

During this time, Sweden's energy consumption has of course gone up, and although there's been some increase in wind and water power, the net result is that each time we close a nuclear reactor, we have to import a lot more electricity from Danish or German coal-powered plants.

So the net result for the environment is negative, but the environmentalist movement thinks it's a huge win.

I'm willing to wager that this German proposal will have the exact same end result. The only way to successfully switch to renewable energy is by making renewable energy cheaper than coal or nuclear.

dexen 1 hour ago 2 replies      
Closing nuclear powerplants early makes for a self-fulfilling prophecy as well, unfortunately.

The only solid argument against nuclear goes along the lines of, ``nuclear power is too expensive because of the costs of disposal of wastes''.

Thing is, in normal operation the cost is amortized year by year. The normal life of a nuclear plant is a few decades; say 30 years. It could be longer, but technological progress is so fast that it just makes sense to replace hardware before it's completely worn down.

But it becomes a hefty one-time fee if the plants are closed mid-life. The overall costs barely go down. The difference in the amount of waste is small -- because a large portion of nuclear waste stems from decommissioning the plant itself. The spent fuel itself is not that much. [1]

And the headlines in press go, ``See? It's too expensive''.


[1] I recently toured a nuclear power plant in Greifswald, Germany, that is undergoing decommissioning right now. Most of the waste is NOT the fuel, but the infrastructure of the plant (granted, it was built with old technology).

latch 1 hour ago 3 replies      
Germany is a leader in solar power. They installed more panels last year than the rest of the world combined. They generate more electricity from solar than Fukushima did. If they can further establish themselves as leaders and experts in new energy, it'll make them even more disproportionately well positioned relative to most of Europe.

However, it's a shame they see no future in nuclear. It seems like they are betting against something we've just scratched the surface of (my money is on thorium)... for the wrong reasons. The damage from the oil and coal that they burn is far worse.

andrest 1 hour ago 1 reply      
Throughout the whole article, there's no mention of any specific alternatives to be used. 23% is a big amount to make up for.

A relevant TED talk by Bill Gates (Bill Gates on energy: Innovating to zero): http://www.ted.com/talks/bill_gates.html

In short, the most viable option currently, in his opinion, is the thorium reactors. Current uranium reactors can be converted to work with thorium.

"The main point of using thorium, in addition to the proliferation issues with uranium, is that there is a 10-fold amount of it available compared to uranium. If you take into account also the fact that we only use uranium-235 in our nuclear reactors, and this constitutes only 0.7% of the total amount of uranium, the increase is 100-fold.

Thorium reactors also operate by burning uranium. This is created from thorium by bombarding it with neutrons. This forms uranium-232, which is highly radioactive and is hence hard to deal with. This is why U-232 can't be used for nuclear weapons; it's hard to handle."

Derbasti 1 hour ago 1 reply      
I guess it is pretty obvious that a non-nuclear energy strategy will definitely result in higher energy prices in the short term. That said, some people might be comfortable with this as long as the price increase is moderate.

In the long run however, renewable energy sources are absolutely the way to go. It might actually pay off nicely to invest a lot of resources in renewable energy early on and come out ahead of the pack once coal, uranium, gas and oil become more expensive.

Whether or not Germany will actually be able to carry this off and reap the benefits remains to be seen, though.

zhoutong 50 minutes ago 3 replies      
That's more than 11 years to complete this task. Who can predict what will happen after 11 years? Generating electricity from nuclear fusion may be possible. Or we may move our nuclear plants to the moon.

We shouldn't ignore the fact that 70% of France's energy consumption is backed by nuclear plants, and this makes France one of the cleanest countries in the world in terms of energy consumption.

I just don't understand why people are so hesitant to invest in nuclear energy, which is extremely cheap, infinitely available, non-polluting and moderately safe (burning coals may emit radioactive materials directly to the air).

Nuclear energy won't kill thousands of people underground (like coal), nor will it kill tens of thousands of birds (like windmills). It also doesn't trap heat (like solar panels do, retaining all the heat) or emit greenhouse gases.

It seems that nuclear energy is an ideal energy source for the future. We should spend more time and money to find out better ways (like fusion) to build nuclear plants instead of worrying about the rare accidents (in terms of death per watt, I don't think nuclear energy is going to lose out).

blumentopf 40 minutes ago 1 reply      
Debunking myths:

"This won't have consequences, it didn't have any in Sweden either."

=> After the nuclear phase-out was enacted in 2000, two power plants were shut down as planned (Stade 2003, Obrigheim 2005).

=> After the Fukushima incident, eight additional plants were shut down and will never return to the grid again.

"This just leads to increasing energy imports."

=> Germany exported 9 billion kWh in 1Q2010 and can hands-down afford shutting down nuclear power plants. Source:

erikb 45 minutes ago 1 reply      
I'm a German and I myself think it is really stupid. The way to think should not be "Don't do nuclear power anymore" but "do X". We have no plan for what X is, so we haven't solved any problem. It's so frustrating. We really had a chance with the strong movement from most people here.
bluedanieru 1 hour ago 2 replies      
Meanwhile Japan itself, which has weathered the effects of several nuclear disasters (Fukushima being the mildest), is doing no such thing.

And apparently Germany will continue to import power derived from nuclear plants.

mrich 1 hour ago 1 reply      
Let's see how long this decision holds up. In 2000 they decided to switch off old plants by 2011 (labor + green party). This was annulled at the end of 2010 by the conservatives. Only because of Fukushima (and pending elections) did they annul this again. The green party has been winning in the elections since Fukushima, and the ruling conservatives have lost ground. This is their opportunistic way of trying to avoid becoming meaningless.

In 5 years, energy prices will be high, Fukushima will be forgotten, and new reactor designs may be available (pushed by the Chinese, who will have to depend on nuclear?). Let's see what happens then.

sapper2 1 hour ago 3 replies      
Good move aka burning bridges. If managed right, this will enhance the development and use of better technologies.
sktrdie 20 minutes ago 0 replies      
Italy has always been nuclear free because it couldn't afford it. I guess sometimes it pays off being economically weaker.
10 Oddities about JavaScript smashingmagazine.com
5 points by fogus  25 minutes ago   discuss
Higher-Dimensional Type Theory existentialtype.wordpress.com
15 points by fogus  2 hours ago   discuss
Exceptions are Bad atlassian.com
23 points by chrisbroadfoot  3 hours ago   10 comments top 2
dexen 1 hour ago 3 replies      
Let me get this straight: he's doing a full circle -- without examining how the current situation came to be.

At some point we went for exceptions, because (warning: big simplification ahead!) library authors figured they couldn't trust client code to always check return values for reported error conditions. We got automatic, forced checking injected by the compiler -- `exceptions'. With all the pains of fugly syntax in certain languages, because in many projects the trade-off makes sense. With the reasonable default for an unexpected and unhandled condition: the error is logged and the process is stopped.

Now the proposal -- `Let's have [[EDIT]] a tagged union: Either<error_code, return_value>' -- takes us back to where we started: client code is free to not check for errors. The process will not stop or even print a backtrace if the programmer forgets to check; the process will happily go on with invalid / unsupported / mangled / whatever data. No automatic propagation of the error down the stack to an outer handler in our function's caller. No stack unwind, no automatic object destruction. No hierarchy of error types (no specific errors derived from general ones). No nothing. And no discussion of those matters.

Oh, my. Reliability in software has just gotten a new name.

EDIT: fix'd mistake pointed out by tianyicui & thesz. Thanks!
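A minimal Python sketch of the failure mode dexen describes (my own illustration with hypothetical names, not code from the article): an Either-style return reports the error as just another value, so nothing forces the caller to inspect it, whereas an unhandled exception unwinds the stack and stops the process by default.

```python
from dataclasses import dataclass
from typing import Generic, TypeVar, Union

T = TypeVar("T")
E = TypeVar("E")

@dataclass
class Ok(Generic[T]):
    value: T

@dataclass
class Err(Generic[E]):
    error: E

def parse_port(s: str) -> Union[Ok[int], Err[str]]:
    """Either-style: the error is just another return value."""
    if s.isdigit() and 0 < int(s) < 65536:
        return Ok(int(s))
    return Err("invalid port: " + s)

def parse_port_throwing(s: str) -> int:
    """Exception-style: an ignored error unwinds the stack by default."""
    if s.isdigit() and 0 < int(s) < 65536:
        return int(s)
    raise ValueError("invalid port: " + s)

# A careless caller can ignore the Err branch entirely and carry the
# bogus result onward; the failure surfaces far from its cause, if ever.
result = parse_port("not-a-port")   # an Err, silently accepted
```

With the throwing variant, the same careless caller gets a backtrace at the call site instead of mangled data later; that trade-off is the comment's whole point.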

Strilanc 1 hour ago 0 replies      
I don't agree that returning an Either is better than returning/throwing. It depends on what you do in the error case. If you try to recover, then Either is better. If you fail fast, then throwing is better.

The main problem with checked exceptions, at least in Java, is the clumsiness of doing things right. Ideally every function would expose exceptions corresponding to useful error cases. Instead, because there's so much boilerplate involved in creating new exception types and wrapping exceptions as the proper type, programmers are more likely to swallow or blindly propagate.

I believe making checked exceptions significantly better is as easy as adding two features to Java:

- Easy wrapping. 'wraps Type1 as Type2' for try blocks and function signatures.
- Easy inline defining of new exceptions. 'throws forge ParityMismatch extends IllegalArgumentException'

Hidden device distorts news on wireless networks (with technical details) hackaday.com
44 points by szx  5 hours ago   5 comments top 3
JonnieCache 1 hour ago 0 replies      
I'm enjoying one of the comments on HaD, along the lines of "Let's configure it to tell the truth."

That's probably the most disruptive way you could use the thing.

Also, cross-injecting articles from sources of opposing bias could be a winner as well.

EDIT: Direct link to (very elaborate) build guide: http://newstweek.com/howto

andrest 25 minutes ago 0 replies      
There's no mention of encrypted networks, so I'm assuming it only works on open, non-password-protected networks?
mikeknoop 3 hours ago 2 replies      
Can anyone comment on the legality of packet manipulation? What if you own the source providing the internet connection?
Is There a Social Media Tech Bubble? [infographic] mashable.com
4 points by instakill  43 minutes ago   discuss
Julius Caesar's Last Breath berkeley.edu
110 points by signa11  10 hours ago   15 comments top 6
hugh3 10 hours ago 3 replies      
That's great, but one problem: molecules don't stay molecules over two thousand years. Certainly not oxygen, which is extremely chemically active. N2 somewhat less so (but http://en.wikipedia.org/wiki/Nitrogen_cycle), but the chances of any particular nitrogen molecule retaining its identity over two thousand years are incredibly low.

Redo this calculation with atoms, and I might believe you. But it'll need to be a bit more complicated, since I don't think the amount of oxygen and nitrogen getting sequestered in the ocean or the soil is actually "trivial" as stated.

MBlume 7 hours ago 0 replies      
This is a really cute little Fermi calculation, and so I hesitate to say this...

(I also hesitate because it will kick off a storm of People on the Internet Arguing about Physics...)

but this is just wrong. It is irreparably wrong. The idea that a given oxygen molecule (or an oxygen atom, or an electron) in BCE 44 can be identified with an oxygen molecule (etc.) in the present day runs fundamentally counter to the way the universe works.

Put it this way. In Python, we have mutable variables, which have identity, so we can say:

>>> a = b = []
>>> c = []
>>> a is b
True
>>> a is c
False
>>> a.append(5)
>>> a, b, c
([5], [5], [])
Starting out, a and b are the same empty list, and c is a different empty list. Naively, it seems that we could say the same of particles or atoms. That though we couldn't see it, or hope to trace its history, there existed some electron in 44 BCE that "was the same electron as" some electron today. But that is not how the universe is implemented. Every electron is the same as every other electron. Think immutable, not mutable variables. The state in 44 BCE is not "electron #4892489 is here, and electron #4892490 is there", it is "there exist electrons here, here, here (etc.)" (and they have thus-and-such spins, momenta, etc.)

Edit: http://lesswrong.com/lw/pl/no_individual_particles/

raquo 45 minutes ago 0 replies      
This assumption alone keeps OP's argument from folding:

> To determine the probability of not just one thing but of a whole bunch of things that are causally unconnected happening together, we multiply the individual probabilities

You can multiply the probabilities of individual events only if you know they are independent, i.e. their correlation is zero (as opposed to anything involving causation). I am not convinced that molecules that were once in one place would get distributed to zero correlation even after 2000+ years. Well, this at least requires an analysis of its own.
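raquo's caveat in one toy example (my own illustration, not from the thread): take a single fair coin, with event A = "heads" and event B = "tails". Each has probability 1/2, but they are maximally dependent, so multiplying the marginals gives nonsense:

```python
from fractions import Fraction

p = {"H": Fraction(1, 2), "T": Fraction(1, 2)}  # one fair coin flip

A = {"H"}   # event: heads
B = {"T"}   # event: tails -- perfectly anti-correlated with A

p_a = sum(p[o] for o in A)
p_b = sum(p[o] for o in B)
p_joint = sum(p[o] for o in A & B)   # the intersection is empty

print(p_a * p_b)   # 1/4 -- what blind multiplication claims
print(p_joint)     # 0   -- the true joint probability
```

The breath argument needs the molecules' locations to be (approximately) independent; whether 2,000 years of atmospheric mixing justifies that assumption is exactly what raquo is questioning.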

huhtenberg 9 hours ago 0 replies      
Anyone liking this sort of thing should have a look at A Mathematician's Miscellany by Littlewood [1]. It is an exceptionally enjoyable collection of mathematical anecdotes and such. It covers the Caesar's Last Breath topic, but it also goes over the probabilities of highly unlikely events, such as an upright drumstick not falling over during a long train ride.

[1] http://www.archive.org/details/mathematiciansmi033496mbp

dpbrane 9 hours ago 1 reply      
What is more mathematically interesting about this is that if the "assumptions" are off by a factor of 10 or so (say, the atmosphere actually has more like 10^45 molecules instead of 10^44, and a breath contains 1x10^22 molecules, not 2x10^22), the result is reversed:

(1 - 10^-23)^(10^22) ~ e^(-10^-23 x 10^22) = e^(-0.1) ~ 0.9

-> 90% chance that any given breath contains none of Caesar's last.
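dpbrane's arithmetic checks out numerically; here is a quick sketch (my own, using the tweaked figures above). Note that (1 - 10^-23) rounds to 1.0 in double precision, so the power is computed via log1p:

```python
import math

p = 1e-23   # chance a given inhaled molecule is one of Caesar's (tweaked figure)
n = 1e22    # molecules per breath (tweaked figure)

# (1-p)^n = exp(n * log(1-p)); log1p keeps precision for tiny p,
# where computing (1 - p) directly would lose the p entirely.
prob_none = math.exp(n * math.log1p(-p))
print(round(prob_none, 3))         # 0.905: ~90% chance of no Caesar molecule
print(round(math.exp(-p * n), 3))  # 0.905: the e^(-np) approximation agrees
```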

lurker19 9 hours ago 0 replies      
This puzzle was printed in a college application brochure for (I think it was) Princeton in the 1990s. It was an example of the stimulating and irreverent academic culture of the university, or something like that.
EveryJS: The Right Tool for the Right Job everyjs.com
90 points by bradly  11 hours ago   30 comments top 12
thaumaturgy 9 hours ago 4 replies      
Error: Object.create is not a function

Source File: http://www.everyjs.com/js/libs/sproutcore-2.0.a.2.min.js

Line: 9

(Page doesn't list anything for me. Firefox 3.6.15/Mac.)

justincormack 3 hours ago 0 replies      
"Just 2k", "Just 3k"... Then "Just 295k" seems rather out of place. Maybe selectively replace "Just" with a different word here?
zmmmmm 7 hours ago 0 replies      
Since YUI3 has no recommended usage scenario, I would submit for consideration:

Use YUI3 if you want a comprehensive framework that includes DOM manipulation and event handling along with a highly consistent set of widgets, layouts and utility components all in one package.

At least, this is when I choose YUI over arguably lighter / smaller / simpler solutions (like jQuery).

zapf 6 hours ago 1 reply      
Why are you using SproutCore for a static, plain ol' HTML page? There is no need to use much JS on such a page, is there? And then you end up breaking the page on old browsers. What's the point? Are you using the right tools for the right job?
skrebbel 7 hours ago 2 replies      
Nothing here either (Opera 11, Windows). Seriously guys, Sproutcore is so horribly not-cross-browser that it's a joke. Please stop making sites with it.
duopixel 10 hours ago 1 reply      
Good resource! Just a little fix: Dojo is linked to backbone.js.
glenjamin 5 hours ago 0 replies      
Is this intended purely as libraries to use in the browser? JavaScript doesn't just mean web anymore - but if this is aimed at web it might be worth saying so explicitly.
smhinsey 7 hours ago 0 replies      
Ender looks really interesting but I feel like it's missing the example that really shows the power behind it. I'm not sure I'm seeing it right now. Anyone have/know of such an example?
ashish01 6 hours ago 0 replies      
Also take a look at Knockout.js (http://knockoutjs.com/)

It's a lightweight MVVM framework that brings WPF-like data binding to HTML.

See some live examples at : http://knockoutjs.com/examples/

and I ported backbone.js todo example to knockout over here : https://github.com/ashish01/knockoutjs-todos

jschuur 9 hours ago 1 reply      
Similar to, but with less content than, http://microjs.com, which is credited in the footer.

I prefer EveryJS's presentation though, since it's easier to read the description.

chrischen 7 hours ago 0 replies      
Don't forget NowJS. http://nowjs.com/
drewda 11 hours ago 1 reply      
A ruby-toolbox.com for JavaScript would be useful, but it'll need a bit more organization than this.
Designing Incentives for Crowdsourcing Workers crowdflower.com
6 points by wslh  1 hour ago   discuss
Introducing the Startup Genome Project maxmarmer.com
22 points by miraj  5 hours ago   discuss
Structure and Interpretation of Computer Programs (lectures) berkeley.edu
31 points by helwr  7 hours ago   3 comments top 2
swombat 2 hours ago 1 reply      
SICP is such an incredibly clear book, I've never really understood the need for the lectures - but then again, different people have different learning styles. I guess some people prefer to sit through X hours of lectures than to read through Y hours of a book...
philipDS 2 hours ago 0 replies      
There are some webcasts for these available too.


Don't know if they run in parallel completely, but could be handy.

Shell Programming for Beginners ontwik.com
5 points by ahmicro  1 hour ago   discuss
Importance of Side Projects rawsyntax.com
107 points by rawsyntax  13 hours ago   44 comments top 13
cletus 11 hours ago 5 replies      
I have mixed feelings about side projects. Sure it sounds good and looks good but let me tell you my own example.

I am a single core processor.

I am pretty much capable of doing just one thing at a time. I've tried to multi-task to the point of switching off from what I'm doing at work to something else, but it just doesn't work. One or both of them suffer.

I tend to be engrossed in what I'm doing at work. If I'm bored with what I'm doing I'm in the wrong job and that just doesn't last long before I throw in the towel.

Sometimes I'll work on that problem more. Sometimes I'll just do research/reading on that or related topics.

But I just don't really have the knack of switching off from solving that problem and picking up something else with equal vigour.

I have the same issue when it comes to learning new programming languages. I tend to have limited success when doing it on my own. Where I succeed is when I HAVE TO learn a new language, typically because that's now my job.

It reminds me of the scene from Swordfish where, as Travolta and Jackman are being chased in the car by a hit team, Travolta hands over the wheel to man the machine gun. Jackman says "I don't know how to drive this." "Learn!" was the response.

I get the feeling that the US (I'm Australian) has cultural differences that come into play here. For example, the education system in the US seems to have a very strong focus on extracurricular activities, something that doesn't seem to exist (to anywhere near that degree) in Australia. This includes sports, social clubs, community service and so on.

So much so that it can be an important part of getting into the right college and then the right graduate school (Australian universities, at least when I got my degree, typically just looked at your Tertiary Entrance Score and that's about it; in fact the whole system was AUTOMATED on that). The TES being a scaled combination of exam and coursework.

I wonder if there is a culture of multitasking because of this?

Whatever the case... I am a single core processor.

barrkel 11 hours ago 5 replies      
Per my contract, every creative thought and act I perform while employed belongs to my employer. My contract could be interpreted to mean that even my comments here are owned by my employer. Rather demotivating for genuine side-projects.
JoeAltmaier 41 minutes ago 0 replies      
My side project turned into a consulting company and fed my family for 4 years. Now I'm on my own again, still consulting. It's kind of like every project is a side project - they change from month to month, with lots of new development.

Now my main project gets to be my family, which is really a great life. Envy me.

pnathan 11 hours ago 0 replies      
I see side projects as a key indicator of someone who is focused on becoming better, instead of just paying the bills.
jswinghammer 11 hours ago 1 reply      
I've found that most of my coworkers over the years don't really react very well to the idea of me having a side project -- unless they have one themselves. I think they believe it's pretty strange to make something not work-related.

I'm not sure I've worked with many programmers at all who read on the side let alone make much of anything. Maybe my experience is just weird.

Sandman 5 hours ago 0 replies      
I can definitely relate to the part that talks about analysis paralysis. When starting a new project, like everybody else, I'm usually faced with several different options on how I'm going to design it, which libraries I'm going to use, what the overall architecture is going to look like, what features I'll implement and so on. And very often, I get stuck deciding between two options, weighing which one is better, which one might prove to be a bad decision in the future and which one might make my code more flexible/inflexible to change and refactoring.

But then I slap myself (figuratively, of course) and remember to "just do it". To just release any kind of working code, never mind if it's a suboptimal design, never mind if there are a few bugs or if it's slower than it could be. The important thing is to release anything. The code can be improved upon after that.

Swizec 12 hours ago 1 reply      
Balance, though, is key. I've never quite managed to figure out how to make sure an exciting side project doesn't consume my life leaving everything else to rot, or conversely, how to keep a side project sufficiently alive when swamped with more important things.
bsmith 10 hours ago 0 replies      
I recently applied to a PHP dev position. I told the CEO that if hired I would still be working on a side project or two...didn't get the job. Obviously not the kind of place I want to be working, anyway.
dkersten 12 hours ago 0 replies      
My latest side project just attracted attention in a big way and it looks likely that we (myself and my brother; we worked on it together) will be founding a startup around it soon. We started the project for fun and all we expected out of it was some cool toys to play with, a little "oh thats cool" publicity and a fun learning experience. Its been a crazy past few weeks.

So, yes, side projects are very important!

ww520 10 hours ago 1 reply      
Side projects are great. They are a great motivator for learning new stuff. There's only so far you can go with tutorials or sample code. You have to do it in a serious enough project to learn the nitty-gritty details.

Some of them will die away but some will stick. Here are my recent, more successful side projects.



Android game Starxscape.

exratione 12 hours ago 3 replies      
Seems to be a common theme around here:


"It seems strange to me that there exist developers who do not have side-projects: exploratory exercises in coding, tinkering, and scratching personal itches that run on the weekends, or here and there in the evenings as a replacement for mindlessly consuming mass-market entertainment. Do these people not enjoy their chosen profession?"

rawsyntax 12 hours ago 1 reply      
I'm the author, will answer any questions / criticisms of the post. Thanks for the upvotes
coenhyde 10 hours ago 0 replies      
I like side projects. I have a few myself, for many of the reasons listed here. In fact, I'm much more inclined to hire someone who has side projects. It shows they have a passion for programming/creating things.
Reverse Engineering Firmware : Linksys WAG120N devttys0.com
81 points by peppaayaa  12 hours ago   17 comments top 3
JoeAltmaier 48 minutes ago 1 reply      
I use GOLEM, a disassembler that allows annotating the resulting source (adding labels and comments) and re-running the disassembler for ever-more-informative disassembly. It is also script-based, so it can disassemble any machine code (if you are patient enough to write up all the patterns). It can also recognize complex sequences and produce pseudo-instructions, e.g. Loop, TestAndBranch, etc.

Anyway, not really related, this article is about detecting linux's signature and identifying the kernel, but it brings back old times.

boredguy8 9 hours ago 5 replies      
Relatedly: is there a good home wireless router on the market today? Hardware & firmware revs have made googling for an answer somewhat difficult: I figure here I'm likely to find a good replacement for my less-than-stellar WRT54G.
leon_ 6 hours ago 0 replies      
This is really neat.

I'd be also interested in the reverse: modding the firmware, packing it up and flashing it onto the device.

Cookiejacking: 0-day exploit of all Internet Explorer versions google.com
159 points by jpadvo  18 hours ago   27 comments top 5
tptacek 17 hours ago 3 replies      
I mean, this is clever and all, but what am I missing? Isn't this just an IE bug? That you can access cookie files as IFRAME targets? Is there some part of the IE architecture that depends on that functionality, or is Microsoft just going to patch that?

Because pretty much all the browsers, on a better-than-quarterly basis, fall victim to attacks that allow arbitrary web pages to upload code into their processes and run it.

Just not sure this needed the "attack class" name.

pluies_public 48 minutes ago 1 reply      
Apparently that page has been shut down by Google?

"This site has been disabled for violations of our Terms of Service. If you feel this disabling was in error, please fill out our appeal form."

trotsky 16 hours ago 1 reply      
Novel approach, but I'm curious how many networks let port 445 (SMB over TCP) out? Enterprise networks sure shouldn't; my office doesn't, my house doesn't, though admittedly most people won't be configured this way. But don't big carriers like Comcast also filter common Microsoft ports like this and 139 because of worm and exploit activity?
cppsnob 16 hours ago 1 reply      
Can someone describe the white hat credo with respect to 0 day exploits?

Did he give Microsoft a heads-up about these and a chance to respond before going public? Or does he just give a conference talk and post it to his blog, potentially providing the information that allows thousands of browsers to get compromised (assuming they weren't already), before privately letting Microsoft get a chance to patch it?

dopechemical 13 hours ago 2 replies      
If this attack involves "simply sniffing TCP 445" why not just MITM the whole session?

The state of security is becoming an over-hyped sideshow of late where the most trivial attacks, which would work maybe 1% of the time in the wild, are getting mass exposure.

I have a 0day in RHEL 5, you simply need to log onto the machine as root and run this script...

How Unity, Compiz, GNOME Shell & KWin Affect Performance phoronix.com
15 points by CrazedGeek  4 hours ago   discuss
Human Based Translation API mygengo.com
12 points by franze  4 hours ago   4 comments top 2
robert_mygengo 4 hours ago 2 replies      
(Disclosure, I'm the CEO of myGengo)

We just this damn second announced a special offer whereby you can switch to our translation API and get $25 in free credits (AND you'll continue to get free machine translation). A bit ironic that you posted your link just now.

Check it out: http://mygengo.com/talk/blog/translation-apis-google-shuttin...

steilpass 1 hour ago 0 replies      
I was actually looking into mygengo as a translation service. Asked a question, didn't get an answer, went to toptranslation.com
CEOs vouch for Waiter Rule: Watch how people treat staff protocoladvisors.com
102 points by jkuria  15 hours ago   76 comments top 18
apenwarr 8 hours ago 2 replies      
The waiter rule works - if someone is rude to a waiter, it's an incredibly bad sign, because at the very least, it means they've never heard of the waiter rule.

However, just because someone is nice to a waiter doesn't mean they have good social skills or don't treat inferiors badly. It turns out there's an even better variant of the waiter rule: listen to all a person's interactions with other people. Listen, for example, to how that person talks about other people behind their back. As the aphorism says, what people will say to you about other people, they would say to other people about you.

In my real-life example of this lesson, there was a person who always gloated about how they had screwed or were about to screw other competitors, negotiators, etc, but of course always made sure to point out how much he was helping me. I foolishly believed it, when of course all the evidence was that I shouldn't. Eventually, the situation changed, he didn't need me anymore, and sure enough, he took advantage of me too.

Knowing how people treat others is supremely important as a defense mechanism.

Mz 18 minutes ago 0 replies      
I've skimmed the comments and haven't noticed these two thoughts, so will add them:

1) Opening line of the article: Office Depot CEO Steve Odland remembers like it was yesterday working in an upscale French restaurant in Denver. Waiters won't necessarily remain waiters. We are all human beings, regardless of our current role at the moment. There are also cases where someone may be more important than they appear to be; for example, managers run cash registers during busy times at my local grocery store. I have long liked collecting true-life anecdotes about such things: Charlie Chaplin entered a Charlie Chaplin look-alike contest and came in third; a bank turned down a long-haired, jeans-wearing young man for an account, then gave him the account when a younger employee recognized him as a wealthy rock star; someone in a bank was an utter ass to a guy in overalls covered in paint who threatened to move his accounts. She said something like "feel free". He was the owner of a construction company worth millions and did, in fact, move his accounts.

Treating people badly who appear to be your "social inferiors" at the moment is simply stupid. That person may not really be the "social inferior" they appear to be. And even if they are at this moment, they may not remain so. If you are an ass to them, they will remember it when the tables have turned. It is shocking how often you run into people later, sometimes many years later.

2) Even if they truly are your social inferiors and will always remain so, it is still stupid to mistreat them. That waiter carries your food from the kitchen to your table. Do you really want to give him reason to do something to your food? If you think he won't, then you think he's a nicer guy than you are. Don't be so sure. People with power may be assholes to your face. People without power are very likely to reciprocate but in a way that covers their ass, so it is more likely to be done behind your back.

So aside from my hippie-tree-hugger, give-the-world-a-hug, we-are-all-first-and-foremost-souls-on-a-spiritual-journey idealism for trying to be decent to people I meet, I think folks who mistreat others in this manner are simply stupid. Last, I will note that my observation has been that folks who try to lord it over others in this manner are also usually insecure and hiding behind their degrees, titles, big paycheck etc. I'm not impressed by such behavior, at least not favorably so.

hugh3 13 hours ago  replies      
The purple sorbet in cut glass he was serving tumbled onto the expensive white gown of an obviously rich and important woman... Thirty years have passed, but Odland can't get the stain out of his mind, nor the woman's kind reaction. She was startled, regained composure and, in a reassuring voice, told the teenage Odland, “It's OK. It wasn't your fault.”

That woman had admirable self-control. It's difficult to retain your composure when you're inconvenienced by the incompetence of your social inferiors, but it's usually worthwhile.

The trick, I find as I get older, is to make the move from asserting status to assuming it. Instead of the mindset "I'm so fucking awesome, why won't these goddamn idiots stop inconveniencing me", you move to the mindset "I'm so fucking awesome, which means I have a responsibility to set a good example for these people". As soon as you start seeing your role at the top of the pyramid as being all about setting a good example for those below you, rather than being served by those below you, dealing with them gets a whole lot easier.

(Then again, maybe that's just me, and I've just over-shared enough to mark myself out as a supremely arrogant prat.)

Another story I like of noblesse oblige is about casino magnate Steve Wynn, who shortly after purchasing a $139 million Picasso accidentally ripped a hole in it at a cocktail party. "Oh shit", he said, "Thank goodness it was me". If anyone else had ripped Steve Wynn's $139 million Picasso it would have ruined their life, but if Steve Wynn accidentally rips Steve Wynn's Picasso then the painting is just as ripped but nobody's life is ruined.

Anechoic 11 hours ago 1 reply      
I find it just fascinating that anyone (CEO or whatever) would ever be rude to a person that is handling their food.

Oh well.

Nycto 12 hours ago 1 reply      
I was a bit surprised to see an entire article devoted to this, as it's very common in the theatrical world. Because of the short turnaround time, directors are constantly having to work with new people. As well as calling people they know on the resumes, they will often ask the people collecting headshots and herding cattle outside the audition room if anything notable occurred. It can make or break an actor's chances.

This is easy to apply to business. If you have a secretary greeting interview candidates, just get a quick opinion. Nobody wants to work with an asshole, and everyone remembers them.

mgkimsal 11 hours ago 1 reply      
And yet, somehow, we continue to see psychopaths promoted up the corporate ladder while decent folks - who pass the 'waiter test' with flying colors and are otherwise qualified for the positions - are passed over.
j_baker 12 hours ago 0 replies      
I'm always amazed at how nice people are when they want something from you. If you want to see their true colors, you have to watch how they treat people they don't want something from.
vaksel 13 hours ago 2 replies      
It's kinda sad that we need the waiter rule...is it too much to ask for people to treat others nicely?
nraynaud 1 hour ago 0 replies      
I like the idea of the Waiter Rule ; but here in France it would be quite difficult for a client to be more rude than the waiter. :)

2 anecdotes:
An American friend of mine wants to become a waitress when she's back in the US, just to give people the experience she didn't have here in France.
Last time, I was so shocked to find a soft-spoken waiter that I thought for a few seconds he was just joking.

kingkilr 13 hours ago 0 replies      
Interesting. A friend of mine used to own a chain of restaurants in Chicago. Recently he went out to dinner at a nice restaurant with family, and he said that his sister-in-law's boyfriend had some habits that indicated a personality similar to the one described here, but that aren't as obviously obnoxious. For example, if a waiter didn't introduce themselves by name, he'd ask theirs; apparently waiters hate stuff like this because it's such an unequal relationship.
sayemm 6 hours ago 1 reply      
I could see this. During my senior year in high school I worked as a waiter at a four-star restaurant on Park Ave in Manhattan. The clientele included a lot of business execs working nearby and they often met for meetings. Some of the regulars were casual, friendly, and very cool, but then there were always a few who had a snobby and obnoxious air about them.

I like this as a psychological indicator, not just because of the employer-employee dynamic, but because I think it shows how one perceives their own self-worth, as well as that of others. If you're elitist and too self-absorbed in your own status, you're probably not well-suited for a lean merit-based culture and you're probably also going to be blind to spotting hidden talent in others as well.

danielharan 13 hours ago 3 replies      
Rather ironic that it's the CEO of a weapon manufacturing company that's moralizing about how to treat others.
evangineer 13 hours ago 0 replies      
Bill Swanson may not have invented the Waiter Rule:


tuhin 6 hours ago 0 replies      
Here is a simpler version:
"Would you be OK with someone talking like that to you?"

If the answer is no, then you know what to do.
If the answer is Yes, then the other person probably did cross the line or whatever.
If the answer is Yes, but ________"Insert any statement of the contrary nature or how infallible you are here"____________, then you are a jerk.

It is not just about waiters, but any human being, be they higher in whatever illusory status system you believe in, or lower than you.

Treat others how you would want to be treated.

tzs 10 hours ago 1 reply      
I'd worry about someone who mistreats waiters for the simple reason that it is a sign that person is a complete idiot. Unless all that's left is for the waiter to bring the check, you are pissing off someone who is going to be handling your food. Mistreating the waiter is just asking for the next dish to be seasoned with spit or worse.
rilindo 11 hours ago 1 reply      
That is why when you are interviewing at a company, you treat everybody you meet politely and cordially - even the cranky secretary. :)
PostgreSQL tips and tricks gabrielweinberg.com
132 points by swah  17 hours ago   26 comments top 11
birken 15 hours ago 2 replies      
A lot of this advice is not very good, or at best is misleading.

1) Turning "enable_seqscan" off is almost never a good idea. If Postgres isn't using an index that you think it should, you should figure out the underlying reason why. This would be akin to killing a fly with a sledgehammer.

2) The main thing that determines whether a given index is in memory is how often that index is used, not its size. The advice about reducing the size of indexes is true (a smaller index uses less memory, which is always good), but if you have 10 indexes and only actively use 1 of them, the other 9 aren't going to be taking up much memory. The bigger issue is that they will slow down your inserts.

3) Manual vacuuming isn't really necessary on newer versions of Postgres (8.3 and later, I believe, where autovacuum is enabled by default). There are times when you do need to VACUUM, but probably not that often in standard usage.

Here is some simple practical advice for postgres users:

1) If you are using an older version of Postgres, upgrade to Postgres 9. It has a lot of huge improvements over previous versions.

2) Purchase & read Greg Smith's PostgreSQL 9.0 High Performance (as mentioned previously by another poster). This book is phenomenal.

rarrrrrr 17 hours ago 3 replies      
As an aside, I'm reading Greg Smith's PG 9.0 High Performance, and it's an excellent in depth study of database optimization. The first 90 pages are bottom up: storage technologies and how to tune and benchmark them, memory, cpu, file system choices, and operating system tuning parameters. Once the fundamentals are in order, it covers the internals of PG in even more interesting detail.

Trivia: A fsync call on any file on ext3 will force a sync of not just that file but the whole volume; ext4 fixes that. If you buy large SATA drives, then partition them to "short stroke" using only the outside of the disk (discarding the slower spinning inner region) you get competitive performance to SAS disks for many workloads.

One of the best low level books I've found in a long time.

rosser 16 hours ago 0 replies      
Be very, very careful following the advice about disabling table scans ("enable_seqscan = false"), especially globally. In performance-tuning terms, that's often the equivalent of swatting flies with a howitzer. (You can also set that on a query-by-query basis, which, depending on the query, may be more sensible.)

The fact is, though, that oftentimes a table scan is the most efficient query plan. Yes, indexes can speed things up, but they do so by increasing random IO (leaving completely aside the write-time penalty you pay for indexing). When your storage medium is spinning rust, increasing random IO will eventually cost more than buffering an entire table into RAM and sorting/filtering there, and much sooner than you think, at that. Moreover, all that additional random IO will degrade the performance of the rest of your queries.

gbog 9 hours ago 0 replies      
After many years using PostgreSQL (8.2), I have a different kind of advice. We didn't tweak the configuration much. Instead, we use date indexing, denormalization triggers and different kinds of materialization to get good performance overall.

Date indexing: This was tricky in the beginning, because an index expression like ts::date isn't allowed on a timestamp with time zone column (the cast isn't immutable: its result depends on the session time zone). So we use timestamp without time zone columns and handle zones another way. Then, for a big table with events that you want to keep for a long time, you can CREATE INDEX i_date ON events ((ts::date)) (note the extra parentheses required for an expression index). Then in all queries fetching from this table, make sure to add a WHERE ts::date = 'YYYY-MM-DD' clause; this way you will hit the date index and get fast queries even with billions of rows.
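A minimal sketch of this pattern (table and column names are hypothetical, not from the comment):

```sql
-- Hypothetical events table; ts is timestamp WITHOUT time zone,
-- so the cast to date below is immutable and therefore indexable.
CREATE TABLE events (
    id      bigserial PRIMARY KEY,
    ts      timestamp NOT NULL,
    payload text
);

-- Expression index on the date part (note the double parentheses).
CREATE INDEX i_date ON events ((ts::date));

-- Queries must repeat the exact expression to hit the index:
SELECT count(*) FROM events WHERE ts::date = '2011-05-30';
```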

Denormalization triggers: Suppose you have a Russian-doll structure like usergroups, users, events, items. You need to access items, and also most of the time filter by a 'type' value that is properly stored at the usergroups level. The straightforward way is to do a quadruple join, but this can be slow. So we add a denormalized column 'denorm_type' to items, with two triggers: one trigger forbids direct updates of 'denorm_type'; another reflects any change made at the usergroups level into the 'denorm_type' column. This helps a lot.
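A rough sketch of the propagating trigger described above, assuming a hypothetical schema where items link to events, events to users, and users to usergroups (the guard trigger that forbids direct writes is omitted for brevity):

```sql
-- Propagate a 'type' change at the usergroups level down to items.
CREATE OR REPLACE FUNCTION usergroups_sync_type() RETURNS trigger AS $$
BEGIN
    UPDATE items i
       SET denorm_type = NEW.type
      FROM users u
      JOIN events e ON e.user_id = u.id
     WHERE u.usergroup_id = NEW.id
       AND i.event_id = e.id;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER usergroups_sync_type
    AFTER UPDATE ON usergroups
    FOR EACH ROW EXECUTE PROCEDURE usergroups_sync_type();
```

With this in place, queries can filter on items.denorm_type directly instead of joining four tables.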

Materialization is a higher level of denormalization than the one above. It is necessary to keep the 'real' data highly normalized, but most often one needs to access the data in a more user-oriented form, especially for reporting purposes. Views are excellent at that task, but they are computed in real time. Materialization is the process of writing some of these views to disk, and indexing them, for faster access. With proper triggers, it is feasible to keep these materialized views safe (i.e. read-only) and always up to date (any change in the base data triggers an update or insert in the materialized view).
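At the time of these comments Postgres had no built-in CREATE MATERIALIZED VIEW, so the pattern was a hand-maintained summary table. A hypothetical sketch, building on the events table idea above:

```sql
-- Hand-rolled "materialized view": a reporting table kept current by trigger.
CREATE TABLE daily_counts (
    day   date PRIMARY KEY,
    total bigint NOT NULL DEFAULT 0
);

CREATE OR REPLACE FUNCTION events_bump_daily() RETURNS trigger AS $$
BEGIN
    UPDATE daily_counts SET total = total + 1 WHERE day = NEW.ts::date;
    IF NOT FOUND THEN
        INSERT INTO daily_counts (day, total) VALUES (NEW.ts::date, 1);
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER events_bump_daily
    AFTER INSERT ON events
    FOR EACH ROW EXECUTE PROCEDURE events_bump_daily();
```

Reports then read the small, indexed daily_counts table instead of aggregating the big events table in real time.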

ilikepi 16 hours ago 1 reply      
While I agree with the second tip ("Replace a live table with DROP TABLE/ALTER TABLE") for ad-hoc stuff, its big disadvantage is that it requires knowledge of the indexes on the table. If you have one or more scripts to maintain that use this method on a particular table, and the indexes are changed in an unrelated update to your table structure, you have to make sure those changes are also reflected in your scripts.
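The tip under discussion looks roughly like this (table, file, and index names are hypothetical), and it makes the maintenance burden obvious: every index has to be re-declared by hand.

```sql
BEGIN;
-- Build a replacement table and bulk-load it.
CREATE TABLE users_new (LIKE users INCLUDING DEFAULTS);
COPY users_new FROM '/tmp/users.tsv';   -- tab-delimited import

-- Indexes must be repeated here; if the live table's indexes change,
-- every script doing this swap has to change too.
CREATE INDEX users_new_email_idx ON users_new (email);

-- Atomically swap the new table in.
DROP TABLE users;
ALTER TABLE users_new RENAME TO users;
COMMIT;
```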

Oracle lets you mark an index unusable ('ALTER INDEX foo UNUSABLE'), which allows it to go stale while data in the table is modified. You could do this right before a big import operation, for example, and then rebuild it ('ALTER INDEX foo REBUILD') when the operation completed.

PostgreSQL doesn't appear to have an equivalent at the moment.

edit: wording

jnsaff 10 hours ago 0 replies      
One thing that helped us a lot was setting

checkpoint_completion_target = 0.9 # up from 0.5 default

So the checkpoints get written out more evenly and cause fewer and shorter "stop the world" events. Checkpoints were especially painful on Amazon EBS (which is a relatively bad idea by itself, btw) with a large shared memory size; decreasing the shared memory actually improved performance for us.

lysol 11 hours ago 0 replies      
Increasing shared buffers should not be so far down the list. It is the single biggest impact on performance that you can make. I think everyone knows it's something that should be tuned right away, but it should be the very first item on the list, since it's so easily overlooked by someone just starting out.
mark_l_watson 14 hours ago 0 replies      
Good stuff! I just permanently archived a searchable copy.

Off topic, but some PostgreSQL love: after using DB2 on a customer job for the last 5 months, I much more appreciate PostgreSQL, which is the RDBMS I usually use. It seems like PostgreSQL is developed and documented in such a way as to make it easy to use, while DB2 is designed and documented to optimize IBM consulting services revenue.

chuhnk 16 hours ago 1 reply      
A few of these tips apply to mysql also.

Copy table from a tab delimited file. In mysql you can use load data infile which will do the exact same thing.
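The MySQL counterpart, as a sketch (file path and table name are hypothetical; MySQL's field defaults are tab-delimited anyway, so the clauses are shown for clarity):

```sql
-- Bulk-load a tab-delimited file, the MySQL equivalent of Postgres's COPY.
LOAD DATA INFILE '/tmp/users.tsv'
INTO TABLE users
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n';
```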

Indexes in memory are obviously very important in mysql land too. Also, when using joins, make sure the columns being joined on are indexed.

Using InnoDB with innodb_flush_method=O_DIRECT bypasses the OS page cache, avoiding double buffering and reducing the memory pressure that can push the mysql process into swap.

MySQL's default in-memory temporary tables are very small, which usually results in creating on-disk tables; to prevent this, increase tmp_table_size and max_heap_table_size. Alternatively, you can specify the MEMORY engine for in-memory tables if you know how big they are going to be.

hrasm 14 hours ago 1 reply      
Almost always, a lightweight connection pooler like pgbouncer can do wonders. It is so easy to configure and get up and running that there is really no reason not to use it.
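A minimal pgbouncer.ini along these lines (all values illustrative, not recommendations):

```ini
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
default_pool_size = 20
```

Applications then connect to port 6432 instead of 5432 and pgbouncer multiplexes their connections over a small pool of real server backends.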
juiceandjuice 15 hours ago 0 replies      
I've been meaning to get around to playing with PL/Python and Postgres for a little while now
Regulation, not technology is holding back driverless cars nytimes.com
146 points by ultrasaurus  19 hours ago   123 comments top 19
jxcole 18 hours ago  replies      
My dad works in the airplane industry and had an interesting story to tell me that relates to this. (I'm not sure exactly how accurate it is or what the source is, sorry.) Apparently, it is illegal for pilots to read while flying, even if they are heading in a straight line with no one around for miles. This is because one time a pilot wasn't paying attention and, due to a series of software failures, the plane turned into a mountain. Interestingly, the plane was turning at a very exact rate, so that the number of Gs remained constant.

In any case, this single crash led to regulation stating that computers can never fly planes by themselves. This strikes me as rather unfair: when a single human crashes a plane, it does not become illegal for humans to fly planes by themselves.

Another example is that in London, subways must be driven by a human. Even though driving a subway may be trivial (there is no way to steer), Londoners apparently do not feel comfortable being driven by a non-living thing. They want to be sure that if they die, the driver dies too, adding a level of accountability.

It seems that this sort of wide-spread mistrust of machines is driven more by socially normal paranoia than any kind of logic. I for one am rooting for machines to take over all forms of driving. There may be a few mishaps, but it will probably become hundreds of times safer eventually.

cletus 18 hours ago 3 replies      
No surprises there.

The transition to driverless cars is (IMHO) inevitable. At some point it will be cheap enough that the additional cost will pale in comparison to the lives that will be saved, as well as the simple convenience of being able to do something else while commuting.

Likewise I see this kind of thing replacing many forms of public transportation. There will simply be a fleet of cars. You'll say where you want to go and some system will route people and cars to destinations.

But, the transition won't be quick or easy. You need look no further than the aviation industry to see why.

Basically, automation in modern aircraft is a double-edged sword. It seems to erode the ability of pilots to actually fly [1], software errors have caused deaths [2], and (I can't find the link to this) I also read a study finding that in more automated planes, pilots are more likely to believe erroneous instruments than their own senses and experience.

The issue won't be how the car normally behaves because as demonstrations have shown, current systems require very little human intervention.

The issue will be extraordinary circumstances plus the huge liability problem of any errors.

Example: if someone runs a red light and causes a crash, killing someone, that person is responsible. If an automated car does the same thing, the manufacturer will be responsible.

That alone will impede adoption.

Instead I think you'll have what we already have: slowly adding automation to cars. Cars already have radars and can stop themselves from colliding, they can park themselves and so on.

But at some point the driver will need to go away and that will be a tremendously challenging leap forward for society.

[1]: http://www.tourismandaviation.com/news-4530--Pilot_Reliance_...

[2]: http://catless.ncl.ac.uk/Risks/8.77.html#subj6

Jd 16 hours ago 0 replies      
The article doesn't make its case very well. The core problem people are presumably worried about is safety, and saying they have a "good safety record" is hardly enough to reassure the senators, etc., who would presumably be responsible for relaxing restrictions.

For example, what about edge cases? Suppose the Google car does just fine in normal driving conditions, but what about a blizzard with 26-mile-per-hour gusts of wind (as I drove in recently), or a tractor-trailer flipping over on the road in front of you? Humans have a certain intuition that allows them to make bizarre twitch reactions in extreme situations (even including supernormal strength) that presumably no machine intelligence will be able to approach for a long time (if ever).

Or what about the possibility of someone hacking the car? Could a worm engineered by some hostile government take millions of cars off the road -- or, worse, cause them all to steer into the median and cause mass damage and thousands of instant casualties?

It is, frankly, irresponsible not to consider edge cases like these when drafting legislation, and while I'm all for gradual introduction and more testing, the author of this article has convinced me that senators sitting on their hands, not doing anything, are probably acting in the interests of the people much more than those who wish to simply hand over driving and navigation functions to machines as soon as possible.

erikpukinskis 17 hours ago 1 reply      
I wonder if driverless cars could start out as a tool for people with disabilities. If such use were challenged, I can imagine the supreme court taking seriously a case by a person who is quadriplegic or blind demanding the right to use a self-driving car. If they can prove them safer, it will be hard to find a compelling government interest that could offset denying the use of this assistive technology.
georgieporgie 18 hours ago 4 replies      
No state has anything close to a functioning system to inspect whether the computers in driverless cars are in good working order, much as we routinely test emissions and brake lights.

Having lived in Oregon, Arizona, and California, I have never had anything other than emissions routinely inspected. Demonstrate a car smart enough to monitor its own brake-pad wear, alert on burnt-out bulbs, and provide a clear readout of all detected issues (i.e. not a coded blinking service light or plug interface) before you start trying to make it drive itself.

(I do love the idea of an automated train of cars, and driving my drunk self home, though)

uuilly 12 hours ago 0 replies      
Regulation and fear are to be expected. The question is, what to do about them? I predict the largest PR campaign in the history of technology. Public opinion generally drives regulation. So less public fear will lead to less regulation.

While I have no way to prove it, I'd bet my right hand that Google's PR people made this story happen. I'd bet they also made the first NYT piece blowing up the Chauffeur project happen and they made it look serendipitous for authenticity. I think "The Suit is Back," and I think it's going to come back again and again.

Prediction: Driverless cars will be portrayed in a very positive way in a major motion picture within the next year.

chrismealy 18 hours ago 3 replies      
We don't need driveless cars, we need carless people:


melling 18 hours ago 2 replies      
One idea would be to make some long haul roads, or sections of them, completely driverless. Maine to Miami along a section of I95, or NYC to LA. We could start the test with tractor trailers. Let them drive for a few years and tune the system. There would be a huge economic benefit to allowing trucks to run 24x7 without drivers.
RyanMcGreal 12 hours ago 0 replies      
This is the crux of the matter:

> imagine that the cars would save many lives over all, but lead to some bad accidents when a car malfunctions. The evening news might show a “Terminator” car spinning out of control and killing a child. There could be demands to shut down the cars until just about every problem is solved. The lives saved by the cars would not be as visible as the lives lost, and therefore the law might thwart or delay what could be a very beneficial innovation.

It's otiose to point out that the premise of personal motor vehicles is not called into question every time a human driver spins out of control and kills someone.

______ 18 hours ago 3 replies      
Speed limits are another realm in which regulation can hold back the development of driverless cars, besides merely allowing them on the streets in the first place.

With computerized drivers, it will finally be possible to fully enforce speed limits, by introducing some ceiling to the speed attainable by the car. I'm sure some "well-meaning" legislator will make it his or her priority to ensure that speed limits are never exceeded. However, at least in Massachusetts, if you go on the highway, everyone (including police) drive at ~75 MPH even though the posted speed limit is 55 or 65 MPH. Few will buy a car with this kind of handicap, were it to exist -- and I worry that it will. Many speed limits in the US were imposed decades ago, with less safe and responsive cars -- it would be a pity if potentially revolutionary technology advances were thwarted by this fact.

Legislation has already crippled or made useless many useful automotive innovations. In the US, technologies that allow for adaptive cruise control (maintaining a distance to the car in front of you) can only decelerate the car, and not accelerate the car. This forces the driver to have to constantly accelerate, greatly reducing the effectiveness of this feature. Many computer-laden vehicles with navigation systems are similarly crippled -- they automatically "lock" when the car is in motion, and some (like in Lexus vehicles) cannot even be overridden by people sitting in the passenger seat... often causing unintended risks like drivers pulling over on busy highways just to readjust their GPS target.

wallflower 9 hours ago 0 replies      
I'm not sure I trust the underlying architectures that are being developed with my life...

DDOS and MITM attacks take a whole-new meaning if the networked entities are 3-ton objects moving at 65 mph.

pnathan 12 hours ago 0 replies      
Some things aren't brought into focus here.

(1) Off-highway driving happens. That means that the algorithms have to manage an uncontrolled environment where 'anything' can happen.

(2) It is very expensive to create bug-free software.

(3) You can't iterate by failing fast on life-critical systems after it is released. Failure means killing someone.

(4) Legal liabilities. It's not going to work to say something like, "This car's driver software is not warranted free from defects".

(5) Humans can manage situations utterly outside the norm; algorithms can not see beyond the vision of the designer.

I work in an industry which operates below the levels of software assurance that the medical/flight industries work at, and it is incredibly painstaking as it is. A fully automated car will be very expensive to build.

I am not a paranoiac regarding software. I am a paranoiac regarding software bugs and the limits of the software designers.

jomohke 9 hours ago 0 replies      
Ars Technica did a great series on the technology and economics of self driving cars a few years ago:


joel_ms 16 hours ago 0 replies      
>But it's clear that in the early part of the 20th century, the original advent of the motor car was not impeded by anything like the current mélange of regulations, laws and lawsuits.

They did try in the 19th century though, at least in the UK, with the Locomotive Acts [1]. The way those laws went out of their way to protect the status quo (i.e. horse-powered transport) is an interesting parallel to today's possible transition from human-controlled to computer-controlled transport.

[1] http://en.wikipedia.org/wiki/Locomotive_Acts

blue1 17 hours ago 0 replies      
I suspect that this kind of "risky" technology will be deployed first in more adventurous countries, like China.
ultrasaurus 18 hours ago 1 reply      
So much of technical progress happens through delivering most of the value of the previous solution at a fraction of the cost (email vs postal mail). Society seems to rule this kind of progress out for a few industries like health care, I assume something similar is happening here.
zandor 15 hours ago 0 replies      
A somewhat similar note put very nicely by James May from Top Gear;


kmfrk 15 hours ago 0 replies      
Maybe we just need to emphasize the negative impact of long commutes. Suddenly you have that commute time to do something else. :)
toddh 17 hours ago 1 reply      
Perhaps it's because software has an obvious history of being buggy. This isn't a web service being down for a while if it fails; hundreds of lives are at risk on a 24-hour-a-day basis. Maybe a little wait-and-see is a reasoned approach for a complex interactive dynamical system like this?
The brain's 5-million core, 9 Hz computer biophilic.blogspot.com
187 points by liuhenry  21 hours ago   84 comments top 13
archgoon 18 hours ago  replies      
"Unlike transistors, neurons are intrinsically rhythmic to various degrees due to their ion channel complements that govern firing and refractory/recovery times. So external "clocking" is not always needed to make them run."

Transistors don't need a clock in order to run. They can in fact be set up to create their own clocks. The purpose of clocks is for synchronization across the chip so that we mere mortals can modularize the operation of a CPU. That is, clocks exist mostly so that we can think in terms of sequential gate operations (or from the programmer point of view, assembly code).

The author seems to confuse the chosen approach to designing computers (VLSI), and the actual physical capabilities of a transistor. We have opted over the last forty years to develop the CMOS logic gate way of organizing computers. There are other ways, as the brain demonstrates, but it is not clear at all that you can't do it with novel transistor topologies.

CountHackulus 13 hours ago 1 reply      
Just a small nitpick. While the state of the art in x86 land might be 3GHz, the IBM POWER chips (and I think the z mainframes too, though I'm not sure on that) have gone far beyond that speed.

POWER6 chips reached 5.0GHz in 2008: http://en.wikipedia.org/wiki/POWER6

POWER7 chips, however, have been clocked down to 4.25GHz: http://en.wikipedia.org/wiki/POWER7

lallysingh 18 hours ago 6 replies      
Hmm, so, if their metaphor really held, the brain's computation could be simulated with a 45MHz CPU? Well, let's fix this up a bit...

(1) Give it 1000 clock cycles of cpu work to simulate a single neuro-tick.

(2) The clock is actually variable 5-500Hz (from the article).

So, 500Hz * 5M = 2.5GHz of neuro-ticks per second; at 1000 cycles each, that's 2500GHz of CPU power. An Amazon large-instance cluster box is 8 Xeon cores at 2.93GHz, so about 110 cluster instances to simulate a brain?

1337p337 18 hours ago 1 reply      
This reminds me of Chuck Moore's 144-core, unclocked (!) colorForth CPU:

I wonder when he'll catch up to 5 million.

3pt14159 19 hours ago 3 replies      
At the time of this comment this article has 35 points and 0 comments. Over the past 6 months or so I've noticed a trend that the best HN-worthy articles often have points-to-comments ratios of 5 to 1 or higher. This is a clear example of that.
vl 3 hours ago 0 replies      
My layman thinking on the related subject of building a brain-like computer is that it's currently possible (although expensive) to build the required hardware - i.e. a custom system with the required number of connected electronic neurons - but I don't see a way to "boot it up". The human brain essentially boots from DNA - i.e. its layout and basic functions are predetermined by the DNA, and then it gets trained for specifics. Even if we can train a machine brain, how do we boot it up to the trainable state?
SeamusBrady 1 hour ago 0 replies      
Some of the comments seem to revolve around confusing the map with the territory - cf. Lewis Carroll's map "the scale of a mile to the mile" - http://en.wikipedia.org/wiki/Map%E2%80%93territory_relation#...
forkandwait 19 hours ago 5 replies      
When we finally design a simulated brain, I wonder if it will be really good at spatial and behavioral tasks (balancing, motor skills, quick non-explicit decision making, etc.) but really bad at doing math?

Not to say we shouldn't keep trying, but we all seem to think the best computer will evolve from solving lots and lots of partial differential equations into being like the brain, when animal brains have evolved to solve really different problems than the ones we have been building computers for.

ignifero 18 hours ago 1 reply      
While it's true that some cells, e.g. pyramidal cells in the hippocampus, can exhibit intrinsic oscillations, it's not true for most of the brain. Plus, rhythms usually arise in networks, not single cells, and require the network to sustain themselves (that's why, for example, theta doesn't persist in vitro). Is there an example of a single cell that can generate rhythms on its own?
guscost 18 hours ago 0 replies      
I've been fascinated by Buzsaki and others in complementary fields since learning of them, and I've also written down some of my own ideas on the subject, from a purely theoretical perspective at least. I can't wait for what we might see in the next few decades.


arapidhs 3 hours ago 0 replies      
Transistor technology cannot emulate brain activity. Something new and more analog-oriented would suit it better, imho.
asadotzler 19 hours ago 0 replies      
nature's had a long time to sort out some decent hardware and software configurations. let's follow with computers.
programminggeek 18 hours ago 1 reply      
The brain is also water cooled. Without proper water cooling it overheats causing segfaults and a white screen of death.
The Brits are going to space (maybe) bbc.co.uk
5 points by tomelders  2 hours ago   1 comment top
hugh3 31 minutes ago 0 replies      
Like I said on the last thread, this project has been in the works for forty years, and the latest milestone they've reached is that the ESA has looked at the idea and declared that there's nothing obviously wrong with it.

That's still a pretty darn long way from a finished product.

Grammar, Semantics, Knowledge and Fractals miislita.com
3 points by wslh  1 hour ago   discuss
       cached 30 May 2011 14:02:01 GMT