Hacker News with inline top comments (4 Feb 2015)
Windows 10 for Raspberry Pi 2
points by velodrome  2 days ago   372 comments top 54
danieldk 2 days ago 15 replies      
I am not usually one to blow the free software horn (being a staunch believer in non-copyleft licenses such as the Apache License), and the geek in me likes this announcement. I also don't hold a grudge against Windows; I think it is a fine system for most people.

However, Windows seems very much contrary to the goal of the Raspberry Pi: providing a device for children to tinker with and educate themselves. Although .NET is slowly opening up, Windows is as closed as ever. So it does not actually let you check out how stuff works: looking up that function in the Python standard library, seeing how it calls libc, and then diving into libc to see how it is actually implemented.
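That kind of source-diving is trivially concrete on an open stack. As a sketch of what the comment means, Python will happily show you the implementation of its own standard library:

```python
import inspect
import json

# The Python standard library ships as readable source, so you can go
# straight from "how does json.dumps work?" to its actual implementation.
source = inspect.getsource(json.dumps)
print(source.splitlines()[0])  # the "def dumps(...)" line
```

From there one can keep descending: pure-Python modules bottom out in CPython's C sources, and those in libc, all of which are published.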

It's in Microsoft's interest to keep kids in their ecosystem, which was pretty much unproblematic in the nineties and before the smartphone revolution. As the RPi becomes more popular in education, it would be bad for them if kids saw that there is something else that not only works well, but also allows them to do more.

My fear is that teachers will now choose Windows on the RPi, since that is what they know. And we are back to where we started: a fundamentally unhackable system.

Now, if Microsoft opened up the core of Windows, I would be impressed. As it is? Not so much.

benn_88 2 days ago 12 replies      
Note: Windows 10 on Pi 2 will be an IoT platform, not a desktop OS.

You'll develop your app in Visual Studio on a Windows PC and deploy it to the Pi.

Raspbian will continue to be the main supported OS.

- Ben from Raspberry Pi

ap0 1 day ago 1 reply      
There's nothing wrong with learning to first code on Windows. If it is easy for a kid to get going and start making stuff, they have a much better chance of sticking with it long enough to actually be interested in it as a hobby or career. My first programming experience was VB6 on Windows 98. I went from there to C in Visual C++. Fast forward to today and I write Java on OSX that is deployed to Linux machines. Most of this stuff is pretty transferable. Starting on Windows by no means stops you from exploring Linux later.

CS fundamentals can be taught on any system. Most kids don't have really technical people in their lives to get them past some of the hurdles of even getting started. I think we sometimes forget that the whole world doesn't want to be Linux gurus -- they want to be able to watch their cat videos and check their Facebook accounts. Most people don't care about the technical parts. If we can try to make the more technical parts accessible to a wider audience, I think that's moving the needle in the right direction.

cliffbean 2 days ago 1 reply      
While most people don't hack kernel code themselves, there is still value in promoting open source operating systems. A future in which Windows becomes popular on mobile/embedded devices is likely a future with more binary blobs, more OS-locked hardware, and fewer opportunities for those people who do want to hack on kernel code.
hfaran 2 days ago 4 replies      
Out of all of the exciting news coming from Redmond this past year, this has somehow taken the cake for me. In 2013, even suggesting that Windows on the Raspberry Pi could be a possibility would've gotten you laughed out of the room, and now that it's actually happening I've done more than a few double-takes.
jbandela1 2 days ago 0 replies      
I think this is awesome. For all the people complaining that it's not open source and not friendly to hackers, the truth is that most people who program the Raspberry Pi are probably not even using a natively compiled language; they are using Python. The people who want to hack on a kernel can install Linux.

I am really excited about the prospect of Microsoft enabling writing and debugging for the Pi using Visual Studio. They already have this feature for Android in the Visual Studio 2015 Preview[1], and making it work for a Raspberry Pi running Windows should be easy. With an excellent development environment and debugger, I think interest in C/C++ programming for the Pi will increase, as it will be more accessible.

[1] http://blogs.msdn.com/b/vcblog/archive/2014/12/05/developing...

spdustin 2 days ago 1 reply      
I think there's too much arguing for things that kids don't give a shit about. The argument can't be, "think about the children!"

The argument must be, "preserve the children's rights to think for themselves." It is on us as adults - parents, teachers, trusted elders, whatever - to instill in the kids the information needed to make decisions, and the fundamental skills to objectively utilize that information in a rational decision making process. In my humble opinion - reflected in my own choices as a parent - that means Linux (a term I use in its colloquial form, please do not nit pick on that word) on the Pi. My kids see Windows at school and at friends' homes, OS X and iOS at home, and as of today I'm taking my RPi off duty and showing it to my kids. It's a snow day, they're a captive audience!

It isn't about a six year old compiling a new kernel or a teen getting a Minecraft server to run a mod, it's about creating a spirit of curiosity and experimentation.

rcarmo 2 days ago 1 reply      
Nicely played. I'm a bit surprised at the number of people in this thread equating Windows 10 for RPi with a desktop environment, though -- I'd be _very_ surprised if it had a desktop environment at all, considering that they're targeting it as an embedded system.

You _might_ have graphics, but not a DE.

Edited to add: based on this video, http://channel9.msdn.com/Events/Build/2014/2-511, this will probably be a "micro" system without any UI.

beagle3 2 days ago 2 replies      
I would love to hear the inside story here.

Based on my experience with Win7 and Win8, I would have expected Win10 to crawl (or not work at all) with the slower CPU and smaller memory.

My guess would be that the Raspberry Pi 2 is, in fact, a Microsoft initiative rather than an RPi Foundation initiative, and that Microsoft has actually contributed significant funds to make this possible. Either way, that is good for the community. I don't have a problem with Microsoft financing my stuff as long as it does not come with any strings attached (which I trust the RPi Foundation to have insisted on).

Pure speculation, of course - but I'm sure we'll hear a lot more about this deal.

tegeek 2 days ago 2 replies      
I personally want a trimmed down version of .NET (perhaps the new Core CLR which is just 10 MB) so I can run F#.

Mono is too heavy and has a million other dependencies I don't want to grab.

Running a modern and practical functional language on the Raspberry Pi would be extremely enjoyable for me.

patja 2 days ago 0 replies      
The devil is in the details of the licensing terms. Just as "Windows 10 upgrades done in the first year will be free for the supported lifetime of your device" leaves a lot of wiggle room around who gets to decide what constitutes the lifetime of a device (you can be sure that if you upgrade more than a few components you will get thrown out of the land of "free"), the terms of this program are not necessarily what you might expect from the PR: you get to install one single license, and it is only good for 60 days!
fnordfnordfnord 2 days ago 0 replies      
I want this to be a good thing. I probably still have some Ballmer-induced "PTSD", so I approach these things with extra skepticism (even though I like the recent changes seen at MS). More choices often means a better ecosystem. More kids hacking is a good thing.

One problem that I see is that Broadcom and peripheral makers will continue to release binary-only blobs/drivers, and the MSVS-based toolchain could become the only way to do some of the really cool things, i.e. a few "killer" apps will be done up, and those will only be accessible on the Win/MSVS (free-as-in-bait) platform. Linux will again be relegated to blind reverse engineering, playing catch-up, and finishing a distant second place; which we have all seen, and continue to see today.

Another is that educators at the lower level will take the "easy/safe/familiar" route and teach Windows exclusively, as is the case today even into college.

danbruc 2 days ago 0 replies      
Okay, come on! Who are you? And what did you do to Microsoft?
castell 2 days ago 1 reply      
Next step, Microsoft announces FX!32 for Win10 ARM.


FX!32 is a software emulator program that allows Win32 programs built for the Intel x86 instruction set to execute on DEC Alpha-based systems running Windows NT.

20kleagues 2 days ago 1 reply      
IoT devices running Windows 10. So I guess my coffee machine will also have the blue screen of death (BSOD) now.

Jokes aside, strategic move by Microsoft. The new CEO has some interesting stuff in mind indeed. Can't wait for the build conference where they open the HoloLens dev tools.

comboy 2 days ago 0 replies      
OK, Microsoft somehow got its sh*t together. I've been seeing them at local programming conferences (in Poland!), and they've started caring about open source. This company may still have a bright future.
orbitingpluto 1 day ago 1 reply      
I have been using the Windows 10 preview to play around with Powershell 5. Up until the 9926 build it was a big improvement.

The 9926 build is the first where I've noticed that the standard Windows 7 "Search Programs and Files" has been replaced with "Search the computer and web". Apparently you can't turn it off: I could not find a group policy or registry tweak for it, and a Microsoft support article said you cannot, period. Everything you do is a search. This same default behaviour, when it appeared in Ubuntu, was what made me switch distros.

The problem with this default behaviour is that it won't fly in corporate and government environments. Any place that does a security audit will have to redo a security audit every single time a new system is placed onto their network.

Microsoft seems to be following the trend of ease of use rather than productivity.

Trying to tie a child's first impression of computing to Windows makes sense for Microsoft. But aren't there going to be issues with blatantly monitoring kids like this?

cm2187 2 days ago 1 reply      
I hope it will support the .NET framework. That would make it really accessible to unsophisticated hobbyists like me! There are very few things one can't do with a USB port!
teh 2 days ago 2 replies      
It's nice to see Microsoft cares. OTOH part of the attraction of the RPi for me is that people freely share their knowledge, code and cool hacks.

Adding a humongous blob of unmodifiable software doesn't really align with any of that. In the best case people will shun this, in the worst case it could harm the culture.

databass 2 days ago 1 reply      
Does anyone know what they mean when they say "grounded in trust"? (Seems odd to imply that other OSes are not trustworthy.) Referring to code signing, perhaps?
bambax 2 days ago 1 reply      
Apparently "IoT" means "Internet of Things" ("Windows Developer Program for IoT").

Is this such a widely recognized acronym that it doesn't even need a footnote? (In a sans-serif font you can't even tell if the first letter is an i or an l; it's really bad.)

linuxlizard 2 days ago 1 reply      
Is Windows/Pi still an astonishing pain in the ass to talk to the local hardware? Linux (Unix) /dev is the best part. I don't have to search for obscure GUIDs to talk to a USB device.
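As a sketch of the point being made here (an illustration, not anything Pi-specific): on Linux a device is just a file node under /dev, so any language that can open a file can talk to it, with no GUID lookup required.

```python
import os
import stat

# On Linux, hardware shows up as file nodes under /dev, so "talking to
# the local hardware" is just opening a path. This helper classifies a node.
def describe_device(path):
    mode = os.stat(path).st_mode
    if stat.S_ISCHR(mode):
        return "character device"   # e.g. serial ports, /dev/null
    if stat.S_ISBLK(mode):
        return "block device"       # e.g. disks, SD cards
    return "other"

print(describe_device("/dev/null"))
```

On a Raspberry Pi the same pattern extends to buses like I2C and SPI, which appear as nodes such as /dev/i2c-1 and /dev/spidev0.0.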
ch4ch4 2 days ago 1 reply      
I hope they get rid of the stupid code-signing requirement that's currently in RT which prevents all third-party apps from running in RT desktop mode! I'd love to be able run Chrome on a Windows ARM device...
bohol 2 days ago 1 reply      
Somewhat surprised that there are around 20 comments and all of them seem to be from more or less a consumer perspective. The Raspberry Pi is one of the few embedded devices that aspires to be used as a desktop computer and, on top of that, as a way to learn about computers. Embedded systems are also one of the few threats to Windows, but this hasn't really materialized yet since most devices only come with Android drivers. All in all it's a great strategic move for Microsoft, and it confirms my suspicion that Linux is losing the second round of the desktop wars.
mantenpanther 2 days ago 3 replies      
I can't see this working, since Windows is not exactly a lean OS. Coincidentally, I'm stuck in an update/reboot cycle at the moment after a fresh install. I also need to extend my 60GB partition right after installing the OS (8.1) just to get Visual Studio installed (which will also take at least 10GB).

I like the new direction of MS, but I wonder if it's possible to succeed with so much old baggage to carry...

belgianguy 2 days ago 3 replies      
I don't think Microsoft is doing this out of open-source spirit, as many seem to believe. They have a long history of attacking and smearing open source, and displaying an "I heart Linux" slide is not sufficient to erase all the FUD and their questionable software patent licensing schemes (which are completely anti-open). They even called Linux a cancer.

I think they're approaching this from a redundancy point of view: if person X already has Windows on their desktop (where Microsoft has the market cornered), provide them with subsidized Windows platforms on other hardware where the monopoly is not yet in place. This would cause interest in other platforms to dwindle, as people flock to what they know. Here the EEE (embrace, extend, extinguish) mantra could also come into play again, by offering features not available on other platforms.

They need a foothold in other hardware, most likely for the coming of Internet Of Things.

It makes business sense for them to increase their reach, but don't think that equals Microsoft being fond of Open-Source.

Mikeb85 2 days ago 0 replies      
Even though it's Windows 10, the fact that it's ARM means you won't be able to run most desktop Windows software. Unlike on Linux, where most things have already been ported to ARM (thanks to open source!).

People will discover the limitations very quickly, especially compared to Raspbian/Ubuntu/OpenSUSE/etc...

erjjones 2 days ago 0 replies      
Anyone have a screenshot of Windows 10 running on the Raspberry Pi 2??? Does it actually have a GUI?
alisnic 2 days ago 0 replies      
I have been waiting a long time for something like this. Now I will be able to make a very cheap parents' PC.
m_mueller 2 days ago 2 replies      
So I guess Windows RT will be continued? I had thought they were going x86-only, including the phones.
rasz_pl 2 days ago 0 replies      
Fantastic news. Just as positive as that time Microsoft decided to "help out" with OLPC .....

All of the WinRT and Metro you would NEVER want, without that pesky Windows application (x86) compatibility the typical user associates with the Windows brand.

4ydx 1 day ago 1 reply      
The Raspberry Pi is supposed to be an educational tool. You cannot learn anything about the inner workings of an operating system when it is closed source. There isn't really much of an argument to be made for Microsoft on the Raspberry Pi. Programming in C# can be done on Linux these days. Why one would want to use Microsoft is really beyond me.
jadeddrag 1 day ago 0 replies      
If the Windows source is not open, and we can't verify the builds we run, then how are we to know that the kernel, drivers, and userland apps are not tainted with NSA backdoors?
alkonaut 2 days ago 0 replies      
Cool news indeed. What is the main selling point of running Windows over Linux? Given that the CLR can (or soon will) run in a core version on Raspbian as well, what will be the main reason to choose Win10 as a base for an IoT project? Normally the case for Windows over Linux involves a desktop, but not in this case.

Could it be that Microsoft's APIs for IoT/smart homes will not be part of the cross-platform (core) .NET package? If so, then at least this move makes sense: "use a Windows IoT device and you get all these handy Microsoft IoT APIs".

datalus 1 day ago 0 replies      
I think this is great; everyone should learn the Win32 API as an example of how not to design an API for building applications on top of an OS.
hoodoof 1 day ago 0 replies      
It's kind of weird Microsoft courting developers again. It's like an ex-girlfriend wants to rekindle the relationship.
marknadal 2 days ago 3 replies      
Could somebody explain how this is possible, given the RAM and CPU constraints? Does W10 actually run with a footprint small enough to work on a Raspberry Pi 2?
BobMarz 1 day ago 0 replies      
Interesting quotes from the CEO Eben Upton. http://www.theregister.co.uk/2015/02/02/windows_10_raspberry...
mathnode 2 days ago 0 replies      
I had (optimistically) predicted that the way to ARM desktops was Steam and gamers coming over to Linux; with the majority of code bases becoming more portable, publishers could cross-compile games for Linux on ARM.

But this could work too!

VLM 2 days ago 1 reply      
I wonder what the redistribution license will look like. Most things I've used a Pi for have been "copy image to flash card, install flash card, boot and run", not running apt-get or make install type commands from the console. (I'm installing an OctoPrint box this week, sometime; bye bye Repetier-Host.) Historically, Microsoft has been a little displeased with people redistributing their software.

Another little-discussed issue is the intense PR campaign that it's only useful for education, despite it being a pretty good generic appliance. I've never used one for education and have zero contact with that small subculture. Most Pi users are using it as a low-power, small, solid-state replacement for "pull that old used desktop and slap Linux on it". I wonder how the PR campaign that it's an educational device will be enforced at the OS level: perhaps a watchdog timer will reboot it every hour, or it'll be DRMed to heck and back, requiring signatures for all executable software, or who knows what could be implemented to ruin it for hobbyists without significantly impacting the edu market.

georgeecollins 2 days ago 0 replies      
This is very smart of Microsoft and good for the Raspberry Pi. All the alternative OSs for the Raspberry Pi are still there, and now there is another with a huge library of software.
aceperry 1 day ago 0 replies      
This is one way for Microsoft to get into the IoT game. It'll be interesting to see if their ARM port is usable on the new Raspberry Pi.
listic 2 days ago 2 replies      
Has there been 'a version of Windows' (what is this one called, by the way?) that works on ARM before? Does this version support any other ARM boards?
secfirstmd 2 days ago 0 replies      
This is great. Especially for the developing world where the Pi has great potential but people only really want to use and learn in a Windows environment.
masterzoozoo 2 days ago 1 reply      
Awesome news to hear. But I am not sure how well it will cope with the limited resources of a Raspberry Pi.
TazeTSchnitzel 2 days ago 0 replies      
This is awesome.

Actually, I wonder if I could get ReactOS running on a Pi, might be the perfect testing platform. Hmm.

Alupis 1 day ago 0 replies      
I can't be the only person who first found out about the new Raspberry Pi here!

Just ordered 2 of them!

tkubacki 1 day ago 1 reply      
Hah, good old MS: not a single link to the Raspberry Pi project. Still afraid of the L word.
brickmort 1 day ago 0 replies      
Microsoft is playing a solid game of chess with their competitors.
jbb555 1 day ago 0 replies      
Well, I see a lot of people already moving away from Linux due to the systemd fiasco. This gives people options, so that's good.
z3t4 2 days ago 0 replies      
Wasn't the goal of Windows 10 to run on any system??
yuhong 2 days ago 0 replies      
OT, but why did they take so long to go to ARMv7?
gamesbrainiac 1 day ago 0 replies      
wow. Microsoft is pulling out all the stops.
venomsnake 2 days ago 1 reply      
A shot across the bow in some spat with Intel?
CoreCLR is now open source
points by xyziemba  21 hours ago   300 comments top 25
fsloth 20 hours ago 2 replies      
De-Ballmerization, and Microsoft is oozing delicious developer love. What the hell happened? It's like Skeletor became He-Man's best buddy all of a sudden and started helping everyone.

I'm thrilled. The MS tooling is really, really good, and the only thing stopping me from committing to the stack fully has been its lack of openness (vendor lock-in is still feasible, but getting to be less of an issue).

Edit: Pardon the fanboyism, but I've tried a set of feasible non-MS language options for my particular domain, and for me, my particular use case, and my coding style, F# in Visual Studio beats Scala, Clojure, Ruby, Python, Haskell, "browser technologies"...

Nelkins 20 hours ago 10 replies      
I would be curious to see the effects of completely open-sourcing Windows. Businesses would continue to use it, because it's Microsoft and they want enterprise support. I think it would get even more love than it already does from the development community. Piracy of Windows is already rampant, so they're not really in a worse position from that (plus I think that most people who can pay for Windows do so already). Foreign governments who are concerned about NSA backdoors would have their fears allayed. Is there any way it could seriously damage their business model?
felixrieseberg 10 hours ago 0 replies      
This post is a few hours old, but I just want to put it out there: We're hiring OPEN SOURCE ENGINEERS. I'm one and our job is awesome[1]. Please get in touch with me if you're interested[2].

[1]: http://instagram.com/p/yqQe0bK3Bq/
[2]: felix.rieseberg@microsoft.com

pmontra 18 hours ago 6 replies      
I wonder how this strategy is going to affect Microsoft's bottom line. People writing in CLR languages deploy web apps mainly (or only) to Windows now. They're soon going to have the option to deploy to Linux. This means less revenue from OS and DB licenses, so it looks bad. Do they expect a large number of people leaving Java, Node, Python, or Ruby and picking up C# because of the Linux deploys? Those people would probably have to buy Windows and Visual Studio licenses to code in C# in a VM, or just ditch Macs for PCs. More desktop licenses could make up for lost server ones, but if I googled correctly, a server license costs more than a desktop one. Or maybe they're playing a longer game: open source as much as they can, hope some network effect builds up, and figure out how to profit from it. In the medium term they might be losing money, though. Am I missing something obvious?
stevecalifornia 21 hours ago 3 replies      
I am really interested to see what happens once ASP.Net is running on Linux. C# and Visual Studio are fantastic, mature tools and I think a lot of developers would enjoy using them whereas they might be hesitant at the moment due to OS lock-in on the code they are writing.
josteink 1 hour ago 0 replies      
Noteworthy from the source: Microsoft does NOT use (unmergeable) SLN files for their projects, but instead scripts msbuild invocations against specific projects:



I guess this explains why they saw no need to fix the somewhat broken SLN file format in the first place, but actually did something about project files. They don't share their customers' pain on this point.

benreic 21 hours ago 5 replies      
Quick link to what I think is the most interesting class in the CLR:


sz4kerto 21 hours ago 2 replies      
Interesting commit:


"This change fixes a potential problem in unwinding on Linux"

dkarapetyan 21 hours ago 2 replies      
I really like the new Microsoft. Ballmer was really trying to run the company into the ground.
phkahler 20 hours ago 3 replies      
It's actually Free Software, not just open source. MIT license.
NicoJuicy 8 hours ago 0 replies      
This was all set in motion during Ballmer's tenure. Not by Ballmer himself, but from the inside out (there are a lot of good employees there).

Now one of the guys who pushed it is CEO of Microsoft, and we are finally seeing a (real) difference. I joined the MS community a long time ago, and this is (again) a heart-warming addition!

Good job Microsoft, you're a bit late to the party. But no doubt, the ROI will show sooner or later! ;-)

j_baker 21 hours ago 2 replies      
This is on github. Does this mean that MS is abandoning CodePlex?
Touche 21 hours ago 2 replies      
And they have Issues and Pull Requests turned on.
ixtli 20 hours ago 1 reply      
Even more reason to port all of your C/C++ code to CMake. I'm excited to see upstream contribution from MS.
saosebastiao 17 hours ago 2 replies      
So I'm still a little fuzzy as to what this means. Is this basically the same thing that Mono is trying to provide? Would it now be possible to have the F# front end link to the CoreCLR backend on Linux?
josh2600 15 hours ago 0 replies      
I made a snapshot of a CentOS 6 box with CoreCLR cloned if anyone wants to play with it: https://www.terminal.com/snapshot/f34341a1b529a9141529cda006...

Note: You'll need a terminal account to boot it, but it only takes 10 seconds to come online once you do that.

yellowapple 21 hours ago 1 reply      
MIT-licensed, even.
cm2187 17 hours ago 2 replies      
Should we expect more zero-days against .NET as a short-term effect of the source code becoming available?
giancarlostoro 21 hours ago 5 replies      
I wonder how this would aid projects such as IronPython and IronRuby if at all, just out of curiosity. My only dream is that they eventually have VS on Linux.
frik 21 hours ago 5 replies      
Next step, WinForms
_random_ 18 hours ago 0 replies      
If they could add native C#(+HTML5) support to Spartan now that would be amazing...
vinceyuan 10 hours ago 0 replies      
Great, but it's too late. .NET will remain a Windows-only technology. (Yes, I do know about Mono.) .NET has become too big and too complex; I don't think it will be easy to make it cross-platform.
F.C.C. Is Expected to Propose Regulating the Internet as a Utility
points by NearAP  1 day ago   434 comments top 32
jkn 1 day ago 10 replies      
Pretty impressed with the negativity here. Just a year ago, I remember all the stories on HN with people lamenting that the FCC doesn't have the guts to regulate broadband as a utility under title II [1,2,3] (admittedly with a minority speaking against title II, most prominently rayiner, whose comments I generally look forward to for an interesting contrarian view).

Y Combinator even published an open letter[4]. Quoting:

Title II of the Communications Act seems the most appropriate way to properly define broadband ISPs to be offering telecommunications. Speaking on behalf of Y Combinator, I'm urging you to adopt such a rule.

And here we are. It seems to me that the community has always been complaining about the different FCC approaches on the subject, and in favor of title II, until the FCC yielded. Politicians must think the tech community is a frustrating beast to work with.

[1] https://news.ycombinator.com/item?id=7750036

[2] https://news.ycombinator.com/item?id=7057634

[3] https://news.ycombinator.com/item?id=7637147

[4] http://blog.ycombinator.com/y-combinator-has-filed-an-offici...

skywhopper 1 day ago 7 replies      
The major critique quoted here comes from David Farber of Carnegie-Mellon, who worries that classifying Internet access under Title II will allow it to be taxed.

If that's the worst thing you can think of, we're in good shape.

greggyb 1 day ago 5 replies      
To me, net neutrality is pretty much a red herring.

I have had good and bad internet service in my life. The least reliable was at my somewhat rural childhood home from TWC.

The best is my current connection from that most reviled of companies, Comcast. I have had no service interruptions, and I get a reliable 50Mbps at all times of day (I check regularly with speed tests and by watching torrent activity).

The reason my internet service (and customer service experience) has been so good here is that I have two viable alternative providers, a high speed DSL carrier offering similar speeds and rates in the city, and a local fiber provider (recently introduced 10Gbps connection - yes you are reading that correctly).

While I would prefer to have no rate limiting based on usage or content, I don't view this as some inalienable right. There is a price I'm willing to pay for that service, but there is also a price low enough where I'm happy to accept rate limiting. I'd like to have the choice.

The problem seems to be that the competition which gives me the service I'm happy with and the regulation regarding whether I am even allowed to reason about my preferences as in the above paragraph keep getting tied up with one another.

To me the biggest benefit comes from having multiple options in providers. A legal monopoly who can't do rate limiting can still give me awful service. Many providers who can rate limit will most likely give me service I'm happy with, even if the plan is rate limited.

The history of utility regulation is rife with cases of legally enforced monopolies.

djrogers 1 day ago 7 replies      
Setting aside the ultimate politics and ramifications of this, it is interesting (and kind of scary) to watch the fight between a non-elected bureaucracy and our elected house of representatives to regulate the internet.

Congress seems to be completely incapable of doing more than grandstanding, while the bureaucrats may wind up dramatically increasing their own power with a single stroke of their unelected pens.

It's no wonder we have many times more regulations than laws in this country... If I got to 'regulate' who I had power over on a regular basis, there are a lot of people who would suddenly find themselves inexplicably under Title Darren. Human nature.

dkhenry 1 day ago 4 replies      
I understand the desire to regulate the internet as a utility, but are they going to actually regulate it like a utility or are they going to call it a common carrier and then just make up a few BS rules that need to be followed.

I have a sinking feeling that all this will result in is a situation where monopolies are given out to the existing players and no one is forced to do anything to upgrade the existing service.

This isn't like a water utility where the product has little variance. Even now in the marketplace there is huge variance in the quality of the product I can buy ( I am one of the lucky few with choices of providers ). I have a feeling my area is just going to get given a blanket monopoly to Comcast and I will have to deal with a crappy connection forever with no hope of another company ever gaining traction to replace them.

TheHypnotist 1 day ago 4 replies      
One part of me hates this because the idea of our government dipping their hands into what has become the major source of information, entertainment, conducting business, etc. is quite scary. We all know their tendency to fuck things up because of politics, money, stupidity, or a combination of any and all of these.

On the other hand, I look at my other utilities and realize I have absolutely no complaints. Yeah, the power goes out from time to time, but that is just an inconvenience. As it stands, my internet is just as reliable as my power in terms of outages (not including the occasional speed fluctuation).

As long as it doesn't turn into a pay for use type deal and sticks with the current model of pay for bandwidth I suppose I'd be ok with this. There are just too many different moving parts for the cynical and rational parts of my brain to agree on.

Edit: Oh, and as long as censorship never becomes a thing.

CurtHagenlocher 1 day ago 3 replies      
What a coincidence; I just posted this to my Facebook feed yesterday:

Dealing with ISPs (and mobile providers, for that matter) is a never-ending hell of time-sucking and abysmal customer service. Communications as a business has some unfortunate features that drive its pathologies. The overhead of the infrastructure is very high and the marginal cost for adding a customer is relatively low. When there's more than one option, there's very little practical difference to distinguish the competitors from each other. Whether it's wireless or wired, I suspect we're doomed to be subjected to this kind of bullshit until the businesses in question are treated and regulated as the basic utilities that they have become. I don't know anyone who's frustrated with their electrical or gas service.

kyrra 1 day ago 2 replies      
Does anyone know if this would impact the issues we saw between Netflix, Level3 and Verizon? My understanding of the original issues with Netflix and Verizon was that the interconnect between Netflix's peering company (Level 3) and Verizon wasn't big enough to handle the bandwidth that Netflix was attempting to pump out.

In this case, Verizon was just refusing to upgrade that path and wanted to force Netflix to connect with them directly (i.e., to get them to pay more money in some way).

Is that the general problem Netflix had with Verizon? If so, how would Title II help this situation? Does Title II require these various companies to maintain the interconnects between their networks and other companies out there?

bpodgursky 1 day ago 0 replies      
Probably way too late to get noticed here, but while I think the THREAT of Title II might be useful to keep ISPs in line, it would be a pretty rash move to actually reclassify broadband.

The US has painfully little competition in most areas; there is no debate about that. But new technology has been fixing that, albeit slowly, via increasingly expansive and competitive 4G, satellite, and even improvements in moving data across copper wires or existing phone lines (not going to get you 100Mb service, but quite plausibly "good enough" that you can realistically threaten Comcast with quitting service).

I don't want to get into a libertarian flame war, but the fact is, it is far easier to regulate than to deregulate when regulation becomes unnecessary. If the non-competitive market we have now is an artifact of old technology and installation costs, it would be a mistake to set up a regulatory infrastructure we won't be able to roll back.

nitinics 1 day ago 0 replies      
I wonder if autonomous local mesh networks and hyper-local mesh networks, which are essentially networks of networks, would fall under the term "Internet" and under these regulations. That would certainly stop innovation on that front(?)
sparklepoof 22 hours ago 0 replies      
Contrarians... gotta love 'em. They keep the running Looney Tunes reverse-psychology joke going. Counter-cultures are no different from the trend-following cultures they try to polarize themselves against. I guess I shouldn't have been shocked into commenting on this one by the oddly negative responses from some, but I do believe that the formulation of that negativity demands some more research. Has the internet created a society full of contrarian trolls bent on using sheer will to overcomplicate simple subjects?
supergeek133 1 day ago 20 replies      
I get what they're trying to accomplish; we all want faster, more reliable service. But as a comparison: is anyone 100% happy with their utility (power, water, etc.)?

That's where this is headed.

zkhalique 1 day ago 0 replies      
The real solution to all this is more competition. Stop the states from suing cities to prevent them from implementing municipal fiber. Open up the field to companies like Google to come in and partner with cities who can provide the last mile, competing with businesses. Even libertarians should say, "well, a city is a large organization and a giant corporation is a large organization, and both are kind of monopolies at this point..."
twoodfin 1 day ago 0 replies      
Wheeler is apparently basing the 'light touch' Title II regulation on similar regulations applied to mobile video (but not mobile data). Does anyone have examples of the applications of mobile video being so regulated (the YouTube app on my phone? probably not?), the kinds of regulations imposed, and what the results have been?
dmfdmf 23 hours ago 1 reply      
I am concerned about the implications for free speech. And I don't mean the freedom to say whatever I want as long as the FCC approves or I don't piss off a large enough political constituency. If I start a blog to post pictures of Allah, am I going to be shut down? What about forums that discuss illegal drug use? How long till the FBI, DEA, etc. petition the FCC to shut these sites down? What about sites like reddit that carry porn, post pictures of Allah AND discuss illegal drug use? I don't care about these sites, but I do care about their right to run these forums and the participants' right to discuss any issue openly, even socially unacceptable ideas.

Free speech is a political principle, but people only pay lip service to it because they don't understand it or how important it is to a free society. If the FCC starts to regulate speech on the internet (i.e. actual censorship, which only the government can practice) by defining acceptable content and shutting down websites for so-called "hate speech", requiring a government "blog" license, etc., then it is game over for our freedoms and the future of the country.

cubano 1 day ago 2 replies      
I think the radio analogy is spot on, and I really believe that the endgame to all this government interest may very well be the same sort of regulatory control that the FCC now imposes on radio stations and content via the [1] Communications Act of 1934.

This can easily be seen as a power grab by plutocrats who want to start filtering and controlling the money-making and informational aspects of the web, and while I am, of course, speculating about this outcome, I think in general it has been on the back burner for years.

You know, to protect the children and all that FUD.

[1] https://en.wikipedia.org/wiki/Communications_Act_of_1934

DanielBMarkham 1 day ago 1 reply      
As a libertarian who knows tech, I'm on the government's side here, although I'm grimacing while I'm supporting them. It's simply the lesser of two evils.

One thing to help clarify the debate if you're talking to somebody who opposes this move: do not confuse the issue of how the government should be treating utilities in general with whether or not the internet is a utility. We can have a grand old time debating the overreach of a regulatory statist society, but that has got jack squat to do with the issue at hand. Is the internet more like electricity, where you pay so much for a bucketful, and then you can do whatever you want with it? Or is it more like Star Wars, where George Lucas and Disney can charge us 17 times at 17 different rates for different versions of what is essentially the same thing? These are two different discussions to be had; do not let folks conflate them into one.

cplease 1 day ago 0 replies      
The HN title is wrong and misleading. The NYT headline is "In Net Neutrality Push, F.C.C. Is Expected to Propose Regulating Internet Service as a Utility".

That's a little different than "FCC regulating the Internet." ISPs provide internet endpoints to consumers. They should be utilities.

vandeaq 1 day ago 0 replies      
The net becomes a utility (and ISPs utility providers), and every breach, exploit, or unprotected private data leak becomes a terrorist action at prosecutorial discretion.

Something to think about.

mdev 1 day ago 0 replies      
"Today, 55 percent of online traffic happens on smartphones and tablets, according to the F.C.C." I did not know this; it's surprising to me. Mobile-first approaches make a lot more sense now.
jeffdavis 1 day ago 0 replies      
What could possibly go wrong?

Fears about big companies are legitimate. I just fear what the FCC might do a lot more.

It all seems fine now, but the benevolent regulators in charge now might be replaced by less-benevolent ones later.

joshontheweb 1 day ago 0 replies      
Will any of this matter once global satellite internet networks come into being? SpaceX and O3b seem to be aggressively working on these.
Selfcommit 1 day ago 0 replies      
I suspect the conversations in this thread will be extremely interesting 5 years from now - should this regulation come to pass.
Zigurd 1 day ago 1 reply      
That's a peculiar choice of word for an article title. The FCC can't "regulate the Internet." The FCC can regulate service providers in the US in certain limited ways.
grellas 1 day ago 2 replies      
Law does not operate in a vacuum because, in the end, it is closely tied to power - to fine, to jail, to sanction, to regulate and restrict - and that makes it scary when it becomes unhinged from a sense of principle in its application.

Is it wise, then, to grant unchecked, plenary power over the internet to the government in the name of trusting that those who currently exercise the F.C.C.'s power will exercise that power with self-imposed restraint? Lord Acton's dictum that power corrupts and absolute power corrupts absolutely comes to mind in considering the implications of this step. Once we grant that the F.C.C. has open-ended authority to do what it wants with the internet, where is the formal protection against abuse, and who will exercise it? Certainly the courts will not. The Communications Act being relied upon here certainly grants the formal authority to do this. Those who passed that Act did so 80 years ago and never contemplated that it would be so applied. But the courts will say it was for Congress to make the law and for the appointed agency to administer it within the bounds laid out by the legislature, and that means this exercise of authority will be upheld. But so too will any attempt by the F.C.C. to impose detailed regulations over pricing, usage, and all sorts of other areas that those who favor a free and open internet clearly do not desire. Once this step is taken, all formal protections against abuses of this type are gone. What, then, is the remaining form of protection? It is that we choose to trust those who exercise open-ended power to use "restraint." They assure us they would never change the way things are. They will never succumb to the power and influence of lobbyists. They will never exercise so vast a power, given to them without checks, for any corrupt motive. After all, governments worldwide and throughout history have demonstrated that they can be trusted with unchecked power without abusing their citizens. And so we can all rest easily knowing that our benign government is and will always remain in good hands and will always keep its promises.
After all, who needs the formal protections of the rule of law when you can give it all over to the discretion of leaders who will be wielding the very powers whose potential abuse we all fear? So, for those who want net neutrality at any cost, the end justifies the means, and any fear in principle of giving unchecked theoretical power to an unaccountable governmental agency goes out the window in pursuit of the immediate goal of net neutrality and in trusting current leaders who tell us that they really never intend to use all those unchecked powers. I truly hope that is so, but I am very saddened that people never learn the lessons of history about what can happen when political leaders suddenly find themselves with vast amounts of unchecked power.

The free internet we know today will be utterly dependent on their good graces. I for one am not so sanguine as others about where this may lead.

coldcode 1 day ago 1 reply      
Now we find out who has the deepest pockets.
paulhauggis 1 day ago 0 replies      
How is complete government regulation equivalent to a "free and open Internet"?

Everyone seems to think that this will just be exactly what we have now, but with more freedom and options. I don't think this will be the case.

It will open the door for any future governments to start regulating things like freedom of speech.

wsloth514 1 day ago 1 reply      
Is anyone else here afraid of this little word called, "regulate"? Maybe I am being naive.
higherpurpose 1 day ago 3 replies      
Wait, doesn't calling the Internet a utility mean all data will be metered by the MB from now on? Or does that not apply here? If it does, wouldn't that give carriers exactly what they wanted: the ability to charge video providers much more than they would, say, a news website?

The idea behind the whole net neutrality movement was to have "all you can eat plans" where all data is treated the same. What if now we get the "all data is treated the same" part, but not the "all you can eat" one?

Another issue: government spying. I know the ISPs/carriers have given the government virtually everything they've asked for, including direct access to the cables for plain-text data, plus the recent cookie tracking inserted into people's traffic. But some of them have refused to do much of it, like Sonic, and I think Google takes a similar approach, fighting for users' rights, even if Verizon and AT&T do not. So what does it mean now that the government classifies all Internet service under Title II? What can we expect in terms of surveillance? And does it make it much easier to force all ISPs to comply with certain surveillance requests?

droopybuns 1 day ago 2 replies      
Netflix traffic and some hypothetical future tech like Surgery-over-IP traffic should not be prioritized equally.
thissideup 1 day ago 2 replies      
Yeah the government's goal is totally a "free and open Internet."

Except that part of the government dedicated to vacuuming up and storing and analyzing every signal from every network-enabled device on the planet.

IanDrake 1 day ago 8 replies      
That will be a sad day. Internet technology still has a lot of room to improve before the government steps in and kills all market forces that encourage those improvements.

If you don't like your current service levels now, just wait until the government gets involved.

Turbocharged Raspberry Pi 2 unleashed
points by mmastrac  2 days ago   198 comments top 35
schappim 2 days ago 5 replies      
What's the same:

- Same form factor as the model B+ (your enclosures and daughter boards should still fit).

- Same full size HDMI port

- Same 10/100 Ethernet port

- Same CSI camera port and DSI display ports

- Same micro USB power supply connection

What has changed:

- A new turbocharged Broadcom BCM2836 900MHz quad-core system-on-chip with performance at least 6x that of the B+.

- 1GB of RAM

Source: http://raspberry.piaustralia.com.au/products/raspberry-pi-2-...

zumtar 2 days ago 1 reply      
Interesting to see a Broadcom logo and chip markings visible on the CPU now.

Love 'em or hate 'em, I suspect Broadcom are very happy about this as those chip markings are prime marketing real estate.

Considering the phenomenal success of the previous Raspberry Pi units this probably formed part of the negotiations for the CPU price.

There are of course other considerations such as trace lengths and availability of packages for both the CPU and LPDDR2 but that logo being directly in the hands of the engineers of tomorrow makes a big difference.

The previous models used a PoP (package-on-package) stack of the CPU and SDRAM; they've now moved to discrete SoC and DDR2 packages, with the DDR2 chip on the underside of the PCB.

rasz_pl 2 days ago 1 reply      
Wonder if they fixed the USB problems (DMA/internal bus bottlenecking or something). Info about the quad-core CPU is a bit sketchy: either it's one of the Cortex cores, or Broadcom tweaked the ARM11/bumped the L1 cache to the max 64KB value and/or added a proper L2 cache.

So many unknowns.


Milhouse, Team-Kodi member: "ARMv7 and NEON instructions" http://forum.kodi.tv/showthread.php?tid=217040&pid=1911780#p....

WOOHOO, now we are cooking. Broadcom is back in business: first hiring a real flesh-and-blood open-source GPU driver developer, now this. Next thing you know they will open up the DSI port specification or something :o
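If the "ARMv7 and NEON instructions" claim holds up, one quick sanity check on a running board is whether "neon" appears in the Features line of /proc/cpuinfo. A hedged sketch; the sample text below is illustrative, not captured from real Pi 2 hardware:

```python
def has_neon(cpuinfo_text):
    """Return True if any Features/flags line in a cpuinfo dump lists 'neon'."""
    for line in cpuinfo_text.splitlines():
        key, _, value = line.partition(":")
        if key.strip().lower() in ("features", "flags"):
            if "neon" in value.lower().split():
                return True
    return False

# Illustrative cpuinfo excerpt for an ARMv7 core with NEON:
sample = (
    "processor\t: 0\n"
    "model name\t: ARMv7 Processor rev 5 (v7l)\n"
    "Features\t: half thumb fastmult vfp edsp neon vfpv3 tls vfpv4"
)
print(has_neon(sample))  # → True
```

On an actual board you'd pass in `open("/proc/cpuinfo").read()` instead of the sample string.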

tbrock 2 days ago 6 replies      
Why would anyone buy this over the much more powerful ARMv7 odroid-C1 for $35 that has been out for months now?
Al__Dante 2 days ago 3 replies      
It's a 900MHz quad-core ARM Cortex-A7 CPU. ARMv7, it's official: http://www.raspberrypi.org/raspberry-pi-2-on-sale/
nathan_f77 2 days ago 4 replies      
Awesome!! I just wish they would release one with onboard Wifi instead of ethernet. I have no statistics to back this up, but I'm pretty sure USB Wifi adapters are the most common accessory.
lucaspiller 2 days ago 0 replies      
Here is the official announcement:


It has an ARMv7 processor.

seba_dos1 2 days ago 2 replies      
The most important info is missing: does it still require a non-free blob to even just boot Linux?
wyldfire 2 days ago 3 replies      
bayesianhorse 2 days ago 2 replies      
Any news on whether the Raspberry Pi 2 will have USB 3.0 ports? That seems to be a good selling point for the Odroid XU3 currently...

USB 3.0 ports would mean better performance as a NAS or multimedia recording solution, as well as alleviating some of the current troubles with USB performance.

WhitneyLand 2 days ago 7 replies      
ARMv6 still? I don't see how that's necessary to save on the BOM. I hope legacy code was not the deciding factor here.
rcarmo 2 days ago 0 replies      
Well well. I'm going to hold off and wait until someone can benchmark this against the ODROID-C1, though, largely because the original Pi design had a few shortcuts and I'd like to avoid any teething issues (I've been using ODROIDs for a while, got full hardware support in Ubuntu and Android, and don't have to put up with Debian 7.x).
wyager 2 days ago 2 replies      
I wish they would release one with DMA based gigabit Ethernet. That always seems to be the bottleneck for me.
20kleagues 2 days ago 1 reply      
Finally, more power for graphics. I can see some interesting VR projects coming with the extra horsepower, especially since VR is THE thing right now. And the Pi is light enough for head-mounting!
yAnonymous 2 days ago 1 reply      
ARMv7... so Ubuntu will finally run on this. No more hours of compiling software.
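For anyone scripting installs across old and new Pis, a rough (hypothetical) way to tell the boards apart is the machine string from `uname -m`, exposed in Python as `platform.machine()`: the original Pi reports armv6l, while a Cortex-A7 board should report armv7l. A sketch under those assumptions; the label strings are made up for illustration:

```python
import platform

def classify_arch(machine):
    """Map a `uname -m` machine string to rough distro compatibility."""
    if machine in ("armv7l", "armv8l", "aarch64"):
        return "armv7-compatible"   # stock armhf distros like Ubuntu should run
    if machine == "armv6l":
        return "armv6-only"         # original Pi: Raspbian-style builds required
    return "not-arm"

print(classify_arch(platform.machine()))
```

Run on an original Pi this would print "armv6-only"; on a desktop it prints "not-arm".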
sagarm 2 days ago 0 replies      
The only place I've found to buy this thing so far is here: http://uk.rs-online.com/web/p/processor-microcontroller-deve...

Where can I buy this thing in the US?

Any info on whether there will be an RPi-compatible Python library for accessing GPIO?
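No official word on a GPIO library yet; assuming the existing RPi.GPIO package carries over from the B+ unchanged (an assumption, as is the BCM pin choice below), an LED-blink sketch might look like this:

```python
import time

# RPi.GPIO is only installed on a Pi; guard the import so the helper logic
# below can still be exercised elsewhere.
try:
    import RPi.GPIO as GPIO
    HAVE_GPIO = True
except ImportError:
    HAVE_GPIO = False

LED_PIN = 18  # hypothetical BCM pin choice

def blink_schedule(cycles, on_s=0.5, off_s=0.5):
    """Return the (state, delay) pairs a blink loop would step through."""
    return [(state, on_s if state else off_s)
            for _ in range(cycles)
            for state in (1, 0)]

if HAVE_GPIO:
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(LED_PIN, GPIO.OUT)
    for state, delay in blink_schedule(3):
        GPIO.output(LED_PIN, state)
        time.sleep(delay)
    GPIO.cleanup()
```

Whether the same package works on the new SoC depends on the library being updated for the BCM2836's peripheral base address, which is unconfirmed here.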

aceperry 2 days ago 0 replies      
Nice update, and still at the $35 price point. Too bad it's still ARMv6. It would be nice to have ARMv7 which is where all of the high performance stuff is.

Edit. I take that back, it will be ARMv7 arch. Other articles state that the new Broadcom SOC will be using A7s. That makes it huge for this price point.

bugsense 1 day ago 0 replies      
Install resin.io on it and smile :) I could even use them as light-weight app servers :)
mrmondo 2 days ago 0 replies      
Still no GbE which is a massive disappointment for me.
majc2 2 days ago 1 reply      
Will it run some version of flash? (and yes I tried gnash to no avail).

I spent the weekend explaining to an 8 and 6 y/o that they can't share and look at other Scratch projects at scratch.mit.edu on the RPi :(

AceJohnny2 2 days ago 2 replies      
ARM11/(ARMv6)? I thought that core was a dead-end that was quickly replaced by the Cortex-A (ARMv7) series. Surprised to see ARMv6 in new hardware in this age...
lmedinas 2 days ago 0 replies      
This update is very welcome, and it will surely make people create more interesting "projects". Personally, I'm very excited to get one of these to run ownCloud at decent performance in my home. The possibility of installing more Linux distributions, or even Windows 10, will also open up many possibilities for development. Well done!!
ryan-allen 2 days ago 2 replies      
While a bit more expensive, I purchased a Hummingboard [1] instead of a Pi, and the performance was great! I had a NodeJS/Postgresql app running on it for kicks. I was surprised at how quick it was.

[1] http://www.solid-run.com/products/hummingboard/linux-sbc-spe...

cpks 1 day ago 1 reply      
I really want 2GB. I'm surprised there isn't a market of Pi-like devices with compatible software but higher price points and higher specs.
Thiz 2 days ago 1 reply      
Remove the ethernet port. Remove the double-stacked USB ports. Use single rows of micro USB or Lightning, 4 on one side. Make it flatter, simpler, more beautiful. Encase it in aluminum. Steve Jobs would do it and shove it down our throats. And we would love it.

Or at least make it a premium choice, and make extra money from those who love beauty.

Gnewt 2 days ago 4 replies      
Anyone know where this can be purchased now?
phkahler 2 days ago 0 replies      
How is the wayland/weston support going? This thing should be quite capable now.
SSilver2k2 2 days ago 0 replies      
Excited to see how well this will help my distro PiPlay (and PetRock's RetroPie).

Can't wait to get my hands on the hardware


ropable 2 days ago 0 replies      
Maybe the Kano will be able to perform worth a damn, now (post-upgrade).
programminggeek 2 days ago 4 replies      
Am I weird for wanting to use this for really simple web hosting?
davexunit 2 days ago 1 reply      
It still requires proprietary firmware to function at all. No thanks. Please fix this serious flaw!

Edit: And to make things worse, they are giving away gratis Windows 10 licenses for it!

gregorymichael 1 day ago 1 reply      
How do I purchase this in the US?
exabrial 2 days ago 1 reply      
Need gigabit!
higherpurpose 2 days ago 0 replies      
Why didn't they just skip to Cortex A53? It would've been worth waiting a year longer for that.
brudgers 2 days ago 2 replies      
The headline gave me hope. Alas no button on the front for overclocking. I am wistfully disappointed.
Google Is Developing Its Own Uber Competitor
points by geoffwoo  1 day ago   275 comments top 59
dojomouse 1 day ago 3 replies      
So much criticism of Google! My question: why on earth would Google want to cede any significant chunk of the personal transport value stream to Uber, or acquire them at a $40B+ valuation, when the main (only?) semi-uniquely valuable thing Uber have (a large network of drivers) is worth approximately zero dollars in a self-driving vehicle scenario? Uber are valuable and effective in a human-driven vehicle model because they have achieved critical mass in the pool of human drivers on their network, and are in a position to grow that pool. In a self-driving vehicle model, critical mass of vehicles/drivers is available to anyone with a decent line of credit.
softdev12 1 day ago 10 replies      
I'm glad that the article highlighted how valuable the mapping information is that Google collects. While Uber probably could still run its current operations without Google map data, there is a much smaller probability that they would be able to run a self-driving car service without a highly accurate mapping product. And building a mapping product is really difficult and expensive. I was surprised when Google started putting money into mapping the streets because it's a giant effort with major barriers to entry. Nokia bought a company called Navteq that was the clear leader (and basically a monopolist) in driving the streets and collecting the data (they originally supplied data to Google Maps). Navteq supplemented the data with satellite images and other sources, but the roads change often enough that you need people driving on the streets.

So, Uber really needs to start thinking about the mapping aspect. Perhaps they can incent their current drivers to start reporting back all the street info as they pick up their passengers. Of course, this would bring up problems if the drivers eventually realize that this data could be used to replace them. It would be interesting to see uber cars with Google StreetView style cameras on their hoods.

hosay123 1 day ago 5 replies      
My inner paranoid finds this interesting from the perspective of Google entering yet another domain where they have high-accuracy data on the present/future whereabouts and private concerns of a large number of people. Add it to the hundred other properties they maintain that appear to have no direct business value other than capturing masses of sensitive data that was previously nicely decentralized and private.

Can't book a flight (ITA), order a taxi (this), book a hotel or chat with a friend (Gmail), or pay for dinner (Wallet) without generating an activity log with a single company.

Even if (and perhaps probably) Google isn't doing this intentionally, they've already demonstrated, through failing to encrypt their inter-DC connections, how they're becoming a massive single point of failure (remember Snowden showed us the NSA was tapping Google's internal network already). Whether the end result is an intelligence-service tap or some legislative measure affecting the company done in the open, I'll simply never be comfortable with one company concentrating so much personal data affecting so many people.

fidotron 1 day ago 2 replies      
Uber have struck me before as a bigger potential headache for Google than even Facebook were/are, and I suspect this is all an effort to acquire them and drive the price down.

There is something about the automated dispatch at giant scale business which overlaps with the search as interface idea. "Get me to [wherever]" is a natural extension of what you might do after searching. Furthermore, the act of finding a driver is itself a search. Feed the driver information back into search and it gets entertaining, with queries like "What are the restaurants right now with 80% rating, tables available and will cost me less than $20 to get to?" That's a query you can only answer with the driver and price data. As such Uber have the hard bit and can grow the rest, Google face a move into a physical world full of people, which is pretty much their Achilles heel.

Vermeulen 1 day ago 2 replies      
Google seems unable to have a partner without eventually entering their market and becoming enemies: Apple before Android, Twitter before Google+. I guess it's inevitable given how many different industries Google tries to tackle. If this service is entirely defined by autonomous vehicles, it's really a completely different service from Uber: no more driver or passenger ratings, all the same car type, and its own vast legal challenges.
bcantrill 1 day ago 2 replies      
This is not at all a surprise, and I fully expect Amazon to also ultimately enter this space. (I have no insider knowledge of either company.) It's amazing to me how many people think that Uber is somehow building a deep moat when these other companies (Google, Amazon, etc.) have a much deeper connection with their customers -- to say nothing of the data they have collected. Given perfect rider competition and (especially) perfect driver competition, how does the advantage not lie with the established company and brand? Given Uber's nose-bleed valuation, I suspect that they may become the Webvan-esque poster child of this bubble: visionary, but ultimately a ludicrous valuation and absurd misallocation of capital that was obvious to all only in retrospect.
k-mcgrady 1 day ago 5 replies      
If Google competes directly with a company they are invested in via GV, they're going to really damage GV's reputation. Why would you accept investment from them if they're going to get access to your private info and then turn around and screw you?
billsimpson 1 day ago 0 replies      
Many people here are suggesting that Google views Uber as a threat or a competitor. My take is that (a) Google wants to follow through on its self-driving car experiment, (b) public transportation would be a natural fit for this product, and (c) Google doesn't want to be reliant on Uber, Lyft, or any other middleman for introducing self-driving cars to the world. Initially, it will be a tightly-controlled roll-out that will eliminate as many variables and risk factors as possible.

This is their way of getting a head start in that direction, and smoothing the transition of the technology to partners like Uber down the road.

sushirain 18 hours ago 0 replies      
The key words are "long term". I am concerned that these obstacles will not be overcome in "3 to 5 years":

* "Chris Urmson of Google has said that the lidar technology cannot spot potholes or humans, such as a police officer, signaling the car to stop." [1]

* "Another big problem for Google is the current cost of its driverless car, which is reportedly outfitted with a whopping $250,000 in equipment." [2]

[1] http://en.wikipedia.org/wiki/Google_driverless_car

[2] http://www.fool.com/investing/general/2015/01/21/3-reasons-g...

ajju 1 day ago 2 replies      
Google is the only company with a potential edge over Uber right now. The point in time at which self-driving cars are usable by the public is the only visible inflection point where Uber's hegemony is truly threatened.

(edit) The article suggests self-driving cars, and by extension Google's ridesharing service, won't be ready for 2-5 more years.

xivzgrev 1 day ago 2 replies      
I'm really surprised. Google Ventures puts a ton of money into Uber, Uber makes a huge pre-order for Google cars, Uber gets put into Google Maps.

And now this. I guess google is like "well we got the hard part, self driving cars. Why not just go after this ourselves?"

What is the thing Google really wants? Is it more valuable to try to directly monetize all this vehicle traffic, or to be the platform everyone else uses? Google could set up a rev share, get access to all the vehicle data (even if partners don't use Google Maps), and set Google software as the default in-car web experience.

It seems greedy, arrogant, and unaligned with their core ways of making money. They're trying to be a vertically integrated player, which can work if you control all the supply, but once Google introduces the tech, how long is it going to take for a rival to copy it, patents aside?

relaxatorium 1 day ago 1 reply      
They've well and truly inherited the 90s Microsoft banner of being the tech company that wants to be in every line of business at all times.
rottyguy 1 day ago 1 reply      
Am I the only one thinking of a targeted convergence with driverless cars down the road (no pun intended)? Once you remove the largest cost factor (the human), does pricing reduce to $0.56/mi (or whatever the standard mileage rate, or a derivative thereof, is at the time)?

Add in Express to fold in deliveries and maximize utilization (maybe offer riders a discount if they can make an Express stop along the way...)

bhaumik 1 day ago 0 replies      
Google replied to the Bloomberg article with a cryptic tweet.

"@business We think you'll find Uber and Lyft work quite well. We use them all the time."


geoffwoo 1 day ago 8 replies      
Google could cut Uber down hard by pulling Google Maps API access. Ballsiest play Google has ever done if they execute on this. Love it.
rottyguy 1 day ago 0 replies      
I'm sure this has been discussed on a different (Uber/Lyft) thread, but being a NYC'er and amazed at the prices of taxi medallions through the years, I'm glad to see this bubble finally popping (prices are still insane, mind you)! Now if we can do the same for the cost of higher education... With Google coming into the fray, prices can only drop further.


wkcamp 1 day ago 0 replies      
Out of curiosity, are there any companies in history that have modeled the approach Google has taken thus far, expanding into a vast number of industries and successfully becoming a contender in them?

Anyway, I'm sure Google has enough money to support any failures (this point was made in a previous article about the multi-tool card). But, in the long run, how will Google prevent itself from looking like a monopoly (such as Apple's iPhone in the earlier days)?

dannymick 1 day ago 3 replies      
You can talk about valuations or Google becoming a cheeky competitor to Uber, but there are some deep economic implications. Millions of cab drivers, and anyone else who drives for a living, will lose their jobs as a result of this.
haberman 1 day ago 4 replies      
> David Drummond, Google's chief legal officer and senior vice president of corporate development, joined the Uber board of directors in 2013, and has served on it ever since.

Not super related to this story, but I always have to wonder: why do powerful people join boards? What's in it for them? It seems like an awful lot of work, responsibility, and potential for conflict of interest. What do board members get out of it?

don_draper 1 day ago 0 replies      
The very definition of being a victim of their own success
kanamekun 1 day ago 0 replies      
Makes a lot more sense now why they bought Waze. It wasn't just to snatch it out of the hands of Apple and Facebook to protect the Google Maps franchise. It was to play keepaway from Uber, and to prepare for the eventual launch of Google (Autonomous) Car.
andy_ppp 1 day ago 0 replies      
I misread the headline; I actually think that all big companies should attempt to set up their own competitor (with a very small flat team, but resources, users, internal datasets etc.). It would be a big investment but would cement monopoly positions and make it even harder for competitors to gain any sort of traction.

A new search engine from Google written with different goals and views of how things should be done would be very interesting and probably gain a good portion of Bing and Yahoo! Users in the process. This applies to a lot of businesses of course.

An Uber from Google could also be great too but their once clear idea of what they are is getting fragmented and that'll show in the implementation and UX of Google Cabs.

mathattack 1 day ago 1 reply      
I look at this with great anticipation. This will be an enormous reducer of congestion. Driverless cars will remove many of the headaches and hassles of high-density commutes. Imagine being able to pop a beer in the backseat as an automated car drives you home.
obilgic 1 day ago 1 reply      
Google maps has already started showing Uber price estimates when you enter a route
CPLX 1 day ago 1 reply      
Every single time there is a big discussion thread about Uber on HN about half the commentary is about self-driving cars and how various decisions are being made now based on the inevitability of self driving cars. Every time this happens I always wonder why nobody seems to recognize the obvious fact that self driving cars have nothing to do with any of this stuff and won't make economic sense for a company like Uber in any time horizon that it makes business sense to take into account now.
domoarevil 1 day ago 0 replies      
Why don't G just _block_ all Uber drivers from utilizing Google Maps, forcing them to an inferior Mapping/Route product, lowering the quality of the Uber experience. (Only Uber'd a few times and I'm sure the drivers had G maps, but I may be wrong, I was surely drunk.)
bbody 1 day ago 0 replies      
You didn't read the article did you?
tkrupicka 1 day ago 0 replies      
This concept seems like the next logical step for Google's push for mapping/autonomous vehicles. I think the most interesting challenge they'll have going forward is dealing with the same union/labor opposition that Uber has had to deal with. If they can push their driverless technology it would really change the landscape of adoption in cities.
jackgavigan 1 day ago 0 replies      
Meanwhile, Uber is partnering with Carnegie Mellon University to build self-driving cars: http://blog.uber.com/carnegie-mellon

HN discussion: https://news.ycombinator.com/item?id=8987441

miguelrochefort 1 day ago 1 reply      
Is there anything special about Uber, other than its brand?

How hard can it be to implement an Uber clone? Like a week?

What's stopping anyone from entering this space? Is it technology? Marketing? Regulation? Trust? A secret sauce?

I'd love to understand.

comrade1 1 day ago 4 replies      
On the one hand, this is great. Competition is good, and I couldn't wish Google's infinite pockets on a more terrible group of people.

On the other hand, Google has only been successful with search and advertising and are known for terrible customer service. I mean, who are you going to call when your driver rapes you? You'll never be able to talk to a human.

xixixao 1 day ago 1 reply      
I also don't understand how this is news to anyone. The fact that Google wasn't planning on selling its driverless cars but instead providing a service was discussed months ago, including the fact that the service would be purchased via a phone. I'm especially doubtful that Uber's board wouldn't have noticed this...
deepGem 1 day ago 1 reply      
Regarding mapping - one overlooked aspect is the fine grain detail required for driverless cars. For example - current precision is for a road, required precision for driverless cars is for a lane. Not sure how Google can leverage Uber to solve this.
IgorPartola 1 day ago 0 replies      
Is the long game here to have self-driving cars everywhere so that the driver can spend more time on their phone and therefore clicking on their ads?
avodonosov 1 day ago 0 replies      
Should Facebook be afraid of Google Plus, and Amazon of Google App Engine?

Google wants to be everywhere, but it looks like it doesn't have enough good developers.

I think Uber has a chance to overcome Google, even if Google has really decided to compete in this area.

S_A_P 1 day ago 0 replies      
Just think what a cash cow it would be to have google self driving cars run an uber like service...
kunle 1 day ago 0 replies      
Imagining a shorter path here would be Google buying Lyft and integrating into maps?
patronagezero 1 day ago 0 replies      
Your very own government sponsored taxi, less city and state control and now with improved tracking and information awareness! Red or blue pill, they'll choke 'em both down with a few sips of progress.
bhartzer 1 day ago 1 reply      
What are the chances that Google's "uber" program will involve self-driving cars?
shinamee 22 hours ago 0 replies      
It actually does make sense, since they have all the resources (maps, cars, money).
msoad 1 day ago 0 replies      
Google proved that it can do logistics-heavy business with its Google Express. I'm sure if they do something like Uber, it would be as good as Google Express.
Animats 1 day ago 0 replies      
Google General Services. Whatever you want done, whenever you want it done, Google will find someone to do it.

"We also walk dogs."

rmc 1 day ago 0 replies      
Maybe it will be as good as Google's Facebook competitor
sparkzilla 1 day ago 1 reply      
Meanwhile Google's core search business is looking increasingly vulnerable.
akurilin 1 day ago 0 replies      
Either way, I will be happy to be driven around by my digital overlords.
cjbenedikt 21 hours ago 1 reply      
makes you wonder if Uber disclosed this to Goldman or their clients when raising money???
zkhalique 1 day ago 0 replies      
How much can Google squeeze its Maps customers without antitrust kicking in? Could they just cut off Uber the same way FB or Twitter or Apple can revoke an app's access to their platform?
kumarski 1 day ago 0 replies      
hahahahaha. Good luck google.

Anyone game to take longbets on Google building a successful product internally and scaling it with their traditional approach?

I'm always doubtful.

jbigelow76 1 day ago 0 replies      
Will I have to have a Google+ account to ride?
oimaz 1 day ago 0 replies      
Do partners from Google Ventures have a board seat at Uber? If so, isn't this a conflict, as Google is privy to all the internal strategy at Uber?
BobMarz 1 day ago 0 replies      
Too bad Google Drive is taken.
nakedrobot2 1 day ago 0 replies      
Considering how many people hate Uber, I can't imagine much sympathy being headed in their direction with this news.
joering2 1 day ago 1 reply      
Imagine you wake up Monday morning and your Android tells you that it automatically upgraded (the beauty of default settings) the default Google Maps app on your phone; you tap it out of curiosity, and it asks you, "Would you like the cab downstairs in 5 minutes?"

Oh well; there goes Uber's $50 billion valuation...

FrankenPC 1 day ago 0 replies      
Guber. I like it!
inmyunix 1 day ago 1 reply      
this is the least surprising news of the decade.
closetnerd 1 day ago 0 replies      
Good god, that'd be way too much power in the hands of a single company. They're awfully close to Googlizing the world, which I think is theoretically the closest civilization could get to utopia. But I'm not one for utopia.
What Color Is Your Function?
points by jashkenas  2 days ago   129 comments top 26
tel 1 day ago 3 replies      
We can massively generalize this by calling "blue" "pure" and "red" "impure". The end result is essentially Haskell (but you can take it much further, too!).


There's something horrifyingly restrictive about having merely (blue -> pure, red -> impure), though. All "blue" functions behave roughly the same (e.g., very, very nicely and compositionally) but "red" functions come in many classes: async, stateful, exception-throwing, non-deterministic, CPS'd, IO-effectful, combinations of the prior.

What we want is a nice way of representing different kinds of red functions and how they all interact.

What we'd also like is a nice way of composing different kinds of red functions so that the bundled piece has a sensible kind of redness too and we can keep composing.

And this is exactly monads and monad transformers.

There are other ways to achieve this end as well all under the general name "Effect Typing". Very cool stuff.

But what I'd like to emphasize is that Java/C#/Go have not solved this larger problem. They each introduce a fixed number of "rednesses" and very specific ways that different kinds of red can be connected together. Monads are a generalizable solution.

The situation is exactly the same as HOFs themselves. We've had subroutines and functions for a long time, but first-order and higher-order functions are a marked increase in power since you can now refer to these "function things" directly.

Monads take the notion of "CPS Transform" and allow you to refer to it directly, to mark sections of code which use it and compose them intelligently. They allow you to invent your own kinds of redness on the fly and ensure that your custom colorations compose just as nicely as the built-in ones.

If this article is even slightly interesting then you owe it to yourself to learn much more about effect typing. It'll change your world.

(That and sum types, because nobody is served by a bunch of integers named Token.LEFT_BRACKET.)
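tel's point that custom "rednesses" can be made composable is easiest to see with a concrete sketch. Here is a minimal, hypothetical Result type in JavaScript (the names Ok/Err/chain are illustrative, not from any library) whose chain function plays the role of monadic bind for the "can fail" coloring:

```javascript
// A minimal Result "monad": Ok(value) or Err(message).
// chain() only runs the next step on success, so error handling
// (one kind of "redness") composes without explicit checks.
const Ok = (value) => ({ ok: true, value });
const Err = (message) => ({ ok: false, message });

const chain = (result, fn) => (result.ok ? fn(result.value) : result);

// Two "red" (failure-capable) functions:
const parseNum = (s) => (isNaN(Number(s)) ? Err("not a number") : Ok(Number(s)));
const recip = (n) => (n === 0 ? Err("divide by zero") : Ok(1 / n));

// Composition stays flat; the first Err short-circuits the rest.
const result = chain(parseNum("4"), recip);    // Ok with value 0.25
const failed = chain(parseNum("zero"), recip); // Err("not a number")
```

The same chain helper works unchanged for any other "coloring" whose steps return Ok/Err, which is the composability the comment is pointing at.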

overgard 1 day ago 2 replies      
I'm really out on most of the "async" stuff, after having used it. (Mostly in Node and Tornado)

Remember in the early 90s when Windows and Mac OS were "cooperatively" multitasked? Which is to say, you had to explicitly yield to allow other applications to run (or risk locking up the entire system). And then it was replaced with pre-emptive multitasking, which allowed the scheduler to figure out what process deserved CPU time, while allowing the programmer not to have to think about it. You could call a blocking IO function, and the OS would just go do something else while you waited.

All this "async" stuff seems like a return of cooperative multitasking, only worse. Not only do I have to explicitly yield, but now it's to some event loop that can't even properly use multiple cores, or keep a coherent stack trace. It's a nightmare to debug. It's theoretically fast... except if one request forgets to yield, it can clog up the entire thing. I guess you use multiple processes for that and a dispatcher, but at that point you've basically reinvented preemptive multitasking... badly.

Threads aren't perfect, but excluding STM and actor models they definitely suck the least.

jenius 1 day ago 2 replies      
So I've been writing javascript full time for a couple years at this point, client, server, and open source, and what I have adopted is coercing everything into promises, which I suppose would be the author's way of saying making everything red.

If you have something that is not async mixed in with something that's async, you can still add it to the promise chain and it will resolve right away. If you have a library that uses callbacks or some other convention, you can just wrap it so that it now uses promises. And of course you can always look for alternate libraries that use promises from the start and skip that step as well.

I've found that using promises for everything works super well. There is no confusion or doubt at all. Everything has the potential to branch into async at any time with no consequences and without complicating the flow. And an additional benefit is that rather than checking for errors after every operation you do, you choose where to check for errors. When a promise rejects, it skips everything else in the chain until it gets to a catch. So rather than running 4 async operations and doing an "if error do this" after each operation kind of deal, you can catch the error in one place and handle it once. Promises suppress the error in the promise chain until you choose to handle it, which is dangerous if you don't understand how promises work, but really useful once you do.

There are really solid promise-based libraries for all common operations in node right now. When.js for general promises, composition, and coercion, rest.js for network requests, bookshelf and knex for database connection and orm stuff, etc. If you are a js developer, give them a shot!

Don't get me wrong, I'm not trying to claim that this is better than any other language-level construct by any means, but if you are working in javascript, where you have to deal with javascript's limitations as a language, from experience I can say that working in an all-promises environment makes things quite pleasant.
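A rough sketch of the coerce-everything-to-promises style described above (fetchConfig and parseRetries are made-up stand-ins, not part of When.js or any real library):

```javascript
// Promise.resolve() coerces plain values (and thenables) into promises,
// so sync and async steps mix freely in one chain.
const fetchConfig = () => Promise.resolve({ retries: 3 }); // pretend-async
const parseRetries = (config) => config.retries;           // plain sync

const pipeline = Promise.resolve()
  .then(fetchConfig)
  .then(parseRetries)   // a sync step joins the chain transparently
  .then((n) => n * 2)
  .catch((err) => {
    // A rejection anywhere above skips straight here.
    console.error("pipeline failed:", err);
    return -1;
  });
```

Any step can later become async (return a promise) without changing the shape of the chain, which is the "no consequences" property the comment describes.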

adrusi 1 day ago 3 replies      
A lot of commentors are mentioning that this is just a specific case of effect typing. Haskell and monads have been brought up as an example of effects typing, but I'd like to present another example that more closely resemble familiar static type systems.

Nim[1], at least at one point (I'm looking at the current manual and can't find it documented), had support for tagging functions with a pragma and the compiler would enforce that functions without the pragma can't call function with the pragma outside of a special block. The compiler interpreted certain pragmas like "impure" and "exception" in a special way, outputting a warning when certain language features were used inside functions marked with the pragma. The language manual shows that the compiler still at least supports these special pragmas. It's possible that it never supported custom pragmas and I'm just misremembering.

Interestingly, the author dismisses promises as not a major improvement and calls async/await and generators at least a half-way solution. It turns out that they are just a simple syntactic transform that isn't powerful enough to express everything that coroutines can. Promises, on the other hand, can. Promises are actually a monad: `.then(...)` is the bind function (`>>=`). This is essentially how the IO monad in Haskell works.

[1]: https://nim-lang.org/
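The monad claim above hinges on `.then` flattening a returned promise rather than nesting it, much like `>>=`. A minimal sketch (note that, strictly speaking, `.then` merges map and bind and auto-flattens thenables, so promises bend the monad laws, but the composition pattern is the same):

```javascript
// A "red" function: number -> promise of number.
const double = (n) => Promise.resolve(n * 2);

// Returning a promise from .then does NOT produce a nested
// Promise<Promise<number>>; .then unwraps it, like monadic bind.
const chained = Promise.resolve(5)
  .then(double)  // promise of 10, not a promise of a promise
  .then(double); // promise of 20
```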

malfist 1 day ago 3 replies      
I had no clue this was about async functions. I assumed it was safe/unsafe functions until I got to the part about it not being. I think that is a much stronger issue than sync/async.
gfxmonk 1 day ago 1 reply      
It seems like a lot of people are interested in fixing this, and would be keen to see a solution. I believe StratifiedJS is precisely that solution (for JS at least), and it has existed in working form for years: http://stratifiedjs.org/ (it's not just an experiment - it's remarkably stable).

StratifiedJS completely eliminates the sync/async distinction at the syntax level, with the compiler/runtime managing continuations automatically. A `map` function in SJS works just like you would want, regardless of whether the mapping function is synchronous or not.

In addition, it provides _structured_ forms of concurrency at the language level, as well as cancellation (which most async APIs don't support).

Disclosure: I work on StratifiedJS, and just wish more people knew about it.

aidos 1 day ago 0 replies      
For those that haven't noticed this article is by the chap who wrote the absolutely wonderful http://gameprogrammingpatterns.com/
dvirsky 1 day ago 2 replies      
Tornado made async code slightly less painful by using yield and coroutines, but you still have to run blocking methods on thread pools using futures. They abstracted it really nicely and I can now write clean code if I need an occasional blocking library in my tornado code.

But after writing tons of Go over the past 2-3 years, going back to async code, even with the tornado sugar, just feels like driving a manual car after getting used to automatic. It's just redundant. I've seen better, I've written way cleaner code and got better concurrency. Promises, futures, yielded generators - they are all syntactic hacks. The only language I've used that really addresses this properly is Go (disclaimer: I haven't written any Erlang).

viewer5 1 day ago 2 replies      
I don't have anything of substance to add, but author, if you're reading this, I enjoyed your writing style a lot.
echoless 1 day ago 1 reply      
No actor-model based language has this problem, so perhaps all it comes down to is baking in the right(or even any decent) concurrency support from the start, at the language level.
Nilzor 1 day ago 9 replies      
His solution is threads? Really? Has he read no history? There are problems with threads. That's why async I/O is hot right now. Threads are a limited resource. Threads are expensive to create and dispose of. Context switches are expensive. Threads must be synchronized. Threads can have race conditions.

Good rant, but I didn't expect him to serve such a shallow conclusion after a solid and insightful introduction.

pka 1 day ago 0 replies      
[1] is a nice read regarding continuations (and the Cont monad), though a bit more advanced.

[1] http://blog.sigfpe.com/2008/12/mother-of-all-monads.html

kmike84 1 day ago 1 reply      
https://glyph.twistedmatrix.com/2014/02/unyielding.html is a good read - there are reasons why explicit sync-async "coloring" (i.e. await/yield) is better than green threads/coroutines which author admires.
anonymoushn 1 day ago 1 reply      
I don't really consider this solved in languages or runtimes that lack green threads. If you want to make 300,000 threads in Lua or Go, go right ahead, but if you port that application to Java you're going to have a bad time.

An orthogonal useful thing that is sometimes not solved in languages with green threads is the ability to copy continuations. If you have call/cc or coroutine.clone, you can e.g. use rollback netcode in your fighting game and store state in execution states, but if you cannot copy execution states, you will have to choose one or the other.

dwenzek 1 day ago 0 replies      
A related way of composing code is railway oriented programming already commented on HN (1)

Underneath there are monads, sure, but the author has deliberately chosen a more mechanical metaphor which may help those for whom monads sound too abstract.

The post focuses more on how to handle errors than on asynchrony, but it shows well the key steps to lift blue functions into red ones and to compose these constructions.

(1) https://news.ycombinator.com/item?id=7887134

Roboprog 23 hours ago 1 reply      
Funny. I thought he was talking about Java 8 for a while. We (eventually, when the rest of the stack catches up) get functors / lambdas, but:

* red = instance methods.

* blue = static methods, which cannot call an instance method.

z3t4 1 day ago 2 replies      
I don't get it... But hey, it took about six months for me to figure out how to write asynchronous JavaScript... The key is to not use anonymous functions; it will flatten out the "Christmas tree" of callbacks. And it makes it possible to read what the code does, or at least what the programmer wants the code to do. It's much better than "then", then what, but, then, why complicate things when it's actually possible to be verbose.
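The named-callback style this comment describes might look roughly like this (readUser and loadPosts are hypothetical stand-ins for some callback-based API):

```javascript
// Stand-in async APIs (illustrative names, not a real library):
function readUser(id, cb) { setTimeout(() => cb({ id, name: "ada" }), 0); }
function loadPosts(user, cb) { setTimeout(() => cb([user.name + "'s first post"]), 0); }

let loadedPosts = null;

// Naming each step flattens the nested-anonymous-callback pyramid
// and documents what each stage does:
function onUser(user) { loadPosts(user, onPosts); }
function onPosts(posts) { loadedPosts = posts; }

readUser(42, onUser);
```

Each step reads top-to-bottom at one indentation level instead of marching rightward, and a stack trace at least names the stage that failed.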
fixermark 1 day ago 0 replies      
Sadly, you can build a Java API to introduce callback hell if you want to.


It's nice you don't have to, though.

Paradigma11 20 hours ago 0 replies      
Relevant paper: http://www.info.ucl.ac.be/~pvr/VanRoyChapter.pdf (especially p34+ regarding concurrency paradigms).
totony 1 day ago 2 replies      
You can always make a function that syncs all async operations: sleep until a global variable is changed by the callback.

A pain, but still not that bad.

StrykerKKD 1 day ago 1 reply      
What about isolates in Dart? I mean, isolates are isolated processes, which can also be threads, and they can communicate with each other.
mmphosis 1 day ago 0 replies      

  POLLIT      CMP $C050
              BNE POLLIT
It's not a function, it's a procedure. It doesn't need to return anything, it produces an effect. Interrupts will break it.

   await until an as yet indeterminable time in the future

dmitrig01 1 day ago 0 replies      
Interesting allegory. Pretty much the exact same thing could be said about pure/impure functions in a language like Haskell -- which is where I thought this was going (until I realized it was about JS).
aidenn0 1 day ago 0 replies      
Or you could be like scheme and make all functions red.
karlheinz 1 day ago 0 replies      
js-csp enables Go-like concurrency in JavaScript: https://github.com/ubolonton/js-csp
parfamz 1 day ago 1 reply      
What about FRP?
Join the U.S. Digital Service
points by danso  1 day ago   224 comments top 36
inmygarage 1 day ago 3 replies      
I saw Mikey Dickerson (in the video) speak to a group of ~200 people last summer about the work that he and his team did on healthcare.gov. He was at Google for nearly 8 years and left to run the recovery team for healthcare.gov. Their team is the real deal -- they saved the site in just a few months and now over 6 million people have signed up. Read the Time Magazine story for the full account.

He does not seem like the type of guy that willingly puts up with government b.s. He gets it, and after seeing him speak I believe in him.

When their talk was finished they got a ~5 minute standing ovation and even a few stray tears.

I know it's cheesy but the government simply needs to catch up and I think they are finally ready to try.

I applaud the effort and hope to help out in some way.

joelanman 1 day ago 5 replies      
I work for the UK Government Digital Service - my views don't necessarily represent my employer.

There are lots of comments here around the idea of whether you would work for a government if you disagreed with their political views.

What I love about working on GDS projects is the potential to improve so many people's lives. People often don't have a choice when it comes to interacting with government - it's often a legal requirement, and there's only one way to do it.

So, even if sometimes I may disagree with aspects of policy, the reality is there is going to be a digital service based on it. And I can help a lot of people by being part of making it as simple as possible.

wpietri 1 day ago 5 replies      
I have met some of the 18F folks here in San Francisco, and I have gradually become convinced that there is a lot of substance to this effort. People, apparently including people at the highest levels, recognize that information technology has made enormous leaps in the private sector, and they are serious about helping government catch up.

One of the most interesting bits to me was something said by Todd Park, a former CTO of the US and still a White House advisor. I was, frankly, suspicious that Washington's culture could really accept a lot of the Agile and Lean notions that are commonplace in the high-tech world. But he pointed out, correctly, that waterfall projects are enormously risky. A big reason Lean Startup advocates place such a strong emphasis on frequent small releases is that it helps us reduce risk by finding and fixing problems early. He pointed out that if bureaucrats really want to play it safe, using agile, iterative approaches is exactly what they'll have to do.

I don't know enough about Washington to say if this really will work, but improving government efficiency through smart use of tech strikes me as exactly the kind of thing people from all parties can get behind. I'm excited to see how it turns out.

gyardley 1 day ago 8 replies      
Let's say it's January 2017 and President Bush or Rubio or Paul or Walker is being inaugurated. Does the 'U.S. Digital Service' actually keep getting funded? If it does, do the staff actually want to stay? Will any of them be willing to work on government objectives that might not be in line with their personal politics?

I'll believe this is actually what it claims it is when I see it and its staff persist through a change of political control in the White House. Until then, I'm more than a little skeptical.

tdaltonc 1 day ago 4 replies      
It would be really great to fix the IRS's digital presence. Make TurboTax obsolete. I hate TurboTax.
sloanesturz 1 day ago 3 replies      
I would love to see a program like ROTC -- the government pays for you to get a CS degree and then you work for the Digital Service to start your career.

You end up being 26 with a great education and a solid work experience under your belt. Seems like a win-win!

gm3dmo 1 day ago 0 replies      
I worked a year on the UK government digital service, they really were trying to do things differently, it was a really great experience with real teams working in a supportive atmosphere. I think the 2 services share some DNA and If you get the opportunity to work at the USDS then you should grab it with both hands.
joshdotsmith 1 day ago 3 replies      
I wish there were something like this akin to the National Guard. If I could serve a weekend a month (or maybe even more, if it were something closer to my civilian skill set), I probably would have opted for this over joining the Army.

Hell, if they did that, I'd probably serve after my time in the Guard is up.

anarchitect 1 day ago 3 replies      
Would anyone from the Government Digital Service [1] in the UK care to comment on this? GDS is a world-class operation which has many smart people using modern technology to solve real problems for UK citizens. Much (all?) of the work is open source [2].

[1] https://gds.blog.gov.uk/about/
[2] https://github.com/alphagov

tdaltonc 1 day ago 1 reply      
The US digital Service should also make and maintain a framework that state and local governments can implement. Paying a parking ticket in Los Angeles is a trip to the 90's.
kazagistar 20 hours ago 0 replies      
Here is what worries me:

Right now, the complexity of regulations and procedures is kept in check by the inability for bureaucrats to manually execute them. Full automation might not lead to a simplification of how people interact with the government; it might just result in the ability to add even more regulation, beyond normal human capabilities.

We have a world where bureaucratic complexity means few people can navigate it, but we might just be growing a world where no one can navigate it without paying a lot of money to established software companies to do it for them.

Automation is good, but we shouldn't let it hide the need to simplify.

danso 1 day ago 1 reply      
While I was initially skeptical of the White House's digital efforts, because of my assumption that tech development, especially when it comes to public information, will always be handicapped by bureaucracy and politics... the 18F group has been doing some great work, the kind that is not at all a bad standard for private startups to follow... certainly much better than the kind of things that were rolling out in 2009-2010 (such as the overhyped petitions.whitehouse.gov and the first CTO's pushing of Drupal)

The 18F Github contains a lot of interesting work, with reliance on contemporary frameworks and practices (Jekyll seems to be their choice for microsites) https://github.com/18f

I loved 18F's work with overhauling the Federal Register site (https://www.federalregister.gov/)...and even if you think the federal government's data efforts are paltry...then you haven't seen what existed before 2009...which is pretty much nothing. The wide array of data and information that has been machine-readable and public accessible via the Internet is pretty astonishing, and while I doubt that President Obama has made it a point to keep tabs on his IT, whoever has been whispering in his ear has been very effective.

Edit: a cool project I noticed on 18F's github: https://github.com/18f/mirage ...a Django/PostgreSQL project to assist procurement officers in surveying the market for vendors...another 18F repo contains the Chef recipe for its deployment.

ipsin 1 day ago 5 replies      
Two things I wonder about:

1) Is an outfit like the U.S. Digital Service considering people who don't have undergraduate degrees, but have a consistent track record or are a part of established teams?

2) Is the salary roughly comparable with what's available in the private sector? I understand that the government might pay less in return for some of the perks of government work, but how close are they coming these days?

_cudgel 1 day ago 5 replies      
Does joining the US Digital Service require a clearance, and if so does occasional marijuana consumption automatically disqualify you?
efriese 1 day ago 0 replies      
So what is this? I love the idea of contributing to something like this, but they don't make the process clear. Is this a job? Can I work part time on specific tasks? There are people willing to leave the private sector to do this work, but that's not always an option. They need to find a way to leverage people's spare cycles. They also don't mention anything about security. The skill gap in security is MUCH larger than in development. They are missing an opportunity there...
sneak 1 day ago 2 replies      
Remember, y'all: this is the government that brought you PRISM and XKEYSCORE and TAO. This is the government with the department who tortured Manning and is pressuring the UK to illegally interfere with Assange's right to asylum.

Think twice before you provide material aid to the enemy.

mattdeboard 22 hours ago 0 replies      
Too bad the VA positions are only open in DC and Boston. I'm a veteran and a software developer. I'd love to be able to help out but I'm in Austin and unwilling to move my family again. Bummer.
netman21 1 day ago 0 replies      
I am of two minds here. On the one hand, making the onerous tasks of complying with government regulations, paying taxes, and receiving benefits easier is all good. It even has the potential of reducing the overall burden the government imposes on our everyday life. On the other hand, anything that reduces the frustration of dealing with the government puts off the time when there will be fewer government bodies, regulations, and inefficiencies. I hope the smart people employed by the US Digital Service ask the question "Do we really need this?" before they set about fixing particular inefficiencies.
kalvin 1 day ago 4 replies      
(I've been working on Healthcare.gov for the past year and know some of the people at USDS; I was also YC S11 and appreciate the great parts of startup culture even more now)

PSA: I know it's not easy to tell from the outside, or from a website, but this is the real deal. Things are starting to change; by government standards, at ludicrous speed. The Healthcare.gov crisis really started a useful fire.

Todd Park, Mikey Dickerson, and the team he's building at USDS and the people he's placing into new "digital services" teams at other agencies like the VA-- if you meet them you'll quickly find out why they were superstars at their tech companies.

One important thing to understand is: yes, things really can be vastly improved IRL, not just in theory. It's not that government IT services ("IT") suck because the people responsible for it don't care. They're just in a completely different world, expectations and otherwise, and don't know how much better it could be.

E.g., they don't know there exists a hosting option that is more secure, more reliable, and less risky (and costs 90%/$90M less per year). Or that there exist software people who can build a far better user experience (#1: the worst UX is downtime, #2: product lead needs a) exist, and b) fight for the user on every decision), while still meeting all business requirements (for 80%/$20M less per year).

Some people do know, but they can't do it themselves [0], and they also don't have access to the right people to do it for them; their practical options are Lockheed Martin's "small-business" subsidiary, or ACRONYM's federal IT division.

USDS and 18F are fixing this, and much more. They need your help. I'm not sure what's public, but the progress is incredible. It's still going to take a long time. Most of all it's going to take more software engineers, designers, PMs, etc. The tech gap between Silicon Valley and DC is unbelievable until you've experienced it. Go east! (Or west [1])

It's definitely frustrating to work for/with the government (not sure how it compares to other institutions that provide many real-world services to a diverse 300+ million population), but if you're put in a position where you can actually change things, the impact is enormous. And now you can do that as a software engineer/technologist with no existing clue about government, because USDS/18F has leverage and the ability to place you where you can make that impact. [2]

The federal government deeply impacts all of our lives [3] and whether you think it should do more or less, there's no reason for what exists to be so incredibly inefficient and customer-unfriendly, especially when there's a huge pool of tech people with the skills needed to fix it, and now, a pipeline that can get them (you!) to the right place. Please apply!

(There's also small teams of engineers [4] on the inside tackling this problem, if you prefer that (less direct, but avoids gov. pay being capped at ~$150k or so and the strict background checks.) Contact me if you want more details.)


[0] They're also too busy trying to keep their heads above water in a bureaucratic system that seems to function only because many of the people in it work so hard, but I digress.

[1] AFAIK, USDS is based in DC (and I think has positions in Boston), 18F is mostly in SF (Civic Center) and DC, with a few remote people around the US.

[2] Imagine if you could direct and organize a team of people to rescue expanded healthcare coverage in the US (oops, Mikey did that already)

Imagine if you could help 22 million veterans get the care they deserve on time, instead of hundreds of days late

Imagine if you could help "fix" the IRS with auto-prepped tax returns (definitely seems like the hardest one on this list-- but it's also one that everyone wants to see happen. And yes it's a policy problem, but it's also a technical problem-- imagine policymakers having to trust the system that resulted in the original healthcare.gov. And informed engineering opinions have much more weight than you'd imagine.)

[3] Yes I know there's a whole world outside of the US, especially on the Internet :)

[4] "startups" and "small businesses" both have connotations on HN that I'm trying to avoid...

jrochkind1 1 day ago 1 reply      
But does it require a drug test?
zedpm 18 hours ago 0 replies      
Are there any specifics anywhere regarding the positions, desired skill sets, or anything other than generic descriptions? I've seen a few people comment here about positions being in D.C., but I can't find any details like that.
kzahel 1 day ago 2 replies      
I would seriously consider joining as a developer. If it didn't mean moving to DC. I couldn't determine if there were any remote opportunities or positions in the Bay Area.
canterburry 1 day ago 1 reply      
While I do not doubt the intentions are honest and government is waking up to the new reality of digital, until the new CTO has any kind of budgetary powers...this is just a dream.
karmacondon 1 day ago 1 reply      
I just find the name "U.S. Digital Service" to be god damn inspirational. It reminds me of Kennedy creating the Peace Corps. "Ask not what your country can do for you" and all that. I understand that in reality this is essentially a rebranding of the traditional government job, but it seems like it could be so much more.

I know that during the Healthcare.gov debacle, many people with software experience legitimately wanted to help. We look around at our communities, cities and states and see problems that could be fixed and software that's dying to use common sense modern best practices. But government bureaucracy is an impermeable wall, sometimes for the right reasons but often due to turf protection, intransigence and lack of funding. I would be happy to volunteer my time and experience, to work collaboratively with others and to work within strict standards and specifications in order to improve the quality of government technology for everyone. But the information about how to do so is hard to find and the process isn't very encouraging. I understand that this is a bit off topic, but when I hear "US Digital Service" I think "Peace Corps" for technology.

From wikipedia: "The stated mission of the Peace Corps includes providing technical assistance, helping people outside the United States to understand American culture, and helping Americans to understand the cultures of other countries."

WTF aren't we doing this for technology? Provide assistance, help the government to understand tech culture and help tech people understand government culture. If we train volunteers then we don't need to give high priority projects like Healthcare.gov to the contracting firm with the lowest bid and the snazziest powerpoint presentation. Government agencies would have their pick of people who'd proven their technical skills and reliability on projects at the local, state and federal levels. Citizen programmers could feel like they were making a difference for their country, even just a small one, which is something you can't put a price on. Government at all levels would reap huge benefits and improve services for a fraction of the cost, which would increase the general satisfaction of millions of non-technically inclined citizens.

The obvious problem with volunteer labor is that it's hard to hold someone accountable if things go wrong, and the speed and quality of ongoing maintenance and support can vary. But my thought is, the current system isn't exactly working out great either. We have catastrophic disasters at the national level and outdated and incomplete software at the local level. There are millions of people who could help, and working together they can do more good than harm. The greatest asset of the United States has always been the ingenuity and dedication of its human resources. If technology isn't the greatest opportunity and threat of the 21st century then I don't know what is, and it seems like a good time to marshal those resources to make the most of it.

coin 1 day ago 0 replies      
-1 for disabling zoom on mobile devices
bowlich 22 hours ago 0 replies      
As someone who's gone through the hiring process for the USDA and Dept. of Interior, how is it that the USDS is bypassing the normal usajobs hiring process?
mcguire 22 hours ago 2 replies      
I notice the two magical questions on the application page:

"Are you eligible for veteran preference?

"Are you a current or former federal employee?"

Are these civil service positions?

einrealist 1 day ago 0 replies      
That is great. I wish my country had something like that. Is there a way to join as a foreigner and become a US citizen after a time? ;)
tsaoutourpants 1 day ago 2 replies      
Will work for fedgov once fedgov stops using tech to violate the people. Until then, GFYS.
ancarda 1 day ago 0 replies      
I'm guessing it's not possible for non-US citizens to join this?
in3xes 23 hours ago 0 replies      
Really BIG, COMPLEX, DIFFICULT problems! :-o
piratebroadcast 1 day ago 1 reply      
Looks like the entire whitehouse.gov site is built in Drupal, which is about as insecure as a site can possibly be.
DanielBMarkham 1 day ago 0 replies      
I'm upvoting this because the issue deserves more attention and discussion. I am somewhat leery of implementation details, having tons of experience working for both large governmental and private organizations. Large organizations are notoriously difficult to deal with -- large government organizations are an order of magnitude (or two) worse. Them's just the facts.

I am completely supportive of any effort to cut through the BS and make the government more transparent and accessible. So easy-to-use portals, apps for accessing benefits, publish-subscribe services to find opportunities? Dang, there's a lot of cool stuff folks could do. Great idea. Let's go do it.

I am completely concerned about the idea of both automating enforcement and the collection/cross-indexing of data. I have a choice whether or not I let Facebook rape my privacy and anonymity online. I do not have a choice about what I report to the IRS. I'm extremely unhappy with the Facebook situation, and the government is a completely different animal.

Laws and regulations -- centuries of them by now -- were made to be enforced by humans, not computers. That's why we have so many of them and they overlap so much. Anything that takes us closer to automatically fining me when I drive 5 mph over the speed limit, schedules an audit if I make a math mistake on my taxes, or tells me what I can do or not do based on data processing that was impossible ten years ago? Screw that. I want no part of it. In fact, I would make a strong case that such activity is antithetical to a free and just society.

I'm also concerned over the difference between a PR move and something tangible. Show me the next idiot in the White House of a different party that's doing this, and I'll be much happier. Otherwise, not only do I have the reservations that I currently have, I've now added a new one: that all of this automation and assistance is only happening by supporters of one political party. Not good.

Don't get me wrong: I like the idea. I want to see some clear ethical guidelines and permanence across political parties before I could support this, however. Right now it looks like a bunch of political BS.

ekianjo 1 day ago 1 reply      
> XXX NAME> Engineer, Healthcare.gov

Not sure you actually want to put that on record with your own picture there. Knowing what a massive mess the whole thing is.

elnate 1 day ago 1 reply      
For curiosity's sake, I decided to take a look at healthcare.gov and put in a random New York ZIP code picked from Google Maps. When I tried to look at individual coverage I got an ACCESS DENIED. Is it restricted to US IPs? Seems like an odd thing to lock down.


[edit] I got to that link from this working site: http://healthbenefitexchange.ny.gov/ which I got to from Healthcare.gov

jinushaun 1 day ago 9 replies      
I'd imagine that this is a hard sell given the government's track record. They want to recruit us, but they still don't get it. Why would we want to work for the bad guys? Why would we want to be second-class citizens to clueless politicians who dictate directives from above, instead of working in an industry that respects us and views us as leaders?
Alan Turing's notes found being used as roof insulation at Bletchley Park
points by antimora  21 hours ago   79 comments top 10
Animats 19 hours ago 5 replies      
The restoration effort at Bletchley Park has gone over the top. I visited the place on a weekday in 2002, when almost nobody went there unless they were really into crypto history. It was run down, and there were only about 10 people visiting that day. The tour guide was more into the architecture of the mansion than the crypto, although they had a bombe model (a prop made for a 2001 movie) and had started on the Colossus rebuild. The only thing that worked back then was one Enigma machine. A non-working Lorenz machine and some other gear was in glass cases. The guide pointed out where various of the huts had been. It was just one of those obscure, slightly run down historical spots one visits in England, with the usual lake and swans.

Then they got National Lottery funding. Now they've rebuilt most of the huts in brick, re-landscaped the grounds, have elaborate displays, added the "National Museum of Computing", renamed it "Historic Bletchley Park", put in a visitor's center, a children's playground, a cafe, and, of course, a gift shop. There's "Turing Gate", "Colossus Way", "Enigma Place", two memorials, and more stuff under construction.

All this is on the Bletchley Park side. The Colossus rebuild is at the National Museum of Computing, which is on the same property but has separate staff and funding. (http://www.tnmoc.org/) They don't get along with the Bletchley Park tourist operation and don't have public funding. ("Other exhibitions are available at Bletchley Park, but operated independently of the Bletchley Park Trust.", says the Trust site.)

toddsiegel 17 hours ago 4 replies      
Back in the old days anything they could stuff in a wall was insulation. I was a volunteer firefighter years back. We had a call in the old part of town, with buildings dating back to the 1700s (George Washington slept here!). We had to open up the walls in a few spots and really cool old bottles, papers, and other stuff came out.
reality_czech 10 hours ago 1 reply      
I thought Mr. Turing worked on the foundations of computer science, not on the roof.
ekanes 12 hours ago 0 replies      
If anyone's interested in Bletchley Park, or the intersection between encryption and WWII, Cryptonomicon by Neal Stephenson is a fantastic book.
DanBC 18 hours ago 1 reply      
They've only just (the past few years) declassified some of his papers from GCHQ so it's nice that we get to see these without much of a wait.


throwaway8899 16 hours ago 1 reply      
Something to think about ...

Alan Turing wasn't a national security risk because he was gay, but he actually was a risk because he was that good.

Anybody who could break rotor ciphers circa WW2 was very valuable indeed.

dang 20 hours ago 5 replies      
This site (edit: I mean the site of the current URL) seems to have stolen the content and contains no attribution. http://www.theaustralian.com.au/news/world/turing-papers-tha... has the story, so we changed to that, but its paywall seems worse than what HN will tolerate, so we put it back. This is unsatisfactory.

Can anyone suggest a better URL?

Edit: Sorry, it seems I got this wrong and the original post was just fine: https://news.ycombinator.com/item?id=8993631. (That might mean it was other sites ripping off MKWeb and not the other way around. So I'm glad we didn't change the URL after all.)

logicallee 19 hours ago 1 reply      
There is something really interesting about this title (phrasing or how it reads) but I can't put my finger on it.


j2kun 19 hours ago 1 reply      
These notes don't particularly seem very important. On the other hand, I always thought it would be interesting to have an art piece that is like "famous mathematicians notes at the moment when the inspiration struck."
clapas 18 hours ago 1 reply      
This man was literally inventing modern computing and cryptography. Those papers must be very valuable to collectors. This reminds me that I recently learned about the famous Turing test being passed. Amazing. What a vision.
Replacing Middle Management with APIs
points by gwintrob  1 day ago   219 comments top 25
krschultz 1 day ago 7 replies      
I'm not dismissing this line of thought, but realize this probably already exists, and has existed for 100+ years.

When I graduated college I started out as a mechanical engineer, sometimes going down to the factory to work on new design issues. There was a constant source of friction between the 22 year old mechanical engineers and the 50+ year old factory workers. Here are a bunch of skilled guys that have been building the product for 30+ years, and they in essence often have to report to kids half their age right out of college that don't know a thing. There was a pathway up through the factory, but there was no pathway across to engineering without getting a degree.

If you were a smart engineer, you realized that these guys had an enormous amount of knowledge - more than you could ever hope to get out of a simple degree - and that they were worth listening to. If you were a cocky, dumb, engineer you ignored their opinion.

The solution in the past to this problem has been unions. The startups are moving a bit too fast for the unions to catch up, but in all of these recent 'What happens when everything is Uber?' posts written by engineers, the role of unions, politicians, and large masses of affected people seems to be ignored.

addicted44 1 day ago 5 replies      
Getting more stuff done with less human effort is a good thing. However, combining that with an economic system which sets your ability to feed yourself and your families based on the effort you put in is a bad thing.

Capitalism, which was a great 20th-century economic system, is not suited for the 21st century, where there really aren't any jobs left for humans to do.

The benefits of automation only accrue to the owners of capital, while everyone else gets completely left out. Since the owners of capital are an increasingly smaller number of people (not surprising since capital accrues capital exponentially while labor only accrues it linearly) our economic systems are headed for collapse.

dberlind 1 hour ago 0 replies      
Amazing conversation that really provokes us to think about which of us has it right. Those of us who are ambitious about creating opportunity, climbing the corporate ladder, etc.? Or those of us who consciously decide to opt for autonomy and convenience (potentially undermining the structure of our capitalist system!)? I've written a more detailed response and posted it to ProgrammableWeb, since any conversation about APIs is so near and dear to our hearts: http://www.programmableweb.com/news/do-apis-eliminate-middle...
fudged71 1 day ago 5 replies      
I have been thinking about this exact thing for a while now.

With Decentralized Autonomous Organizations, the resource utilization of a business is simply an algorithm, which can use any number of APIs to return profit and grow the entity.

All of these APIs we are creating for various functions may soon replace us. Many APIs already do directly dispatch humans to do certain tasks. The scary part is that when someone is providing a service to you, they may not be aware of whether their superior is a human or an algorithm.

A minimum viable autonomous corporation could be a lemonade stand that is managed entirely by an algorithm; with virtual currency it can hire a person to set up a lemonade stand at a certain datetime, and take a cut of earnings. If sales are slow, it could hire someone to design a poster, and another to post them in the area; or even an analyst to decide what the best course of action is. Over time, with many locations and employees, it would be able to learn the best locations and conditions for optimal profit. Perhaps also deciding which employees to continue working with and which ones to fire.

These "management algorithms" would likely need to be developed by someone. But what if the algorithm hired the person to write the algorithms, updates, and performance tweaks?

There could be a bootstrap algorithm that is simply an idea (these could be automatically generated based on search trends, market fluctuations, etc). These seeds would be open to the public for investment with virtual currency (bitcoin could be used to allocate the cap table as well), the program would be able to hire talent to build out the idea, and the investors would receive dividends.

I see this evolving as a kickstarter for viral autonomous businesses. And I don't see why it couldn't be viable with today's technologies and APIs.
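The "management algorithm" imagined above can be caricatured in a few lines. This is a hedged sketch only: the thresholds, wages, and action names are invented for illustration, and all the real plumbing of an autonomous organization (payments, hiring markets, learning from outcomes) is stubbed out.

```python
# Hypothetical sketch of an autonomous "lemonade stand" manager.
# All thresholds, costs, and action names are invented for illustration.

def manage_stand(hourly_sales, cash_on_hand):
    """Return the actions the algorithm would take this cycle."""
    actions = []
    if hourly_sales < 5 and cash_on_hand >= 20:
        # Slow sales: spend on marketing, as the comment suggests.
        actions.append(("hire", "poster_designer", 20))
    if hourly_sales >= 15:
        # Strong demand: hire an operator for a second location.
        actions.append(("hire", "stand_operator", 50))
    if not actions:
        actions.append(("hold", None, 0))
    return actions
```

Used as `manage_stand(3, 100)`, the sketch returns `[("hire", "poster_designer", 20)]` -- the "hire someone to design a poster" branch from the comment. A real version would need the parts the comment hand-waves: a way to pay people, a way to find them, and a feedback loop over many locations.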

Animats 1 day ago 5 replies      
That's an excellent article.

I've been using the phrase "Machines should think, people should work" to describe this for some time. Amazon/Kiva order processing, where the humans are just arms for the computers, is well known. Uber has also been mentioned. Marshall Brain's "Manna" is the SF precursor of this concept.

This has been pointed out repeatedly since Adam Smith visited the pin factory in 1776, and wrote, in the Wealth of Nations "One man draws out the wire, another straights it, a third cuts it, a fourth points it, a fifth grinds it at the top for receiving the head; to make the head requires two or three distinct operations; to put it on, is a peculiar business, to whiten the pins is another; it is even a trade by itself to put them into the paper; and the important business of making a pin is, in this manner, divided into about eighteen distinct operations."

Some people like it that way. Henry Ford, on assembly line labor management: "We shift men whenever they ask to be shifted and we should like regularly to change them -- that would be entirely feasible if only the men would have it that way. They do not like changes which they do not themselves suggest. Some of the operations are undoubtedly monotonous -- so monotonous that it seems scarcely possible that any man would care to continue long at the same job. Probably the most monotonous task in the whole factory is one in which a man picks up a gear with a steel hook, shakes it in a vat of oil, then turns it into a basket. The motion never varies. The gears come to him always in exactly the same place, he gives each one the same number of shakes, and he drops it into a basket which is always in the same place. No muscular energy is required, no intelligence is required. He does little more than wave his hands gently to and fro -- the steel rod is so light. Yet the man on that job has been doing it for eight solid years. He has saved and invested his money until now he has about forty thousand dollars -- and he stubbornly resists every attempt to force him into a better job!"

The history of auto labor relations indicates that task boredom isn't a big issue for many people. Workers have fought for higher wages, better benefits, shorter working hours, more breaks, and more dwell time between cycles. But not for job rotation.

A lot of people seem to be OK with dull, boring jobs, provided they get paid reasonably well for them and have enough time off.

karlheinz 1 day ago 2 replies      
This is a paradox rooted in capitalism itself, as Karl Marx pointed out in 1858:

"Capital itself is the moving contradiction, [in] that it presses to reduce labour time to a minimum, while it posits labour time, on the other side, as sole measure and source of wealth."

athenot 1 day ago 3 replies      
I would argue that the shift to "Cogs in a giant automated dispatching machine" already happened with living in an industrialized society, except it's software instead of bureaucracy.

In either case, the organizational "middleware" is under the control of its owners.

jobu 1 day ago 3 replies      
Obligatory link to "Humans Need Not Apply": https://www.youtube.com/watch?v=7Pq-S557XQU I highly recommend giving it the full 15 minutes if you haven't already seen it.

We as a society need to figure out how to handle a future where the majority of people are no longer compatible with the economy.

YesThatTom2 1 day ago 2 replies      
This guy should see how a Google datacenter works.

Need something done in a datacenter? Open a ticket and people respond. Common tickets become standardized by making them API calls that schedule the people. The most common API calls are eventually automated completely.

I saw this for requesting a machine be power cycled (ticket -> API call -> automated).

The people that work at the datacenter are great, but I feel sorry for them because their job becomes less and less interesting as time goes on.

When Google opened a datacenter in [redacted] there was an article about how local people were signing up for Community College classes on computer science topics. I felt terrible... the skills that datacenter would be hiring for were more akin to a warehouse than a startup.

-- Tom, Ex-Googler
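The ticket → API call → fully automated progression this comment describes can be sketched as a simple promotion rule over ticket volume. This is a toy illustration with invented names and thresholds, not a description of Google's actual tooling.

```python
# Sketch: promote frequent ticket types to API calls, then to full automation.
# The thresholds and ticket names here are invented for illustration.

from collections import Counter

API_THRESHOLD = 50        # standardize as an API call after this many tickets
AUTOMATE_THRESHOLD = 500  # automate completely after this many

def classify_workflows(tickets):
    """Map each ticket type to how it should be handled next quarter."""
    stage = {}
    for kind, n in Counter(tickets).items():
        if n >= AUTOMATE_THRESHOLD:
            stage[kind] = "automated"      # no human in the loop
        elif n >= API_THRESHOLD:
            stage[kind] = "api_dispatch"   # an API schedules a technician
        else:
            stage[kind] = "manual_ticket"  # a person triages it
    return stage

stages = classify_workflows(
    ["power_cycle"] * 600 + ["reseat_cable"] * 60 + ["odd_smell"] * 2
)
```

In this toy run, `power_cycle` lands in `"automated"` -- the same trajectory the comment reports for machine power cycling (ticket -> API call -> automated).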

dm8 1 day ago 3 replies      
> The gap in training and social groups above and below could mean that new automation technology causes sudden, large-scale unemployment.

This also scares me. I've often wondered what will happen to people in a largely automated world (which could be reality in the next 40-50 years). Will there be riots as unemployment becomes unprecedented? Will it open new industries or an entirely new period of creativity for humanity (e.g. the industrial age giving birth to the age of information)?

When we transitioned to the industrial age, I believe people had similar concerns, but somehow things worked out well. Can we repeat that?

kcole16 1 day ago 1 reply      
While this may lead to a short-term increase in unemployment if there is a massive shift from middle managers to a technology layer, it will increase efficiency and, hopefully, job satisfaction. Fewer middle managers wondering what value they really add (often very little), and fewer employees asking the same question (and thinking they are smarter/more qualified than their boss).
SixSigma 1 day ago 4 replies      
The major roles of middle management are

1) Communicate the vision

2) Allocate resources

3) Set criteria for decisions of unequivocal data

4) Make judgement calls on equivocal data

5) Facilitate the voice of the worker being heard

Computers can only do 2 & 3

mystique 1 day ago 1 reply      
I have been thinking about this for a while now too.

I feel more troubled by the fact that folks in non-AI fields don't see this as an imminent problem. They don't realize, given current advances in computing power that enable us to store more and more data cheaply, how much can be learnt from that data and how much can be automated.

There don't seem to be any easy solutions. Training/teaching folks new skills would help but will not solve this for every person affected. I suspect the collateral damage from this will be big.

a3n 1 day ago 2 replies      
Someone needs to make a tshirt:

You can be replaced with a small REST API.

TeMPOraL 22 hours ago 0 replies      
Well, that describes a dream I have - to make all services function calls. I should be able to order a pizza by just issuing a POST request to some API endpoint, preferably without caring about anything other than the size and list of ingredients. I definitely should be able to install a local print shop as a system printer, so that instead of calling them, sending them e-mails, and wiring money, I could just do File->Print, change a few options, click Print, and have it at my doorstep the next day.

Come to think of it, a lot of services are but glorified function calls - it's just the API (i.e. having to talk to people) that's messy and too broken for any automated use.
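The pizza-as-a-POST-request dream above might look something like this. The endpoint URL and payload fields are entirely hypothetical (no such service is implied by the comment), and the sketch only constructs the request rather than sending it:

```python
# Sketch: ordering a pizza as a single HTTP POST.
# The endpoint and payload schema are invented; no request is sent here.

import json
from urllib.request import Request

def build_pizza_order(size, ingredients):
    """Construct (but do not send) the POST the comment imagines."""
    payload = {"size": size, "ingredients": ingredients}
    return Request(
        "https://pizza.example.com/api/v1/orders",  # hypothetical endpoint
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_pizza_order("large", ["mozzarella", "basil"])
```

Actually placing the order would just be `urllib.request.urlopen(req)` -- assuming a pizzeria ever exposes such an endpoint, which is exactly the broken "API" the comment is complaining about.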

SmallBets 1 day ago 1 reply      
What doesn't get brought up enough in these discussions is the degree to which the advances in AI and resource distribution bring down costs and increase the accessibility of having your needs met.

For example, as Uber has advanced to UberX and now UberPool, the possibility of being car-free is much more feasible and potentially cheaper than ownership. So the labor hours required to own a car are cut out as a need for those workers. Similarly, craigslist, AirBNB etc. are impacting the renting vs. buying scenario and reducing costs in those areas.

Will this increase in accessibility happen at the same rate as labor loss? Probably not, and there will be some turbulence, but I think the net result will be a realization of excess in western culture and an adjustment to more reasonable standards of living.

Sure, there will be 'above the api' haves, but there is reason to believe the accessibility of meeting basic needs will scale up as a long term net result after short/mid term turbulence.

smithy44 1 day ago 0 replies      
Does anyone think that our whole economic meta-model might be overcome by 3D printing, permaculture, sufficiently efficient solar, etc. (combined with sufficiently efficient energy use)? Has anyone read Clifford Simak's City? There are technologies that automatically accrue the power to live in the other direction, or at least more widely and uniformly. Perhaps it's time to focus on those. We can have an economy in which the true currency is only ideas, and the money economy is only for luxuries, and in which we are all a lot more free.
georgeecollins 23 hours ago 0 replies      
This has been going on for over fifty years. McDonald's replaced diners with standardized food and processes. Walmart replaces smaller stores, often locally and privately owned. Through efficiency and standardization they decrease the amount of labor needed and in particular the skill of labor needed.

The mobile web lets you standardize mobile services (like delivery and taxis) and replace small companies owned by middle class people with leaner, larger, public ones.

cpprototypes 1 day ago 1 reply      
There is a potential dark side to this kind of automation.


There could be a lot of turmoil in the next few decades as society adapts to this new world of increased automation.

girmad 1 day ago 0 replies      
Yes, scheduling is better done by machines than people, and we as a society will be better for it. Ironically, this will eventually lead to "higher end" shops, which retain the human touch.

This process happens over and over as technology improves, it hurts in the short run and opens lots of new doors in the long run.

jordwest 1 day ago 1 reply      
I, for one, welcome our new algorithm overlords.

Option A - Work for a system run by an algorithm that hires, fires, and judges you based purely on the work you do.

Option B - Work for a system run by irrational humans that hires, fires, and judges you based on their own prejudices and ego.

valevk 1 day ago 4 replies      
Thank god, we still need somebody to write that API.
mfringel 1 day ago 0 replies      
If all you have is a resource allocation problem (e.g. scheduling, launching jobs, etc.) in a group of people smaller than the Dunbar number (i.e. ~150), then replacing middle management with an API is a reasonable choice, because you don't have an interesting management problem that requires critical thought or leadership.
rsl7 1 day ago 0 replies      
The leap to self-driving cars will be a massive and painful and complicated one. Way too much faith in tech in these here parts.
spiritplumber 1 day ago 1 reply      
Basic Idea: In small groups, management can be automated. If it can be automated well enough, a hierarchy is not necessary. Small groups need to be able to switch quickly between adhocracy and democracy while preserving fairness. The following principles are a first attempt to provide a framework for this.

Principle: An operating agreement should be written with a specific purpose, in very simple language or ideally in pseudocode or actual code. Corollary: This document should contain a way to programmatically deal with issues such as "there is no person to do job X", by random selection or by rotation or by whatever other method is agreed upon. Corollary: This document should contain a way to programmatically allocate revenue and split profits.

Principle: Job titles are optional. Hats worn are necessary. Everybody should list the jobs they are happy to do in order of expertise. Corollary: Hats needed should be specified as soon as the scope of the project is determined. Corollary: Any hats left unworn by unwillingness should be traded around with some frequency.

Principle: Everybody contributes to a common diary or blog, at least once a day, with at least one sentence, explaining what they did during that day. Corollary: Everybody should be able to identify at least one useful thing they did each day they are working. Corollary: Anybody wanting to not work for a time should let everybody know in advance.

Principle: Everybody gets one vote. Tie breaks are decided by whoever is wearing the largest hat in that area or as specified by the operating agreement. Corollary: Votes are called when there is a disagreement and resolved as close to immediately as possible. Corollary: Valid vote outputs are yes, no, don't care, don't know enough.

Principle: Every rule agreed upon after the starting document is generated should carry an explanation as to in response to what event it was made. Corollary: The reason for the starting document itself is assumed to be "To accomplish our primary goal", which should be specified. Corollary: The more a rule can be automated, the more it should be, but this machine must never override anybody.

Principle: Any procedure that gets in the way of the stated goal must be moved out of the way. Corollary: When in doubt between toss and keep, default to keep. Corollary: When in doubt between open and closed, default to open.

Principle: Nobody should create emergencies. Everybody should react to emergencies. Corollary: Emergencies should be defined strictly. Corollary: Emergency response is coordinated by anybody who is there and knows what they are doing.
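The "programmatic" corollaries above really can be a few lines of actual code. Here is a minimal, hypothetical Python sketch of two of them — filling an unworn hat by rotation, and resolving a vote with ties broken by the largest hat in the area. Every name and data structure here is invented for illustration:

```python
import random

def fill_unworn_hat(hat, members, rotation_count):
    """Assign an unworn hat by rotation: pick whoever has worn it least,
    breaking ties randomly. rotation_count maps (member, hat) -> times assigned."""
    choice = min(members, key=lambda m: (rotation_count.get((m, hat), 0),
                                         random.random()))
    rotation_count[(choice, hat)] = rotation_count.get((choice, hat), 0) + 1
    return choice

def resolve_vote(votes, hat_sizes):
    """votes: member -> 'yes' | 'no' | 'dont_care' | 'dont_know'.
    Ties are decided by the member wearing the largest hat in that area
    (whose own vote may itself be 'dont_care' -- the sketch returns it as-is)."""
    yes = sum(1 for v in votes.values() if v == 'yes')
    no = sum(1 for v in votes.values() if v == 'no')
    if yes != no:
        return 'yes' if yes > no else 'no'
    tiebreaker = max(hat_sizes, key=hat_sizes.get)
    return votes[tiebreaker]
```

The point of the corollary — that the machine must never override anybody — would live outside these functions: they propose outcomes, and any member can still call a vote.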


The Big Lie: 5.6% Unemployment
points by mudil  22 hours ago   211 comments top 44
me2i81 21 hours ago 17 replies      
The BLS puts out 6 different unemployment statistics for every period. U3 is the "official rate", but you can also look at the other ones, including U4 = U3 + "discouraged workers", U5 = U4 + "marginally attached" workers, U6 = U5 + part time workers who would like to be full time. You can make arguments about which measurement is "right", but putting out an editorial implying that there is sleight-of-hand going on is silly--all the measurements are available, and comparing a single one to itself over time as a "headline rate" is completely reasonable.
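For concreteness, here is a toy sketch of how the broader rates nest. The counts are illustrative round numbers in millions, chosen to land near the headline figures discussed in this thread — they are not actual BLS data. Each broader rate adds a group to the numerator, and the marginally-attached groups also widen the labor-force denominator (part-timers are already employed, so U6 only grows the numerator):

```python
# Illustrative counts in millions -- NOT actual BLS figures.
unemployed     = 8.7    # actively looked in the last 4 weeks (U3 numerator)
discouraged    = 0.7    # gave up looking: added for U4
marginal_other = 1.5    # other marginally attached: added for U5
part_time_econ = 6.8    # part-time for economic reasons: added for U6
labor_force    = 155.5  # employed + unemployed

u3 = unemployed / labor_force
u4 = (unemployed + discouraged) / (labor_force + discouraged)
u5 = (unemployed + discouraged + marginal_other) \
     / (labor_force + discouraged + marginal_other)
u6 = (unemployed + discouraged + marginal_other + part_time_econ) \
     / (labor_force + discouraged + marginal_other)

print(f"U3={u3:.1%} U4={u4:.1%} U5={u5:.1%} U6={u6:.1%}")
```

With these made-up inputs U3 comes out near 5.6% and U6 near 11.2%, which is the whole point: the measures are different windows onto the same underlying counts, not contradictory claims.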
med00d 21 hours ago 1 reply      
The standard rate (U3) doesn't take underemployed or those who give up looking for work into account, but the U6 rate does. The U6 rate in January of 2009 was 14.2% and it is now down to 11.2% from its peak at 17.1% at the end of 2009/early 2010. Is the economy still struggling? Sure. Is unemployment headed in the right direction? Absolutely.

Edit: One thing that people commonly like to do is compare the U6 rate to the U3 rates of the past. "Unemployment isn't 5.6%, it's really closer to 12% ..." That's foolish because it's an apples to oranges comparison. Yes 11.2% is high unemployment, but what we judge as the low/satisfactory unemployment rate would come in somewhere around 7-7.5% when looking at the U6 rate -vs- the 4-4.5% that's considered low/satisfactory using the U3 rate.

Source: http://portalseven.com/employment/unemployment_rate_u6.jsp

"The U6 unemployment rate counts not only people without work seeking full-time employment (the more familiar U-3 rate), but also counts "marginally attached workers and those working part-time for economic reasons." Note that some of these part-time workers counted as employed by U-3 could be working as little as an hour a week. And the "marginally attached workers" include those who have gotten discouraged and stopped looking, but still want to work. The age considered for this calculation is 16 years and over."

ptaipale 2 hours ago 0 replies      
It's interesting that so often everyone talks about how unemployment develops, but not how employment develops.

Unemployment statistics are not so useful, precisely for the reason given: people who've given up hope of finding employment are often excluded.

Employment statistics are much more real, because the taxman wants his own, so it covers everyone who actually works. Of course, there are imperfections here as well: people may be working part-time when they actually would like to have a full time job.

During the years 2000-2012, the employment rate in the United States went down from 74.1% to 67.1%. [0]

In Germany, the rate went up from 65.6% to 72.8%. [1]

In Sweden, it is relatively unchanged: 74.3% to 73.8%. [2]

In Greece, the numbers have always been much lower: from 55.9% down to 51.3%. [3]

E.g. Korea and France are surprisingly low, in the 60s. Israel has increased a lot during this decade, and it seems to be due to more women working.

[0] http://www.oecd.org/els/emp/howdoesyourcountrycompare-united...

[1] http://www.oecd.org/els/emp/howdoesyourcountrycompare-german...

[2] http://www.oecd.org/els/emp/howdoesyourcountrycompare-sweden...

[3] http://www.oecd.org/els/emp/howdoesyourcountrycompare-greece...

Short-term comparisons of OECD here: http://stats.oecd.org/Index.aspx?DataSetCode=STLABOUR

seizethecheese 21 hours ago 3 replies      
Economics student here.

One reason economists are interested in unemployment is that when it reaches a certain level it puts pressure on prices through upward wage pressure (fewer people looking for work means employers need to pay more). Defining unemployment narrowly, as only those actively looking for work and recently unemployed, provides a statistic that best measures the labor market's functioning.

As others have noted, there are many other statistics that are collected which can elucidate social concerns.

kwhitefoot 4 hours ago 0 replies      
On the subject of Basic Income. I think most people are missing some very important points by concentrating so hard on the costs, that is, the money provided to the recipients of the Basic Income.

- Poor people spend pretty much everything they get and they do so locally, so a very large proportion of the BI will be immediately spent in the local economy thus increasing the opportunities for people to provide goods and services.

- Having an income that will prevent you starving to death or having your house or car repossessed means that you have the opportunity to turn down a poorly paid or dangerous job. This will go some way toward rectifying the imbalance of power between employees and employers and driving up wages at the bottom end.

- BI is not charity and is generally not intended to be means tested; every citizen gets it, rich and poor alike. It is income and taxable in the normal way.

nostromo 21 hours ago 5 replies      
Take a look at this chart:


I fear that 1999 was "peak labor" -- the point at which technology started to destroy more jobs than it created.

We HN types live in a bubble in which times couldn't be better -- but in the larger economy there are fewer jobs and they're paying lower wages. It's troubling.

dkrich 1 hour ago 0 replies      
This article reads like a transcript of a drunk guy at a bar explaining his political point of view. Lots of stated assumptions about what other people don't know, backed up with zero facts to substantiate any claims made therein.

He claims repeatedly that "most people don't know [some fact about what goes into calculating the unemployment rate]." I think actually most people who read the news do, in fact, know that people who have been chronically out of work and given up looking are not counted among the unemployed. How could they not? It was hammered home over and over in the midst of the recession.

He never provides any numbers to support the claim that a large percentage of people fall into that category. He just states that it's left out of the calculation, that few people know it, and then leaves you to assume that therefore this must be a significant percentage of the population.

Stupid article.

Animats 20 hours ago 2 replies      
Realistic numbers for unemployment and other economic statistics are available at http://www.shadowstats.com. These are mostly computed from older government definitions. Over the years, the way some key numbers are computed has been changed to make them look better. Shadowstats uses the old computation methods, which are more honest. It's a paid service ($175 a year) for people and businesses who need better numbers.

Their unemployment rate, currently at 23%, includes long-term discouraged workers, which the BLS stopped counting in 1994.

Their inflation value is based on the way inflation was computed before 1980. It includes house prices. Their value is currently 8%. This compares to the official number of 2%. Shadowstats has it right - increasing real estate prices are inflation.

acd 1 hour ago 0 replies      
Before the 2008 financial crisis, 1 of 8 American children were on food stamps. In 2015, 1 of 5 children are on food stamps. In short, this is bad. I bet banker bonuses have gone in the totally opposite direction, i.e. up, since the financial crisis.

Here is the article: http://www.reuters.com/article/2015/01/28/us-usa-economy-fam...

SeanLuke 1 hour ago 0 replies      
I don't get this argument. Let's put aside that the author seems to be unaware that the BLS puts out multiple unemployment statistics. It seems to me that what really matters is not what statistic is being used but rather that the statistic being used is consistent from year to year so we can see what the trend in employment is.
scottkduncan 21 hours ago 1 reply      
"Gallup defines a good job as 30+ hours per week for an organization that provides a regular paycheck."

Perhaps this definition hasn't caught up with the increase in freelancing and self-employment. Underemployment, particularly among lower skilled workers, surely is an issue in the U.S. but Gallup's approach could be undercounting some newer types of "good" jobs.

jobu 21 hours ago 0 replies      
Some people would argue that Social Security Disability Insurance is soaking up a huge number of people that would rather be working as well: http://apps.npr.org/unfit-for-work/

"But, in most cases, going on disability means you will not work, you will not get a raise, you will not get whatever meaning people get from work. Going on disability means, assuming you rely only on those disability payments, you will be poor for the rest of your life. That's the deal. And it's a deal 14 million Americans have signed up for."

rilita 21 hours ago 1 reply      
"many Americans... don't know... Few Americans know this"

"wondering what hollowed out the middle class"

Perhaps the clueless people who don't know anything about what things mean ( the people this article is targeted at ) are the same people who are unemployed/under-employed ( notably also the group this is targeted at )

Summary of tfa: "The number news refers to as unemployment does not mean what you thought it means; it means X" Great, now how does this tell us anything we couldn't learn by looking up "unemployment rate" on wikipedia?

bayesianhorse 6 hours ago 1 reply      
Touting these unemployment numbers as a "big lie" is equally misleading. Especially when comparing this number to a past situation of which we don't know the disappointing details.

Just saying "only 44% of adults have a good job" doesn't sound like a good number either. This doesn't seem to count mothers, "housewives" (if they choose that occupation voluntarily and gladly), college students, and probably not even grad students.

declan 21 hours ago 3 replies      
This is well-known, I think, in economic circles. If you want another, arguably more accurate measure based on the government's previous (pre-1995) methodology, check out Shadowstats.com:http://www.shadowstats.com/alternate_data/unemployment-chart...

It shows the real unemployment rate, counting short-term discouraged and marginally attached workers, to be around 22-23% today. That's up from around 12-13% before the 2008 recession.

dredmorbius 9 hours ago 0 replies      
Workforce participation, median, minimum, and bottom decile wage are in many ways vastly more informative than unemployment numbers. Yes, even the expanded U6 values BLS provides.

Participation tells you how many people are working. Minimum wage tells you how well the worst-paid fare (and as Adam Smith notes in An Inquiry into the Nature and Causes of the Wealth of Nations, "A man must always live by his work" -- which he expands to mean: wages must provide not only for the laborer, but for a spouse, children, and the education of those children to provide for the next generation of workers).

Median wage tells you where the typical worker is. It's not skewed upwards by a few highly-compensated individuals as mean would be. If you and I are at a bar and Bill Gates walks in, the mean wealth has just jumped tremendously, the median not so much.
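The bar example is easy to make concrete with a toy calculation (all numbers invented): ten patrons' wealth in thousands of dollars, then one extreme outlier walks in. The mean explodes; the median barely moves.

```python
import statistics

# Ten bar patrons' wealth, in $k (invented figures).
bar = [40, 55, 60, 70, 75, 80, 90, 100, 120, 150]
print(statistics.mean(bar), statistics.median(bar))  # 84.0  77.5

bar.append(80_000_000)  # the outlier walks in ($80B, in $k)
# Mean jumps to roughly $7.3M; median only moves from 77.5 to 80.
print(statistics.mean(bar), statistics.median(bar))
```

That insensitivity to a few huge values is exactly why median wage is the better "typical worker" statistic.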

Bottom decile tells you how those at the bottom rung of the compensation ladder, though not necessarily at minimum wage, are doing. Smith has a considerable amount to say on this as well.

The biggest problem with unemployment (and other economic/econometric metrics) is that once defined they become political, and any change to more meaningful statistics tends to make the administration in power look worse.

smackfu 21 hours ago 7 replies      
Boy, a polling org putting up political opinion pieces sure seems like a terrible idea.
bibabo 10 hours ago 0 replies      
Most interesting is this: many people compare other countries' unemployment to US U3 rather than US U6/U5, while countries with government unemployment benefits (where the unemployed are those who receive money, including people not looking for work or with a small income) have numbers more in tune with U6/U5 and should be compared to those.

This makes the US economy always look nicer.

(Same for GDP with chained dollars btw.)

joelhaus 21 hours ago 0 replies      
The delta is the most relatable conclusion the average person can draw from the unemployment rate reports. Otherwise, without taking some economics classes, you're doomed to misinterpret.
alyandon 21 hours ago 0 replies      
The various unemployment rates are pretty clearly defined and U3 is arguably not the most meaningful one to use but it is considered the "official" rate.

More information here:


tezza 8 hours ago 0 replies      
Others have mentioned U6 etc.

The issue of the headline figure not capturing certain key features is mentioned ad nauseam on CNBC or any decent financial news source.

Gallup and the person who posted this are trying to make it sound like a revelation. Further, almost all types of employment have improved.

Here in the UK we have a similar obfuscation technique where the opposition says more women are out of work than ever, whereas the government says more women are in work than ever. Both are true, but behind the scenes it is because there is the largest population of work aged women ever.

ThomPete 12 hours ago 1 reply      
I must say I am shocked at how many people seem to defend the U3 definition vs. U6.

As far as I can tell, the number of actual full-time jobs is decreasing and the number of jobs that aren't providing a full-time income is increasing.

Unless you have a political agenda why would you insist that U3 is better than U6?

goorpyguy 19 hours ago 2 replies      
I've been thinking for a while that there need to be incentives put in place against companies who short-change their employees (and by extension, the public as a whole) by hiring 3 part-time workers instead of one full-time.

This goes towards the "good job" / "American Dream" aspect of the article. People shouldn't have to work 2 or 3 part-time jobs to make a living if they don't want to, just because those are the jobs available. If somebody wants full-time employment (for which they are otherwise qualified), it would be better for them to have that.

Of course, it is cheaper for the corporations to use part-time, because it keeps them flexible with scheduling/substitutes and due to added costs like benefits etc.

I have been trying to figure out whether it makes sense to offer tax incentives/penalties which would push the balance towards more full-time jobs instead of part-time. One piece I have envisioned is forcing employers to offer the benefits a full-time employee would receive prorated to part-timers, with a penalty added for splitting it up. Make them want to offer a full-time job instead.
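The prorate-plus-penalty idea sketches out in a few lines. Everything here — the dollar figures, the penalty structure, the function name — is invented purely to illustrate the mechanism, not a proposal with calibrated numbers:

```python
FULL_TIME_HOURS = 40
BENEFIT_COST_FT = 12_000  # hypothetical annual benefits for one full-timer
SPLIT_PENALTY   = 2_000   # hypothetical surcharge per extra head

def benefits_cost(hours_per_employee):
    """Employer's total annual benefits cost for one role, given the weekly
    hours of each person covering it. Benefits are prorated by hours, and
    each head beyond the first adds a flat penalty."""
    prorated = sum(BENEFIT_COST_FT * h / FULL_TIME_HOURS
                   for h in hours_per_employee)
    penalty = SPLIT_PENALTY * max(0, len(hours_per_employee) - 1)
    return prorated + penalty

print(benefits_cost([40]))          # one full-timer: 12000
print(benefits_cost([14, 13, 13]))  # same 40 hours, three heads: 16000
```

Under this toy model splitting a full-time job into three part-time jobs costs the employer more rather than less, which is the intended incentive flip.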

The part I am worried about is whether the effect is too strong and prevents somebody who actually only wants part-time work from finding employment (e.g.: a student, full-time parent, senior citizen or handicapped person). There needs to be some part-time work available, but generally a member of the workforce probably wants a full-time job.

msoad 21 hours ago 3 replies      
All these facts were true when the unemployment rate was higher. So even if the numbers are off by some offset, they have improved.
danans 21 hours ago 0 replies      
Some of the hypothetical examples of out-of-work people the author uses are pretty unlikely, and seem intended to cast an artificially wide net for his argument: i.e. engineers and health-care workers and math degree holders probably have lower than average unemployment rates, yet the author paints a picture of them mowing lawns and losing unemployment (I'm not saying it doesn't happen, just that it's unlikely).

In a way, the author is committing the same sin as those he criticizes: he also oversimplifies the employment situation. The unemployment rate and the labor force participation rate vary dramatically by region and by profession (in California alone, compare the Bay Area to the Imperial Valley).

It's much better for some highly skilled individuals, especially in booming metros, and much worse for others, who are in either low skilled, or in regions experiencing secular decline.

Also, Gallup's own underemployment numbers (cited in the article) show the unemployment rate and the underemployment rate at 7.1% and 15.9%, which are lower than where they were in Feb 2010, and almost certainly lower than in the depths of the great recession.

Nobody would argue that these stats are as good as they should be, but to say that things haven't improved at all is very disingenuous, which is why this reads more like a political anger-rousing piece than a well-reasoned op-ed. That isn't surprising considering that the first rumbles of the next presidential election cycle are here.


jpetersonmn 20 hours ago 0 replies      
I personally don't know anyone that doesn't already understand the things this article is pointing out.
debrice 15 hours ago 0 replies      
If you're getting unemployment benefits but do some undeclared work, you are employed but count as unemployed.

If you're not looking for a job, it's fair not to be counted as unemployed. Otherwise you could also count any kid of working age in the statistics, or families that have decided to have one member employed and the other taking care of the family.

The writer (CEO of Gallup) pretty much explains that the stat behind the title is not what HE thinks it is.

This statistic describes those who want to work and cannot find a job.

pkaye 19 hours ago 1 reply      
I was once trying to compare US unemployment to other countries but was not sure which "U" statistic is closest to how others measure unemployment. Does anyone have an idea, for example within the EU?
samspot 17 hours ago 0 replies      
> None of them will tell you this: If you, a family member or anyone is unemployed and has subsequently given up on finding a job -- if you are so hopelessly out of work that you've stopped looking over the past four weeks -- the Department of Labor doesn't count you as unemployed

Even the local news gives this disclaimer almost every time they mention the unemployment rate. Maybe it's not that nobody tells you, but that the author just didn't notice it.

chipuni 18 hours ago 0 replies      
I look at Gallup's web page about U.S. employment (http://www.gallup.com/poll/125639/Gallup-Daily-Workforce.asp...), and I have a question:

The sum of their "% Payroll to population", "% Underemployed", and "% Unemployed" appears to be about 70%. If they're disjoint categories, what is the other 30%?

yohann305 19 hours ago 0 replies      
It is far from a perfect metric, I agree. However, it is a good way to compare unemployment fluctuations over time, as long as we keep the same parameters over time.
franciscop 16 hours ago 0 replies      
If we counted Spanish unemployment like that, it would surely not be the same rate as it is today, 23.7%. That figure definitely includes people who have given up hope of finding a job.

On the other hand, it doesn't include those who are working without a legal contract, which I am sure would lower the percentage significantly since it's not uncommon.

neves 16 hours ago 2 replies      
I always thought that the great lie in the employment statistics was due to the high percentage of the American population that is in jail. The USA has the greatest number of prisoners among developed countries, and it skews the statistics.
clarkmoody 21 hours ago 1 reply      
Whether or not this information is always available each time BLS publishes the statistics is not the issue here.

The potentially alarming angle is whether the drop in unemployment is attributed to people finding jobs or to people leaving the workforce.

The current reporting of unemployment numbers certainly leaves room for spin, depending on how you want to package the news.

pbreit 21 hours ago 0 replies      
Right-leaning Gallup of course "forgets" to note that baby boomers retiring is making a material contribution.
brohee 21 hours ago 3 replies      
That's why the number of people 18-70 employed full time would be a much more interesting number.

No removing of prisoners, people on disability, housewives, people that worked just one paid hour this week...

The picture painted by that number would be bleak in most of the developing world; that's why it's not readily published.

marquis 18 hours ago 0 replies      
This American Life has an excellent exposé on this topic, from 2013: http://www.thisamericanlife.org/radio-archives/episode/490/t...
tjradcliffe 21 hours ago 2 replies      
I'm not sure why this is being reported as new or interesting. "They don't count people who have stopped looking for work" has been reported to death for decades (always, year after year, decade after decade, as if no one had ever pointed this out before... it's one of those strange "perpetually surprising" stories, like "engineers look to nature for inspiration".)

What is new and interesting is that in the past six years the American labour participation rate--the fraction of the working-age-adult population that is either employed or looking for work--has plummeted from 66% to 63%: http://data.bls.gov/timeseries/LNS11300000

To get a sense of what a big deal this is, you can re-run that chart to cover the full range of data from 1948-2014. After being flat at about 59% for two decades, the LPR begins to ramp up in the late 60's as Boomer women entered the workforce. It exhibits a broad flat peak from 1990 to 2008 at about the 66-67% level, and then starts its dramatic decline in sync with the financial crisis, and is now back at a level not seen since 1978.

This is a demographic shift of enormous proportions, and the answer to "Why" is not known: http://www.washingtonpost.com/blogs/wonkblog/wp/2013/09/06/t...

There is a fairly desperate attempt to spin this as "Boomers retiring" but that runs into a problem of simple arithmetic: the population of the United States was 203 million in 1970, when the ramp-up in the LPR began. It is now 320 million, a factor of 1.5 higher. So for every Boomer retiring, there should be 1.5 new workers entering the workforce. Where are they?
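The back-of-the-envelope arithmetic in that last paragraph can be checked directly. These are rough, illustrative figures in the same ballpark as the comment's, not exact Census/BLS numbers:

```python
# Rough, illustrative figures -- not exact Census/BLS data.
pop_1970 = 203e6               # US population, 1970
pop_2014 = 320e6               # US population, 2014
growth = pop_2014 / pop_1970   # ~1.58: the "1.5 new workers per retiree" factor

working_age = 250e6            # rough civilian noninstitutional population, 16+
lpr_peak, lpr_now = 0.66, 0.63
missing = (lpr_peak - lpr_now) * working_age
print(f"growth factor {growth:.2f}; "
      f"~{missing / 1e6:.1f}M fewer people in the labor force")
```

A three-point drop in participation over a working-age population of this size is on the order of 7-8 million people, which is why "Boomers retiring" alone struggles to account for it.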

yason 21 hours ago 1 reply      
This is how it is in Europe as well: unemployment rates can be redefined to include or exclude certain people depending on the desired results. I usually look at employment rates instead, which come with their own peculiarities.
PythonicAlpha 21 hours ago 0 replies      
Same as in Germany: Every government invents new possibilities to "count people out" of the official statistics. So the numbers fall, but unemployment stays the same or even rises.
stalcottsmith 19 hours ago 0 replies      
The author is not a dummy. He is CEO of a top tier polling organization. Surely he understands U3 and U6, etc. He may have partisan leanings to the extent that this can be taken as a criticism of the current administration. Current policies may not be helping but I'm not sure that any partisan solutions provide the answers needed.

The bigger picture here is that the US sacrificed some broad-based increase in prosperity over the last 20+ years while helping the developing world to climb out of true poverty. You cannot bring 1 billion Chinese (and to a lesser extent other peoples) into the "middle class" through trade while at the same time sustaining the exceptionally high standard of living of so many Americans -- at least not without some major, hopefully-temporary dislocations.

At the same time, somewhat related to this, we are witnessing the passing of a period in which America enjoyed unique competitive advantages which are unlikely to re-occur in a similar form. No amount of IT innovation can make up for the passing of peak-US-cheap-oil-production (1970s), or the loss or diminishing of the dollar's reserve status and the US's central role in global trade (ongoing), or the temporary advantage of economic competitors being crushed in WWII (50's and 60's)...

The Americans worst affected by these policies were bought off to some extent with cheap imported consumer goods (think Walmart), oodles of credit, the spread of two income households and of course benefit programs.

Now, if you were to try to address this problem sincerely from a position that jobs and employment are desirable social goods you wish to maximize, you might aim for sensible policies that would reduce the cost of living for typical Americans (allowing them to attain desirable, economically justifiable employment at globally competitive wages), increase labor mobility (ability to move for opportunity), and reduce the barriers to employment at the bottom of the employment ladder. Secondary policy objectives might include simplifying the tax system, encouraging household formation, and restructuring education so that expensive college degrees are less necessary.

A lot of this has to do with how people are living, in what kind of housing, how that housing is financed, what kind of transport they use to get to work, and what kind of shape they are in mentally, physically and perhaps even spiritually to be productive. I think major changes are needed to achieve broad-based 21st century prosperity growth in the US. Some of these changes would be deeply unpalatable and will only be considered if economic conditions worsen substantially.

Some here seem to think we are entering a post-employment society where jobs will be increasingly scarce because they are not needed and that this is a good thing. Maybe it is, maybe it isn't. This kind of thing is the hallmark of privileged bubble thinking. If you really remove the dignity of work from so many, you run the risk of making the people themselves seem redundant.

jordache 19 hours ago 0 replies      
it's funny that some right winger would link this ridiculous article here, thinking it would fool the HN collective.
zkhalique 21 hours ago 1 reply      
Deficit is dropping like a stone, but republicans aren't looking to bring back government jobs, they are just looking to use the flavor of the day to make people afraid of the Obama administration.
pbreit 21 hours ago 0 replies      
Also, the recession gave employers a good opportunity to automate workers out.
My Gravity lawsuit and how it affects every writer who sells to Hollywood
points by bmmayer1  2 days ago   125 comments top 17
dxhdr 2 days ago 6 replies      
What's wrong with these people? Why couldn't Warner Brothers reach out to Tess and say "Hey, we're going to produce a sci-fi film based on GRAVITY. We'd like to work with you and are interested in re-negotiating your contract for a new version of the story." You know, acting in good faith.

Instead they just screw her, no recognition, no money? How is that the default thought process for a company? This may be vindictive but I hope the court sticks it to Warner Bros in the most painful way possible. Something needs to be done to change the decision making process at that company.

lisa_henderson 2 days ago 2 replies      
Everyone should stop for a minute and imagine what would happen if this legal precedent spread to the tech startup world. We might then have situations where a highly experienced programmer might sign up with a startup, and sign a contract with that startup, but then have the contract declared null and void if the startup is acquired.

I can imagine a "less likely" and "more likely" way this might play out:

1.) "less likely" would be something bordering on blatant theft -- perhaps the contract specifies options or bonuses, but the parent company, after the acquisition, no longer wants to pay. That would be fairly blatant, though not impossible.

2.) "more likely" would, I think, be issues regarding copyright, which, with software, allows something of a fudge factor, since software is always changing, and no one programmer writes the whole of a large system. I'm thinking of my own startup here: during the years 2000 to 2002 I created a content management system, which then became the basis of a company that was formed in 2002. But the founding documents of that company gave me the right to specific royalty payments for the CMS, even if we eventually agreed to dissolve the company (which we did in 2008). But suppose the company was acquired, and then the acquiring company declared that the requirement to pay royalties was null and void? This is very similar to what apparently happened with Gravity.


tptacek 2 days ago 2 replies      
Here's the whole ruling:


It's more complicated (of course) than Gerritsen makes it out to be. The Hollywood Reporter does a decent job of explaining the contours:


(and, in fairness, it's more complicated than WB makes it out to be as well).

Most of the ruling concerns itself with the admissibility of various documents for the purposes of a dismissal ruling, which is complicated by the fact that the court has to stipulate all of the plaintiff's facts as true and still find no cause for a lawsuit in order to grant the motion.

From what I can tell, the meat of the ruling is:

* Gerritsen didn't have a contract with WB, but with Katja and, presumably, New Line.

* Gerritsen's argument depends in part on the notion that Katja/New Line would, absent control by WB, have fought against WB making a picture based on work they'd already licensed. Katja/New Line didn't do that.

* Gerritsen might have a legitimate grievance, but it's with Katja/New Line and its previous owners, who were paid by WB. Gerritsen is thus in effect a creditor of Katja's, and if she's owed something, it's owed from the proceeds of the sale, not from WB's own bank account. Or something like that?

It's headachey stuff.

chrisbennet 2 days ago 4 replies      
Even if they honor the contract, she will get nothing besides being credited. Her contract states that she gets a percentage of the profits - and the movie will probably never make a profit.

Return of the Jedi never made a profit "despite having earned $475 million at the box-office against a budget of $32.5 million" (Wikipedia "Hollywood Accounting")
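The mechanics of "Hollywood accounting" are easy to sketch: the studio charges the production fees through its own subsidiaries until the "net" disappears. A minimal illustration (all figures below are hypothetical, not Return of the Jedi's actual books):

```python
# Illustrative "Hollywood accounting": all figures are hypothetical,
# chosen only to show how a hit film can report no net profit.
box_office = 475_000_000            # gross receipts
budget = 32_500_000                 # production budget

# Deductions the studio charges, often to its own subsidiaries:
distribution_fee = 0.35 * box_office      # fee to the studio's own distribution arm
marketing = 200_000_000                   # prints and advertising
overhead = 0.15 * (budget + marketing)    # studio "overhead" surcharge
interest = 60_000_000                     # interest on money the studio lent itself

net_profit = box_office - (budget + distribution_fee + marketing + overhead + interest)
print(f"net profit: ${net_profit:,.0f}")  # negative: on paper, the film lost money
```

A participant paid on "net" sees nothing here, while a "gross" participant would still be paid - which is why points on the gross are the only ones taken seriously.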

Very sad.

AustinG08 2 days ago 0 replies      
If only a powerful group like the MPAA would be opposed to such blatant exploitation of a person's creative work...

edit: grammar

rosser 2 days ago 1 reply      
Hollywood is much easier to understand if you look at it as a collection of interrelated financial instruments, that also occasionally makes movies.
burnte 2 days ago 2 replies      
This is alarming, but not surprising, especially in Hollywood. However, once they amend the complaint, I have a feeling WB will have a hard time arguing both that the contract gives them the rights to the story and that the contract isn't enforceable or binding. This will be interesting.
fubarred 2 days ago 1 reply      
Large shops have huge bull's-eyes painted on them. If they act in bad faith, they will eventually lose reputation and business, spectacularly. This is why top VCs aren't going to do things that tiny VCs might try to get away with.

Also, it's worth noting that going out of your way to be fair isn't just "being nice" - it selfishly reduces the risk of problems, especially in small, highly connected industries where reputation is the first consideration.

So if we are to take her account at face value, it seems like WB misjudged how quickly word would get around, and now come the lawsuits. (An expensive lesson in hubris.)

rsync 2 days ago 1 reply      
"I will receive based upon credit, a production bonus, and a percentage of net profits."

Wait ... I don't know anything about anything and even I know you don't ever stipulate profits in a film rights contract ... right ?

Zangela 2 days ago 0 replies      
To me, that's exactly a move by Warner Brothers to avoid paying Tess what she's owed. Warner Brothers took an unspeakable route to keep the money, but it could ruin their reputation. Tess should fight for her own rights, and stand up for others in similar situations who don't have the backbone to fight.
FrankenPC 2 days ago 0 replies      
It seems to me a safer way to sell intellectual property rights would be to place a legal caveat that if the purchasing company is dissolved/acquired for any reason, the contract is null and void and the rights revert back to the original owner.
sethd 2 days ago 0 replies      
Sadly, I imagine Tess will be blacklisted from doing future business with the entertainment industry. Hollywood has a history of doing exactly that to those who sue over issues like this.
amelius 2 days ago 1 reply      
Small question. How does one become a Hollywood writer?
lifeisstillgood 2 days ago 1 reply      
Her contract stipulated she gets a "based upon" credit and a percentage of net profits.

Now I know nothing about Hollywood but I do know that if you have a percentage of the net then everyone you meet sees a great big L on your forehead. Almost no movie makes a net profit.

You get a percentage of the gross or you are not taken seriously.

blueskin_ 2 days ago 0 replies      
Good luck to that guy. A lot of lawsuits over movie plots seem to be frivolous and I've supported the film company more than once, but that one seems to be the exception to the rule.
thinkcomp 2 days ago 0 replies      
Here's the docket and selected documents (blame PACER):


Mandatum 2 days ago 2 replies      
I don't believe this is HN-relevant.
Yahoo Homepage Now Featuring Extra-Scammy Scams
points by rosenjon  2 days ago   118 comments top 18
mpeg 2 days ago 2 replies      
Just want to point out that AdChoices is NOT owned by Google, and they do NOT serve any advertising.

It's a program organised by the DAA (Digital Advertising Alliance), which is itself formed of several other advertising associations. Its goal is to be a self-regulatory program that lets users opt out of online behavioral advertising, so that you do not have to see behaviorally targeted ads if you choose not to.

Google's display network is a part of the AdChoices program, but not every ad showing that icon is a Google ad. In fact, from the yahoo.com homepage, only the banner ads seem to be served by Google, sponsored articles are being served by Yahoo! directly. If you click the AdChoices button it will actually tell you who is placing the ad.

I'm not the biggest paladin of Google, but in this case everything points to the ad being served by Yahoo!

EDIT: From what I could find, it's likely the ads are being served by the Yahoo! streamads[0] self-serve platform.

Programmatic is always tricky to properly validate, because it's expensive to fact check every single creative that comes through. It's likely that, if reported, Yahoo will ban the advertiser from their platform.

Facebook uses a whitelist approach where they'll manually validate your first few creatives and then whitelist you for automatic validation, not sure if Yahoo employs the same method.

[0] https://streamads.yahoo.com/

dredmorbius 2 days ago 9 replies      
99% of online advertisers give the rest a bad name.

Not only is this why online advertising is ultimately doomed, but it's why we hugely and desperately and badly need to find another way of paying for content.

The alternatives for the moment are gratis, patronage, "native advertising", and subscriptions. Few of these strike me as ultimately scalable.

I've been a fan of Phil Hunt, of Pirate Party UK, and his broadband tax proposal after more-or-less independently coming up with the same idea myself. Hunt's proposal was principally aimed at music. I see no reason why it cannot apply to all content published and distributed online.


My own sketched proposal:
http://www.reddit.com/r/dredmorbius/comments/1uotb3/a_modest...
http://www.reddit.com/r/dredmorbius/comments/2h0h81/specifyi...

bad_user 2 days ago 4 replies      
I was originally against ad-blockers: seeing the ads gives me an incentive to look for alternatives, and I also want to reward websites that don't do this shit.

However, it's because of scams and malware pushed by means of ads that I started using ad-blockers myself and I recommend it to all my non-technical friends.

It's also the reason why on my Android I'm now using Firefox. For Google to not provide at least a glimpse of a plan for Chrome's add-on support on Android is unacceptable: for one, I now expect my browser to have add-on support (I originally started using Chrome based on this expectation), and on mobile these websites are even more aggressive in pushing their ads. With Firefox I can use Adblock Plus on my Android, with uBlock coming soon.

r1ch 2 days ago 2 replies      
AdWords is becoming increasingly bad with these kinds of ads lately. Yahoo is lucky that they don't have the automatically redirecting version that simply takes visitors away from your site onto these fake landing pages. Most of them seem to be coming from compromised AdWords accounts.

As a publisher it's very difficult to do anything about this, since Google apparently lets AdWords advertisers insert arbitrary JavaScript into their ad code. This makes the ad creative and "destination domain" in the review center mean absolutely nothing, and since the JS won't execute from the review center, you have no idea which ads are responsible.

nugget 2 days ago 1 reply      
This is the dark side of the massive shift to programmatic ad buying and selling which nobody wants to talk about. Compliance has largely been tossed aside in search of maximum yield. Sad but definitely the trend for the future.
DanBC 2 days ago 1 reply      
> [...]I ordered one bottle of Brain Storm Elite, entered my payment and shipping info, next thing I know, I have been charged $144 for another product The Memory Plus. I NEVER gave MP any CC information or anything, when I called BrainStorm about this, they say they are affiliated but do not have access to Memory Plus accounts, I asked how Memory Plus got my credit card information because the only website I was on was for BrainStorm. Big Surprise, someone else would have to get back to me on that. Filed report with BBB.

How do these companies survive having their merchant accounts closed after all the inevitable chargebacks?

mbesto 2 days ago 1 reply      
My favorites are Taboola and Outbrain who basically do this as a turnkey service. They've both received $99m+ in funding...

As a user, I've never seen these as remotely valuable.



jarcane 2 days ago 1 reply      
This kind of thing is, frankly, yet another reason why I employ an adblocker.
blumkvist 2 days ago 5 replies      
>Do companies like Brainstorm Elite ever pay the price for wholesale fraud and the theft of brand identities?

Yes, they do. The FTC takes them for every penny they got. At least the big ones.


brokentone 2 days ago 0 replies      
I've worked primarily on the publisher side and observed such advertisements, but where to point the finger can be really difficult. There are so many affected parties here.

Should Yahoo be held at fault by the consumer? What does that even mean - a boycott? Does Yahoo have a direct relationship with these scammers, or how many connections away are they? Is this a network or programmatic placement? Do Discover magazine and CNN go after this company for trademark issues? Does this former homeless man go after them for endorsement issues? Does the company even have a US point of presence at all?

Fede_V 2 days ago 1 reply      
This is great detective work, and hopefully Yahoo steps in and fixes this.

Online advertising has become more and more vicious. On all my computers I run adblock/noscript etc., but on my phone, browsing sites like retractionwatch or theatlantic, I've been redirected to full-page ads for subscription services where the automatic-subscribe button is about 1mm from where the 'close window' button appears.

martinko 2 days ago 2 replies      
Hm, now the link in the article (www.discoverpresentsonline.com/david-brain/report.html) just redirects to the real discover magazine.
PaulHoule 2 days ago 0 replies      
It has been this way for a while, but it has gotten worse.

I used to check out Yahoo Finance once a day because it used to have real financial news but now if you look at the front page template about 60% of it is allocated to the same scam ads that run over and over and less than 40% to real content.

Needless to say I don't use Yahoo Finance regularly anymore.

hellbanTHIS 2 days ago 1 reply      
Seems like branding is the way to go on the internet, rather than throwing crap in peoples faces.

Example: Yahoo News, brought to you ad free by Nationwide Insurance

Then people think "I like Yahoo News, I like Nationwide Insurance" instead of "I hate that scummy ad, I hate Yahoo News".

owly 2 days ago 0 replies      
Anyone have experience blocking ads at the firewall level? Possibly too many URLs to block? Also, what's Yahoo? ;) DDG FTW
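One common approach (a sketch, not a recommendation of specific blocklists) is blocking at the DNS level rather than by URL: run dnsmasq on the router and null-route ad-serving domains, so every device on the network is covered. The domains below are illustrative:

```
# /etc/dnsmasq.conf fragment: resolve these domains (and all subdomains)
# to 0.0.0.0 so ad requests fail before leaving the network.
address=/doubleclick.net/0.0.0.0
address=/ads.example.com/0.0.0.0
```

Curated hosts-file blocklists can be converted into this format. The practical limit is list maintenance, not entry count, since dnsmasq matches whole domains rather than individual URLs.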
dnlmzw 2 days ago 0 replies      
That is quite disturbing.
chatman 2 days ago 0 replies      
Shame on Yahoo!
argc 2 days ago 0 replies      
This makes me want to puke. Nice one, Yahoo.... makes you look GREAT.
Research into psychedelics, shut down for decades, is yielding results
points by juanplusjuan  1 day ago   137 comments top 18
cubano 15 hours ago 9 replies      
I took a ton of acid (blotter) in the late 70's and early 80's as a teen, and then again in the late 90's (window pane and liquid eyedrops), and I just have to say, I, personally, have mixed feelings about this sort of thing being heralded as some sort of metaphysical panacea.

As I mentioned in a previous post, I became addicted to opiates in the mid 2000's and lived as a zombified-but-somehow-functional heroin addict for about 4 years.

There is no doubt, in my personal case, that acid and mushrooms (that I often hand-picked in cow pastures after rainstorms here in central Florida) gateway-ed me into harder, destructive "escapes", and for that reason, I cannot fully endorse this sort of thing.

I've had amazing trips where I literally felt as one with the group of friends I was chilling with and created deep, transcendent bonds, and I've had a select few shit ones where I felt totally alienated from every living soul (but not nature, interestingly) on earth.

They did expand my consciousness, but looking back, I see now that they introduced into my psyche a fairly deep distrust of authority and convention which, under sober scrutiny, perhaps did little to help me always successfully navigate my life.

Treating the very sick and/or terminally ill with psychedelics makes great sense to me; anything to ease those pains, but my own experience makes me want to throw at least a dart of caution into the mix when it comes to making a blanket statement about the benefits of LSD and such.

snikeris 20 hours ago 1 reply      
Includes an interesting account of Robert Jesse's (former Oracle VP, software engineer) efforts to resurrect this research:

When the history of second-wave psychedelic research is written, Bob Jesse will be remembered as one of two scientific outsiders who worked for years, mostly behind the scenes, to get it off the ground.

Pyret 20 hours ago 3 replies      
"I didn't want there to be an easy way out," she recently told me. "I wanted him to fight."

Attitude that keeps everything stagnant and backwards.

state 19 hours ago 2 replies      
"During each session, which would last the better part of a day, Mettes would lie on the couch wearing an eye mask and listening through headphones to a carefully curated playlistBrian Eno, Philip Glass, Pat Metheny, Ravi Shankar."

This strikes me as sort of funny. For someone completely unfamiliar with this stuff I would imagine encountering it to be pretty trippy on its own.

joncooper 14 hours ago 1 reply      
If you're interested in this, check out MAPS: http://www.maps.org/

They are doing a great deal to push this research forward and have been for decades.

skidoo 2 hours ago 1 reply      
I learned more about the world from DMT than in all my years of school.
benten10 19 hours ago 2 replies      
While this is undoubtedly exciting, let's not forget this paragraph from the article, which should give pause to us (especially people in technology, who have seen waves of the same 'fad' come and go):

>The first wave of research into psychedelics was doomed by an excessive exuberance about their potential. For people working with these remarkable molecules, it was difficult not to conclude that they were suddenly in possession of news with the power to change the world: a psychedelic gospel[...]It didn't take long for once respectable scientists such as Leary to grow impatient with the rigmarole of objective science. He came to see science as just another societal game, a conventional box it was time to blow up, along with all the others.

Special emphasis on the last sentence.

superobserver 18 hours ago 2 replies      
Fascinating research. I just hope the same mistakes aren't repeated and a really rigorous and robust effort is made to find what sorts of applications these substances can be used for. I'm reminded of LSD microdosing by scientists to improve innovation that had been done before, but I am unaware of to what degree it really bore any viable fruit.
shanra88 16 hours ago 1 reply      
Mention of "ego-less" state etc sound just like the teachings of hindu masters like Ramana Maharishi, Nisargadatta Maharaj...
FranOntanaya 18 hours ago 0 replies      
"The data are still being analyzed and have not yet been submitted to a journal for peer review"

Maybe the New Yorker could have waited for that to happen.

dwaltrip 14 hours ago 1 reply      
With the proper approach and care, these substances can be incredibly powerful and beneficial. I can't wait until the research eventually forces the hand of those who mistakenly believe otherwise. Psilocybin, and perhaps LSD, should hopefully be legal on some level in our lifetime.
lorddoig 14 hours ago 1 reply      
With other recent news in mind, I wonder what effect compounds like these might have on religious extremists. I wonder whether - assuming some kind of method of administration is figured out (a love bomb?) - they might stop burning people alive in cages after a decent trip.
bunkydoo 18 hours ago 7 replies      
As someone who has done their fair share of psychedelics - I would feel like a coward consuming these substances if I had a terminal illness. If I knew damn straight that I was gonna die, I wouldn't want to numb it up with a substance. I'd want every minute of pain, suffering, and emotional baggage to be taken on with a sober mind.

But that is my personal choice. On the other hand, I would say it's probably a very positive thing for people like Patrick who never consumed these substances. DMT might be the best one for someone who is dying, as it is hypothesized that the pineal gland floods an endogenous version of this chemical into your bloodstream upon death. Consuming it prior to death could potentially work as a "practice run" to help cope with the real thing, as sad as that sounds.

clapas 17 hours ago 0 replies      
TL;DR: I grow magic mushrooms myself and can assure you there is a mystic experience in eating them. I do not eat them often, but it helps me every time with a new perspective.
lsdaccounthn 5 hours ago 0 replies      
Using a throwaway as while I'll talk openly to some friends/family, I don't want my handle and lsd to show up together on Google.

This is a story, an anecdote, and while my view on LSD is positive as a result, definitely shouldn't be read as an endorsement of my actions.

I'm the perfect example of somebody who shouldn't go near psychedelics. I've suffered depression most of my life, and was recently diagnosed as bipolar, though I've only had two real manic episodes. But.. I'm also someone who does stupid things, possibly because of not just despite those things. I've abused coke, benzodiazepines, mdma, alcohol and weed. But never to an extent people around me might notice a problem.

A little while ago, I fell in love with my best friend. It was really fucking hard to deal with (after a few months of hoping it would go away I told her, talked it through and we set about trying to get rid of the awkwardness of staying friends), harder than any other life/love problems I've had. For 6 months I was depressed, had no appetite.. I was forcing myself to eat one meal a day because despite never getting hungry I knew I needed to. Friends told me the appetite was related to my feelings, but I stubbornly dismissed that as pop science - meanwhile I was kind of happy about the appetite, as I was losing inches from my waist.

Then I took LSD for the first time. It was nothing like I'd expected it to be (in my imagination it would be like entering a new world, not just altering the way your mind works in the current world), but it was lovely. A few hours into that trip, I started thinking about my friend. I realised that while I still felt the same way about her.. it didn't hurt any more. It was like this clarity just appeared over the situation that there's nothing I can do about it, so I shouldn't let it hurt me. While under the influence I realised it would probably be back to normal when I woke up the next day, but then it wasn't. I woke up feeling the same way I had while tripping, went into the office, and by lunchtime I was feeling hungry for literally the first time in half a year.

Now I'm in a slightly different place. I'm no longer abusing <something> on a daily basis (the last thing to go was daily weed smoking). I've no interest in benzos or MDMA. I still love coke, but hardly ever do it (twice in the last 18 months, both times someone else's suggestion, both times I didn't want more the next day). And psychedelics... I haven't done them much lately, but have an order on-route from dark net markets of LSD and DMT, largely motivated by wanting some more internal soul searching.

I was hugely grateful to the LSD for that effect it had on me. I've used it quite a few times since then, though it's never made such an impact since. But lately I've been starting to feel down about her again. I don't know if it can help me again.

Long story short... I'm not saying any of my actions were/are sensible or the results deserved. Nor that LSD would help everyone who was in my situation. But as a single anecdote (and hopefully interesting story), it opened my eyes to believing in the sort of trials being described by this article. My pre-existing mental conditions mean I'm probably unlikely to get approved for anything like this, even when it reaches wider access, but if I could, I'd jump at the chance to go through psychedelic therapy with expert scientists guiding me rather than doing it on my own.

(Incidentally: who knows, maybe my next tab will turn me into a schizophrenic: but in the ~15 trips I've had on acid, I'm yet to have a single "bad trip". Same goes for the few times I've tried DMT.)

eli_gottlieb 5 hours ago 0 replies      
I'm sure the drugs can have therapeutic uses. I'm also just as sure that they don't reveal any kind of metaphysical Higher Reality, and we should stop addressing them as if they did. They merely alter your brain functioning in certain ways.
anonbanker 19 hours ago 1 reply      
Replying as breadcrumbs to the militant anti-drug members of HN trolling the pro-drug threads.

Seriously. Look at his posting history.

AngrySkillzz 20 hours ago 4 replies      
Problem? Drugs are interesting because the brain is interesting. As a community that probably collectively spends a lot of time thinking, an interest in the brain makes sense.
Science's Biggest Fail
points by Yhippa  1 day ago   253 comments top 61
api 1 day ago 11 replies      
"I think science has earned its lack of credibility with the public. If you kick me in the balls for 20-years, how do you expect me to close my eyes and trust you?"

I'm really happy to see that someone else sees this. I've been harping on this for a long time -- that the reason people believe things like anti-vaxx propaganda is not because they are idiots but because scientific authorities, the media, and the medical establishment have not earned their trust.

People subscribe to kooky conspiracy theories and fringe/quack medical ideas because those advancing those points of view appear more credible than our society's institutions. Much of that appearance of credibility is by default -- it's more that our institutions have ruined their own credibility by being overconfident or in some cases actually deceptive. I personally think it extends way beyond medicine. When the president tells us we're invading Iraq because it has "weapons of mass destruction," and that turns out to be almost entirely hot air, should people be considered stupid for suddenly trusting Alex Jones more than they trust the POTUS?

Trust is hard to earn and easy to squander. In addition when you have someone trust and then stab them in the back, the emotional reaction from that is far worse than if you never had any trust to begin with. Betrayal inspires some of the deepest negative emotions.


Another phenomenon that I think is at work, especially with people like Alex Jones and wacky conspiracy theories, is a kind of "fuck you factor" that they have. Believing such things and perpetuating them is an act of (often subconscious) protest -- akin to things like calling yourself a "Satanist" in protest against fundamentalist religion. You might call these kinds of things "protest beliefs."

I have a friend who leans toward the view that we didn't land on the moon. He's a very intelligent person. I personally believe -- and I've told him this -- that this "belief" is more of a big fuck you to the backward-and-sideways direction NASA and America in general has taken post-Apollo. "Fine then... if you're going to cancel visionary projects so we can have more war and tax breaks for the financial industry, then I'm going to deny that you ever did it in the first place to spite you." He didn't really deny that, just kind of shrugged.

mcphage 1 day ago 7 replies      
Doing science is a lot like making sausages. There's a lot of conjecture, hypothesizing, testing, making observations, generating theories, testing those theories, throwing out the ones that don't hold up to repeatable studies. I don't think nutrition is unique in that matter. Where it differs from other sciences are:

1. The general public is very interested in the results, and so there's a lot of motivation for people to misunderstand or exaggerate the results in conveying them to the general public.

2. The connection between nutritional intake and results is very complicated, and ties into pretty much everything about how a person lives. The data is very noisy, and so it's hard to get good results. Other sciences, it's a lot easier to get clean data.

And yet the needle moves forward, slowly. Ideas get refined, the details get filled in, and bad ideas get tossed out. We go from "fat = bad" to "some fats are bad and some are good". That's the natural progression of science. I'm sorry Scott Adams doesn't like that this is how science works, but it's the best thing we've come up with so far.

ohazi 1 day ago 3 replies      
"How do you make people trust a system that is designed to get wrong answers more often than right answers?"

You can start by teaching kids how real science is actually done in order to get them to understand and trust the process rather than the we-only-deal-in-absolutes pop-news headlines.

themagician 1 day ago 1 reply      
The problem with the "science" around food and fitness is that there is an underlying assumption that is wrong, and that assumption is this: There are things that are inherently good for us.

Nothing belongs anywhere, and nothing exists on purpose. Nothing is "designed" for us. So many diets and fitness fads tap into this idea of what is "natural" and try to imply that there are certain things we are supposed to do. That the human body is "designed" to consume certain things in certain proportions. When you think about it for a second it becomes obvious that this is patently wrong.

Everyone is different. Everyone is going to be genetically predisposed to certain conditions as a result of consuming certain things. There are some things we all have in common (basic need for certain vitamins and minerals), and there are certain things we know are bad for us. Our natural evolution has lead to the current state of things, but our consumption habits, behavior and understanding have now surpassed our natural evolution. Even when we try to get back to our most primal it doesn't make sense, because even humans running around 50,000 years ago eating berries weren't "designed" to do that. They simply didn't die as a result of doing it before reaching sexual maturity.

We need to stop thinking about food as being good for us, as if we are going to find some magic diet that works for everyone. It's never going to happen. Our understanding of genetics and the human genome may lead us to a point in the future where we have a better understanding of how our individual genetics are affected by different foods, and where we can synthesize substances or specific diets that are optimal for each individual, but we will never reach a state where we "figure it out." Why? Because there exists no correct answer.

Alupis 1 day ago 2 replies      
The problem is, most scientists say "We have evidence today to suggest ..." and most average-joe's hear "This is the absolute truth and everything else is wrong".

Science is an evolution. What's "right" today might be proven wrong tomorrow (after more studies and research are done) -- and science is one of the only fields that admits that they got it wrong previously.

Average-joe's just want someone to tell them "what is right" and leave it at that. Unfortunately that's just not how good science works.

astrodust 1 day ago 2 replies      
This really seems like confusing science with science headlines in the newspaper.
WalterBright 1 day ago 1 reply      
> science failed my parents generation with cigarettes

My father said that cigarettes were popularly called "coffin nails" when he was a boy in the 1920s, and that doctors routinely advised their patients to quit smoking. I've never heard of science advising people to smoke.

Any scientist who cut open a smoker and saw those black, pus-filled lungs knew it wasn't good for you.

protonfish 1 day ago 1 reply      
Science is a process of understanding the world. No process or tool has yet been found to work better. When he insults science, he really means "Scientific consensus." I agree that there are huge problems with this, as consensus has little to do with science, and everything to do with mass media and government agendas. Something as politically charged as food with competing agendas of agriculture subsidies, environmental impacts, and public health and welfare is bound to be so controversial as to resist a clear message.

So Scott and I are in agreement about this: what the media and our leaders have been, are, and will continue telling us about nutrition is probably utter nonsense. And in their attempts to shove their agenda down the public's throats, they will wave around cherry-picked, questionable "science" and accuse their detractors of ignorance. That's how effective PR works.

I would see this as a failure of the press and our elected officials. Mr. Adams sees it as a failure of science. But what is his alternative? I don't know, but I do know that when somebody first tries to convince you that reason and evidence are not to be trusted, what they want to convince you of is probably not in your best interest.

ZeroGravitas 1 day ago 2 replies      
I found out recently that Scott Adams doesn't believe in evolution, which rather colours my response to this headline.


danans 1 day ago 1 reply      
I have a pet theory that half the reason that the public bought the overly simplistic theory that fats are bad is due to the collision in English between the word "fat" as a type of substance, and the common use of the word "fat" as a pejorative adjective.

This has even affected some of my family members for whom English is a second language, but who learned both meanings of the word simultaneously in the 80s in the US. Some of them can't separate the concepts, no matter how hard I try to explain to them that they are mostly unrelated.

EDIT: wording

hga 1 day ago 2 replies      
Two nits:

"I used to think vitamins had been thoroughly studied for their health trade-offs. They haven't. The reason you take one multivitamin pill a day is marketing, not science."

One reason to take the latter is to make sure you don't get a deficiency disease.

"I used to think I needed to drink a crazy-large amount of water each day, because smart people said so, but that wasn't science either."

I don't know about "crazy-large" amounts (and too much water will kill you), but as far as I know (plus a little bit of time with Google just now, which indicates this has been the "scientific consensus" since the time of Hippocrates), unless you drink a fair amount you're setting yourself up for kidney stones. Which I can attest are no fun at all.

daeken 1 day ago 0 replies      
> Science isn't about being right every time, or even most of the time. It is about being more right over time and fixing what it got wrong. So how is a common citizen supposed to know when science is done and when it is halfway to done which is the same as being wrong?

This is asking the wrong question. A cost-benefit analysis needs to be done, when weighing scientific claims to act on, not just saying "this is right" or "this is wrong". If scientists are pretty sure that me eating an apple (for instance) is a good thing, it costs me little in exchange for a decent potential benefit.

This is why a scientifically literate populace is so incredibly important; without it, you get this all-or-nothing hogwash that this article makes out to be a good thing, for whatever reason.

kin 1 day ago 2 replies      
I subscribed to Men's Health for over 5 years. I read every issue and many times headlines would contradict one another. Many of the headlines were based on studies from what felt like arbitrary Universities or Research Centers. It's pretty much the equivalent of click bait. When you read past the big letters and highlighted sections you see clearly that the studies aren't scientific at all. Their sample size is always too small or they always leave out important human factors or they leave out the middle man of cause and effect. When it comes to diet and nutrition I most definitely see a lot of bad science out there.
Synaesthesia 1 day ago 3 replies      
The major problem is the influence of big corporations in funding studies and promoting certain points of view for their own benefit. This is a huge problem in the food industry and also the drugs and medicine business.

Just look at the history of the popularity of vitamins and orange juice, or the promotion of carbohydrates and sugars over fats and proteins.

Not a problem with Science per se.

IanDrake 1 day ago 0 replies      
Someone must have just read "The big fat lie".

I could have written the same article after reading the book. A real eye opener and I suspect a similar book will be written about climate science in 30 years.

supergeek133 1 day ago 0 replies      
I think a big problem is that science, by and large, doesn't control their message. News media does.

Come to think of it, science related stories about what food is/isn't good for you etc are the original clickbait. They're a headline, when the devil is actually in the details.

A great example is why you hear that horrible list of side effects during every prescription commercial. Once, during testing, someone got cancer. Therefore, it's in theory possible cancer was caused by that medication, however unlikely.

IvyMike 1 day ago 0 replies      
I'd argue that most human diet and fitness results aren't even 'science', simply because you can't treat humans like lab rats.

The "experiments" we do are extremely limited in scope, and thus the results are limited in scope. Sadly, the authority of "science" combined with the desire to do something grand means that a lot of marginal, limited results get turned into authoritative broad headlines. ("Scientists say eggs bad^Hgood^Hbad")

threatofrain 1 day ago 0 replies      
I think the blame is on the fact that the scientific community does not have much power in the media, and is instead the puppet of the news media. But then again, all facts are puppets of the news station, made to be framed in any way that fits into any kind of narrative.

The overwhelmingly dominant interface non-university people have with science is through news media. How else are you going to get your facts? By going out to different countries by yourself? By researching into all areas of interest? That's insane.

The top four salient science media personalities are probably Bill Nye, Neil deGrasse Tyson, Richard Dawkins, and Dr. Phil. Though I wouldn't say that any of these people hold a candle to Bill O'Reilly or Rush Limbaugh in terms of influence over public attitudes and opinions on science, and that's the problem.

Science, in the eyes of the many, is just another fact in a news media report, and it can go any way the host or reporter wants it to.

Yes, science has been inconsistent. Yes, science media personalities have been belligerent or undiplomatic on camera. Yes, there has been corruption of metrics and statistics. But... it just doesn't hold a candle to Bill O'Reilly. Or Sarah Palin, for that matter.

hackuser 1 day ago 1 reply      
Despite the criticisms, the outcomes have been amazing. People are living longer and healthier lives than they did 20 years ago. Is Adams also forgoing vaccinations?

My impression is (but someone with actual knowledge please contribute here) that modern medicine and science have achieved what no other institutions or ideas in human history have achieved, significantly extending human life and curing diseases that cursed humanity since the dawn of time. It is a miracle, and it continues -- life expectancy continues to improve and more diseases are coming under control or being prevented (except measles, of course).

Scott Adams' cartoons are insightful; he does not seem to apply the same depth of thought to his writing. This piece is poorly thought through. "Science" has told him nothing, unless he reads the research himself. News about science, and the public's poor grasp of uncertainty, risk, and the significance of scientific research (i.e., is this one study? settled science? etc.) are what generate confusion.

rail2rail 1 day ago 0 replies      
An exceedingly large number of people distrust science not because they've been paying attention to it, but because someone told them to distrust it. They couldn't care less about the rigor behind the science if it differs from their own worldview. We have an enormous problem of flat out science rejection in the US for no other reason than politics and religion.
charlieflowers 1 day ago 0 replies      
It seems what is missing in nutrition is the ability to _measure the results on your own individual body_ . For example, you can take a blood test, find out you're deficient in vitamin D, supplement for a while, then take another blood test and see results.

But you cannot do that for most things to do with nutrition. You can't check your gut biome, see a problem, take a specific probiotic, then check again and confirm improvement, because we don't understand the gut biome well enough yet.

You can't do it with most supplements. And even when you can, we hardly ever do. Insurance is not going to be a big fan of it.

You also can't do it with switching from margarine to butter, or drinking more water, or whatever it is you think might help. Without some concrete measurable change that you actually measure, you're taking shots in the dark.

I hope wearable devices can make some inroads here, at least for low hanging fruit (easily measured, well-understood things).

RA_Fisher 1 day ago 3 replies      
I have a simple criterion for a summary judgement of the reliability of results:

a) Is the data made available?
b) Is it a Bayesian analysis?
c) Has a power study been offered?

As a statistician, I have a keen awareness of the ways that p-values can depart from truth. You can see Optimizely's effort to cope (https://www.optimizely.com/statistics). You can read about it in The Cult of Statistical Significance (http://www.amazon.com/The-Cult-Statistical-Significance-Econ...). This Economist video captures it solidly (http://www.economist.com/blogs/graphicdetail/2013/10/daily-c...).

The key component most of them are missing is the bias towards positive results. Most scientists have only two statistics classes. In those classes they learn a number of statistical tests, but much less about how things can go wrong. Classic "just enough to be dangerous."

In order to cope, I have a personal set of criteria to make a quick first sort of papers. It's a personal heuristic for quality. I assign some degree of belief (Bayes, FTW!) that those who offer the full data set alongside their conclusions feel confident in their own analysis. Also, if they're using Bayesian methods, that they've had more than two stats classes. Finally, if they do choose frequentist methods, a power study tells me that they understand the finite nature of real data in the context of asymptotic models and assumptions.

I'd suspect that other statisticians feel this way, because I've heard that privately --- what do you think of my criteria?
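The power point above can be illustrated with a quick simulation (all numbers hypothetical): in an underpowered design, the minority of studies that do clear p < 0.05 systematically exaggerate the true effect, which is one way honest p-values still mislead.

```python
import math
import random
import statistics

# Toy sketch of the power problem: simulate many underpowered two-group
# studies of a small true effect and inspect what the "significant" ones report.
random.seed(0)

TRUE_EFFECT = 0.2   # true effect in standard-deviation units (hypothetical)
N = 30              # per-group sample size: badly underpowered for d = 0.2

def one_study():
    a = [random.gauss(0, 1) for _ in range(N)]
    b = [random.gauss(TRUE_EFFECT, 1) for _ in range(N)]
    diff = statistics.mean(b) - statistics.mean(a)
    se = math.sqrt(statistics.variance(a) / N + statistics.variance(b) / N)
    return diff, abs(diff / se) > 1.96  # crude two-sided z-test at alpha = 0.05

results = [one_study() for _ in range(10_000)]
sig = [d for d, significant in results if significant]

print(f"power: {len(sig) / len(results):.2f}")  # low
print(f"mean effect among significant results: {statistics.mean(sig):.2f}")
# The studies that clear p < 0.05 report an effect far larger than 0.2.
```

A power study is exactly what tells you in advance that a design like this will mostly produce noise or exaggeration.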

TeMPOraL 1 day ago 0 replies      
I do see a kind of anti-induction thing going on, similar to what happens on stock markets. In the case of science, when people trust research, we have all kinds of assholes[0] flocking to it and using it to push their agenda, up until people don't trust science anymore. Come to think of it, this applies to all kinds of things people trust.

The problem seems to be, people don't care about being lied to. Politicians spew bullshit all the time, there's hardly a true fact you can find in a newspaper, and yet everyone just goes "meh". There should be back-pressure. Journalists should lose their jobs over lying to people, and that includes all that nonsense science reporting that is killing trust in life-saving research. But no one seems to care.

[0] - I honestly believe abusing people's trust in something is one of the most dickish moves you can pull.

myth_buster 1 day ago 1 reply      
Jeez, I think Scott Adams is barking up the wrong tree here. The reason science has lost credibility is vested interest groups who put money into labs so as to get a result tailor-made for them. Just as pattern recognition is one of our traits, fudging numbers and data is also up there. Add to that marketing, which uses psychology to pull our strings and play us like puppets. I would be surprised if all this didn't amount to the fragmented society we currently live in.

Also, Scott Adams is using carefully chosen topics to make his case against science, and it may not hold true in general. The difficulty of getting sugar-rich food and drinks out of schools shows that obesity is not primarily due to a lack of credible science. The same applies to tobacco. I think this whole blog post is a sensationalist piece that jumps from one extreme to another.

> I'm pro-science because the alternatives are worse. (Example: ISIS.)

ISIS is not an example of lack of science but an example of lack of empathy and humanity.

I agree that the layman is not an idiot, but there is a whole system built into society to patronize them and make them feel that way. People are constantly told to set aside their own judgement and let the authorities tell them what things mean. This starts in childhood, where parents, for fear of being exposed as lacking knowledge on a topic, use their authority to suppress curiosity. A child constantly exposed to that treatment will, when grown, outsource their judgement to the pundits in the media.

I think "The Clean Room" episode [0] of Neil deGrasse Tyson's COSMOS illustrates the fault in our system quite effectively, showing how Clair Patterson had to battle great odds to expose the ill effects of lead in gasoline, primarily because a big vested interest group was against it. It's available on Netflix and I recommend it highly.

[0] http://www.imdb.com/title/tt3410940/?ref_=ttep_ep7

JabavuAdams 1 day ago 1 reply      
Nutrition is a weird example to pick on, because it's dominated by marketing, financial interest, media mis-reporting etc.

Nutrition works like this: your boss reads an article in some rag and thinks that there might be an opportunity to target a new market. They tell you to go find some studies or something that could back up your claims. Usually, this would not survive any kind of rigorous scientific investigation -- but it sounds good enough to use for marketing.

Done. There's precious little science in nutrition. Just follow the money.

EDIT> I remember the eighties and the beginning of the whole low-fat craze. It was clearly a marketing push, not anything based on reputable science. Hey our product contains Plutonium, but no fat. So, let's emphasize the positive.

normloman 1 day ago 3 replies      
Science changes its mind in light of new facts. That's why science is great. I don't understand what the author wants from science. To make up its mind? That's not how science works!
amass 1 day ago 1 reply      
Movements or fads like Avocado-based diets or the practice of drinking one glass of alcohol a day are really hard to justify scientifically because the human body has such a multitude of variables. It's almost impossible to test the effects of such a diet ceteris paribus. So I'm not sure if "science" can really be blamed. The general public wants a simple and easy solution, so the "science headlines" are going to try to give such a solution with half-proven theories and loosely-correlated results.
Diederich 1 day ago 0 replies      
Taking a step back.

The rate at which science moves forward has been increasing for as long as there has been science, while the average human life span has grown far, far more slowly.

Say, a few hundred years ago, it took science an average of X years to move forward enough to know that it was wrong about something. X was on the order of many human lifetimes.

Today, science is moving so much more quickly. It 'finds out' it's wrong, over and over again, about a given topic, during a human's life span.

Add to that the increased number of humans who are science literate.

Add to that the total amount of 'bandwidth' between 'science' and people.

Here is yet another area where technology has left biology's ability to cope in the dust.

In this case, there are some possible solutions, mentioned elsewhere in this thread. Understanding what science actually means has never been more important.

Here are my basic assumptions: everything I believe is going to be proven wrong, multiple times, over the course of the coming decades. All we can do is go forward, making sure that what we base our decisions on, every day, is our best and most honest effort for this moment in time.

Note, I don't really think everything I believe will be proven wrong, in all likelihood. It's just a starting point.

I understand my comments are skipping over a whole lot of important, relevant and fundamentally broken things, many of which can and need to be fixed.

shanusmagnus 1 day ago 3 replies      
I think Adams is very wrong here.

I'm sympathetic to his reasoning, since I have followed nutrition closely for the last twenty years, and believe the scientific/media consensus to be much as he described, and occasionally even worse than that, where people who should know better (a PhD teaching a class on nutrition in a community college) claim things that are both factually and obviously wrong ("Low carb diets are bad because the brain can only metabolize glucose," which has been known to be false for at least 50 years and probably more). Cynicism, in some cases, is warranted.

That said, I think Adams is letting people off _way_ too easily. You will notice that when people refuse to believe a scientific position, they are always (surprise!) advocating a different position more advantageous to themselves, one that will not inconvenience them and does not reflect badly on them or on those with whom they affiliate. They deny climate science not because the poor wounded souls have had their hearts broken too many times, but because the implication is that their lifestyle decisions are having adverse effects on the world, and addressing those adverse effects will be costly; and because (this is important) the idea of faggy liberal scientists telling them they're living wrong is too much to take.

It is not an accident that the vocal opponents of things like climate change, global warming, animal welfare, pollution, etc., are the same folks who are 100% convinced that they sometimes get personal messages from angels. Their idiotic worldviews do not arise as a result of Bayesian discounting based on having received bad advice, but from intellectual laziness and an inability (or unwillingness) to look in the mirror and see a possible problem there. (And the aforementioned affiliatory thing about faggy liberals.)

bglazer 1 day ago 0 replies      
My philosophy is this: correlational nutrition studies should be trusted because they are the only, and therefore the best, source of verifiable information on staying healthy. They are, however, very noisy, so each study in isolation should influence my choices very little. Meta-analyses should be given more weight because they smooth out the noise.

Scott Adams bemoans the fact that the wildly varying conclusions of diet science cause people's pattern recognition to conclude that this is not trustworthy information.

However, the problem is not pattern recognition. The problem is the weight that people give to signals. When each signal is given a weight of 1 or 0, the pattern recognition will never converge on the underlying trend. The media and the public must understand that some studies should be given more weight than others.

Thus, scientists have a duty to communicate which studies should be most influential. They CAN'T trust the media to do this.
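The weighting idea above can be sketched as a toy fixed-effect meta-analysis using inverse-variance weights (the study numbers are invented for illustration):

```python
# Toy fixed-effect meta-analysis: weight each study by its precision
# (inverse variance) instead of treating every headline as weight 1 or 0.
# All study numbers here are invented for illustration.
studies = [
    # (effect estimate, standard error)
    ( 0.30, 0.25),   # small, noisy study
    (-0.10, 0.30),   # another small study, opposite sign
    ( 0.12, 0.05),   # large, precise study
]

weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * effect for (effect, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"pooled effect: {pooled:.3f} +/- {pooled_se:.3f}")
# The precise study dominates; the small contradictory ones barely move it.
```

This is why a careful meta-analysis can converge on the underlying trend even when individual headlines flip back and forth.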

dkrich 1 day ago 0 replies      
I think there's a very real truth to what he's saying. I find myself wholly skeptical whenever I hear about the results of any scientific discovery such as a breakthrough in cancer research or diet/health issues.

Partly to blame are media outlets rushing to publish "definitive" results to get a headline before the results are actually known to be conclusive. Also to blame, of course, are scientists themselves, who conduct faulty research and publish the results in a prematurely conclusive manner.

That said, I think there's a distinction to be made that Adams did not make at the end of the post. He says that people are skeptical about whether climate change is real due to the aforementioned credibility issues. I don't think this is really true. Most rational people (I realize that there are many who are not) do not dispute the existence of climate change. Any scan of climate data over the past century, or photos of receding glaciers, can quickly and conclusively show that climate change exists. What most people are skeptical of (and something that I do not believe scientists have yet convincingly proven) is that humans are directly responsible for it. The earth was warming prior to the industrial revolution, so how much humans are to blame is very much a debatable point.

To qualify that, I have no vested interest in either side of that argument, however as a skeptical person, I do not believe that there is convincing evidence to support the claim. My father, who was a geologist and climatologist had the same belief. I think for me the issue is that people, scientists and reporters alike, have proven themselves to just not be that smart. They lack credibility because they continue to make claims that are shown to be false. How many reports did we hear that oil was going to spike to $300/barrel and that the economy was on the brink of collapse? How'd that turn out?

j0e1 1 day ago 0 replies      
I agree! In fact, except in the case of rock-solid proven theories, nascent theories that make it to the headlines are treated as truth, and the ill-informed seem to exhibit fanatical faith in them, forgetting that they are still theories. Being skeptical of these could earn you the title of fool nowadays!
jakobegger 1 day ago 0 replies      
The biggest problem with Science is that it usually doesn't answer the questions we actually want answered. The problem happens when we try to extrapolate from scientific results. When we make claims about human nutrition based on experiments with rats. That's when we run into trouble.

Usually the scientific answer should be "we don't know".

(That's also the reason why people run to some quack doctor and start taking homeopathic remedies. You'll get answers there. Unfortunately, people trust those giving answers more than those saying "I don't know".)

argos 1 day ago 0 replies      
I have a problem with science being "done".

Science is never done, or at least it shouldn't be. We create models and update them or change them as we get more evidence and the technological advancements allow us to perform better tests.

Gravity was not done with Newton. Sure, it's a great model to explain how objects attract each other, but Einstein came along and showed that the model was not correct and made incorrect assumptions (constant time, for example).

I think that is the great misunderstanding. We expect science to give us final answers. We expect it to study something and then be "done". But that is not the case.

knice 1 day ago 0 replies      
Asking science to refute marketing claims is a huge distraction for science.
pvaldes 1 day ago 0 replies      
People don't need to trust science, they need to understand science. If you never took the time to educate yourself and develop a real critical sense, don't blame the other guys.
pm 1 day ago 0 replies      
How much of a generation getting fat is attributable to institutionalised science, versus people abdicating responsibility for what and how much they eat and exercise?
Yaa101 1 day ago 0 replies      
There is nothing wrong with science per se; the problem is the people working in science, a lot of whom are corrupt. They are not usually corrupt by nature, but mostly due to pressure from the ones handing science its money, and it does not matter whether the donors are government or private parties. Most donors give money to serve their own goal, not to get neutral answers about how things work. In other words, we the people should look closely into the mirror to find the answer to why science is so corrupt.
S_A_P 1 day ago 0 replies      
I like the point made. I also think he touched upon another point that needs fixing with the comment of "winged monkeys in the media".

Mainstream news is broken. Sure, it's possible to dig up a good, factually accurate, balanced story in some of the less-visited corners of the internet. But news mainly serves to validate someone's politics rather than do real reporting. This needs to change so that the media can earn back some credibility as well.

temuze 1 day ago 0 replies      
People have always been pretty irrational about health. From the Greek practice of bloodletting and leeches, to Egyptian amulets, to homeopathy, we've associated taking X + some time => healthy.

But that's the thing - our health can improve after receiving bad or neutral treatment because of time, which makes it really easy to confuse correlation with causation. When it comes to health, we easily fall prey to placebo effects.

kak9 1 day ago 0 replies      
The original research supporting "carbs are good, fats are bad" made such trivial statistical errors it's kind of crazy.
jeffdavis 1 day ago 0 replies      
"Scientific consensus" is an oxymoron, because consensus is not a scientific process.

Unless by "consensus" you mean that a lot of scientists have successfully reproduced an experiment. But nobody ever means that.

That being said, scientists are often informed and educated, so their opinions should be valued. But please take off the lab coat and call it an opinion.

dthal 1 day ago 0 replies      
Science has more and less reliable disciplines. In particular, in any field in which meaningful experiments are difficult or impossible to perform, causality will be hard to pin down. It just so happens that many of the topics the public is most interested in - health and nutrition, psychology, economics and, yes, climate - fall into that category.
faske 21 hours ago 0 replies      
It is refreshing to hear that I am not the only one who sees this happening to the general public. This is a good video that explains how to determine which "science" is trustworthy or not.


wutbrodo 1 day ago 0 replies      
> But can we stop being surprised when people dont believe science? Humans cant turn off pattern recognition.

Jesus, I hate this line of thinking. It essentially boils down to "we're all just monkeys". Pattern recognition is the same thing behind a lot of in-practice racism[1], and yet no one would claim that one's actions should reflect racism just because our basest instincts do. We have a frontal lobe for a reason.

Similarly, I can be aware that science may be wrong and still understand that, at this point in time, believing the alternatives has a higher chance of being wrong. Modulo widespread issues in study methodology etc (which may be a concern in some fields, but not really what this article is talking about), the principles underlying the scientific method are actually pretty easy to understand without any domain knowledge (of either the specific field or of study design/stats/etc).

[1] It's a straightforward example of "pattern recognition" to reduce a person to one of their most striking and visible physical characteristics, like skin color, and then act towards them as a member of that group instead of a full, multi-faceted person.

Xcelerate 1 day ago 0 replies      
For me, science is synonymous with "predictability". By studying systems, you can create models and theories that allow you to create a prediction about the future which can then be verified. The key to science is the verification step (which is why supernatural concepts are not science; they've never been confirmed or denied).

Based upon my definition, you may be wondering about something like the theory of evolution, since evolution concerns the past (and progresses too slowly to observe in the present). But the science in that field is still a future prediction -- the nature of the prediction is that it just so happens to be about the past. In other words, as new evidence emerges that explains what happened in the past, we can compare this evidence to our prediction for what kind of evidence we would find.

So, if science is prediction, then good science is "better" prediction. Then what's the best science? Physics. Physics can make theoretical predictions that match reality up to twelve decimal places. That's insane. The next best science is chemistry, followed by biology, followed by nutrition and health (as the article discusses), followed by psychology.

Does this mean that psychologists are not as intelligent as physicists? No, it just means understanding quantum mechanics is far easier than understanding how people work -- a view which I would hazard to guess most physicists would confirm. Despite the fact that QM requires advanced math that few can understand, the totality of information necessary to make predictions at a subatomic level is very small in comparison to the amount of information necessary to model a human (biologically or mentally). A few postulates, a few mathematical definitions, some numerical methods, and BOOM -- twelve decimal places of accuracy for the gyromagnetic ratio of an isolated electron. One textbook would be completely sufficient to describe this prediction and the math behind it (although it may be kind of hard to read). On the other hand, there's no way a highly predictive model of a human could fit in one book.

Unlike the elegant laws of physics, a human body is the result of millions of years of all kinds of adaptive chaos and evolution. There's no pretty equation that describes it.

My point with all this is that the public hears "science" and they lump all science together. Instead, they should be educated on which fields of science are the most predictive, and which are still in major development. In this sense, they would have a better idea of what to trust when making decisions for themselves.

enupten 1 day ago 0 replies      
I think Science to non-scientists becomes a matter of Religion, and for us who actually do it, it becomes a matter of Philosophy/Spirituality - if you forgive the medieval imagery of that sentence.
thomasjudge 1 day ago 0 replies      
Part of the problem as I see it is this (and I only have a short amount of time right now but I have a lot of thoughts on this):

Our paradigm for "science" is typically physics, in particular say newtonian mechanics. Force, mass, acceleration. 100% predictable.

However, while human diet, nutrition, metabolism are ultimately based (in some way) in physics, chemistry, biology, microbiology, diet and nutrition in the individual case are incredibly complex and clearly not fully understood phenomena.

So I think it is not exactly a failure of "science" or even the "media" at work here, but a failure to understand (which neither science nor the media has been particularly helpful on) the scope and LIMITATIONS of our knowledge and theories.

ebbv 1 day ago 1 reply      
This is a pretty terrible post. Science didn't kick people in the balls for 20 years; the media did. The media pushed bogus results of bogus studies on people, and overstated the case that "X is because of Y" for real studies that do not make such strong claims themselves.

The media is responsible for people's distrust of science, because not only do they push bogus claims, but they also push FUD as well. Particularly oil and coal industry anti-climate change FUD for the past 20 years. That is precisely why people doubt the scientific fact of global warming.

And yes I would prefer to live in a world where people defer to experts. It's great that people have "pattern recognition" but trusting your own half-assed judgment on an issue where there are experts who have studied something for 30 years is not "pattern recognition", it's hubris.

VikingCoder 1 day ago 0 replies      
I really like Scott Adams, but I think he failed utterly on this post.

The problem is media. News programming is constantly looking for something to breathlessly report, and is delighted to find one or more so-called experts who will loudly extol / lambaste the latest findings.

If you look at the leading causes of death, understand your own history and risks, and follow the advice of credible doctors, you'll be doing great. Most of us don't do that, and then we scramble for fix-alls.

It's these silver-bullet, "all-x-is-bad, all-y-is-good, z-causes-cancer, w-cures-cancer," reports that are jerking your chain.

cttet 1 day ago 0 replies      
Read the papers. The articles on news about "science" are only news.
tsotha 1 day ago 0 replies      
Heh heh. Economists would have the crown for being wrong the most often if not for nutritionists.
tjradcliffe 1 day ago 0 replies      
There are two things to say about this:

1) Diet and fitness are hard problems because humans evolved as opportunistic hunter-gatherer-scavengers, so we are moderately well adapted to almost any imaginable lifestyle. When the optimum is broad and shallow (which it necessarily is, especially for diet, unless you are an evolution denialist) it is easy to wander around in the noise.

This is made worse by snake-oil salespeople who are dedicated to the idea that the optimum is narrow and deep, and they can sell you its precise location. They take any minor wobble that scientists identify--which based on evolution is almost certainly noise--and declare it the One True Location of Perfect Health.

2) Science fails to get traction with the public because it lacks narrative, which is an idea I explore in a lot more depth here: http://www.amazon.com/Darwins-Theorem-TJ-Radcliffe-ebook/dp/...

enupten 1 day ago 0 replies      
No wonder homeopathy et al. are making a comeback.
snowwrestler 1 day ago 0 replies      
People trust science implicitly 1000 ways every day: when they get in their car, when they check Facebook, when they step into an airplane, when they take some Advil, when they eat a Snickers bar, when they take an antibiotic, when they take Viagra, when they make a phone call, etc.

So the question is not "why don't people trust science," the question is "why do people very selectively mistrust small segments of science?"

A plausible answer is because there are people and organizations who are encouraging them to mistrust those small segments of science--by purposefully feeding bad information into the marketplace of ideas.

I think Adams is making a fundamental error of attribution, blaming good actors (real scientists) for the actions of bad actors. He's basically arguing that unless scientists can stop all bad information from anyone, they can be blamed for the bad information. Doesn't seem fair or sustainable.

Mz 1 day ago 0 replies      
I think part of the problem is that the human race is like that story about the six blind men and the elephant, where one guy argued it was like a spear because he had only touched a tusk and another guy argued it was like a tree trunk because he had touched a leg and so on. There are billions of people here, each with their own unique experiences, their own little slice of the truth. It is only reasonable that some think their piece is TRUE and attempts to rebut their piece of the truth must be crazy or something.

When I look back on historical concepts of things, often, they are decent mental models, given the limited information available. For example, Native Americans of the Pacific North West thought that the world was a bit of land floating on water in a bowl. This area is geologically active and when there is an earthquake, water runs up onto the land, not unlike what would happen if you floated something in water and then pushed down on one end with a finger, tipping it. So it's a fairly good mental model for the limited information available to them. It's not accurate given what we know today, but I think it's disrespectful and a disservice to act like it is simply "dumb."

I spend a lot of time trying to figure out how to take seriously the piece of the puzzle different people have and how to help people who see things very differently communicate effectively. It's shockingly hard. Most people want to insist, NOPE, you are an idiot and fucktard because I KNOW the elephant looks just like a spear. I have experienced that first hand, by god!

The part that most frustrates me is that the people who are the worst about this are often not the conspiracy theory "nuts" but the closed-minded folks doing it in the name of "science."

dmfdmf 1 day ago 0 replies      
> So how is a common citizen supposed to know when science is done and when it is halfway to done which is the same as being wrong?

Forget about the common citizen, the scientists themselves don't know when science is "done". This is the problem of induction, and most scientists are completely ignorant of the issue because they dismiss epistemology and, more generally, philosophy as nonsense. So they implicitly or explicitly substitute "consensus" for truth, which is a horrible mistake.

BTW, I don't agree that getting halfway is the same as being "wrong", science is a process after all and unlike Zeno's paradox the state of scientific knowledge is not always half-way to its target, Karl Popper's claims to the contrary notwithstanding.

> I'm on the side that says climate change, for example, is pretty much what science says it is because the scientific consensus is high.

The problem here is complicated by government funding of science and a positive feedback loop in which only "consensus"-consistent research gets funding. Sure there is window-dressing "opposition research" but its purpose is not the truth but to justify the consensus. Science has become politicized by the use of government money and that's why things such as global warming have become political hot buttons. There is a lot of money and power at stake and people want to fly the flag of "science" to achieve their political goals. When politics and science mix, bad things result. cf. Soviet Union.

> And we all know that studies funded by private industry are suspect.

I think this is true today, but it is wrong. The tacit, unspoken assumption behind these suspicions is that government funding is NOT suspect and free from all biases. I think that is logically and historically false. Certainly revealing one's funding is important, because funding can be a source of bias, but in the end science should stand on its own merit regardless of the source.

I think Scott Adams has named something that has been going on for decades now which is that the American people rightfully no longer trust science and scientists. Part of this mistrust is a product of the rising anti-technology luddites, religion and the opposition to reason in the culture as he points out. But like Adams I think that the distrust of science has also been earned and I believe it is because of the politicization of science via government funding. The leaked emails in Climategate revealed that maybe the scientists were not being so objective after all. The clearest evidence of dishonesty that I read was that in public the pro global-warming scientists would call for the opposition to publish their results and arguments in peer reviewed journals. In the background they were actively working to block the publication of such research or boycott any journals that did so.

It is now considered "scientific" to dismiss your opponents with ad hom "climate denier" labels, to politick in secret to limit dissent and smear the opposition as anti-science morons (the implication that Adams was addressing), and to use political marketing tricks like changing the name from "global warming" to "climate change" to make the opposition look like fools. These are the methods of a Karl Rove or a James Carville, not of science or scientists. This is not science.

beloch 1 day ago 0 replies      
First, researchers can't get funding unless everything they do is world-shattering and of clearly superior quality. This is why you'll never see a published paper with a title like, "We did X. It failed. This is probably not that important, but might be interesting at some point in the distant future." More importantly, you'll never see an article in a top journal titled, "These other guys did X, and now we have verified they're not full of beans by doing X too". How science is funded is directly responsible for both the titles we see in academic journals and the research that is actually done.

Second, even if a journal article isn't over the top in its claims, the press releases about it will be. Journalists want to sell copy and generate page views. If what they're supposed to be reporting bores them, they'll get it wrong in an entertaining fashion more often than not. I'm sure Scott Adams and many here have heard that "Butter is healthier than margarine". A scientific study examined several margarines on the market and found that many brands contained a high percentage of saturated or trans fat. Based on the theory that saturated and trans fats are worse for your health than unsaturated fats (which is still supported by science) they concluded that some brands of margarine may be no more healthy than butter, which is mostly saturated fat (with some trans). A newspaper reported that this study proved that butter was better for you than margarine, which was factually wrong based on that study, and then printed a retraction two weeks later. However, the damage was done. Dairy producers seized upon this single article and launched an ad campaign extolling the evils of their competitor. The "butter is better" myth persists to this day. The truth is still the same. Some margarines may be no better than butter, but others are.

I'd like to point out one of my own great disappointments about health and fitness: Medical doctors don't seem to know any more about it than the next guy. You'd think medical doctors, through their study of the human body and the many maladies that can affect it, would be the perfect people to tell you what to eat, what not to eat, how to exercise, etc.. They're not. Their profession is entirely focused on spotting problems that require medical intervention and then giving that intervention. When the body is healthy, the best thing they can do is leave the patient alone. This is because every medical treatment carries with it risks and side-effects. As such, an optimally running human body is the least interesting thing in the world to a medical doctor. Furthermore, they're taught to maintain patient confidence by seeming to know more than they do, and they take courses that teach them specifically how to do this. Yes, faking that you know more than you actually do is a course medical doctors take (Source: sibling who is a medical doctor). Asking the typical medical doctor about diet and fitness is like asking a used car salesman about fuel and oil mixtures for a F1 engine.

cookiecaper 1 day ago 0 replies      
The best demonstration of the fact that food science has failed, and the fact that commercial influence is prevalent in modern science and academics, is that obesity is still a significant global concern.

Almost everything you can buy at a grocery store in the United States has been altered to make it more addictive. If you go to a store and buy the foods labeled as healthy, you'll soon learn that they're not really that different after all. If you go to a store determined to buy only food that is actually healthy, you're limited almost exclusively to fruits and vegetables, and even that is questionable since we don't know what type of chemical treatments or preservatives have been applied.

"Make everyone eat only celery and go to the gym for 2 hours per day" is just not going to work. It may be a nice fantasy but it is never going to solve the obesity epidemic on a significant scale. Addictive ingredients have the side effect of causing obesity and as long as that's true, people are going to be cajoled into eating incorrectly at every turn, because everyone involved wants to sell you more food.

The obesity epidemic is a result of scientific advancements that have allowed us to acquire an unlimited amount of the most biologically desirable food ingredients on a near-global scale. We can't go back in time and uninvent this stuff, and we can't actually expect people to switch to a diet of 25 celery sticks per day, so we need a scientific solution that solves the problem. And there isn't one, because obesity doesn't really represent a commercial threat to anyone in particular -- if anything, it creates new commercial opportunity for another super-powerful industry in the U.S.: medicine.

Maybe the airline industry will fund a solution once people get too heavy to be effectively carried by airplane.

cookiecaper 1 day ago 0 replies      
I really appreciate Scott Adams' temperate perspectives on controversial issues. He's usually able to legitimately grasp the underpinnings of issues that most people in tech circles don't even want to countenance.

I have to agree that the overzealous apostles of Scientific Consensus that run around condemning people all the time often look very silly not-too-many-years-later. In their self-righteousness, they fail to see that they're just the same as the conservative grannies and aunts and uncles they look down upon with disdain: repeating a cherry-picked data point that they don't possibly have the professional background or academic context to actually understand, but which they fervently believe and adopt because it confirms their worldview and the people around them expect them to.

People are all fundamentally the same, and most people of any affiliation will propagate anything if they accept the authority of its source. This applies to conservative and liberal persons equally -- very few of them are even capable of deciding if an analysis of a complex topic is valid or not, let alone putting forth the effort to actually vet it. The world runs on trust.

Never trust a statistic you haven't faked yourself. Visit Retraction Watch for a frequently-updated sampling of things "science got wrong".

hyperion2010 1 day ago 0 replies      
One reason for this is that the field of nutrition science is a complete circle jerk. They, like many other fields, only peer review internally, they have their pet hypotheses that they repeatedly validate with crappy statistics (because that's the standard for the field!) and postdocs almost never come from other fields. I read a piece detailing this a while back; can't seem to find the link now.
Mkcast - GIF terminal screencasts with key presses overlaid
points by gnocchi  2 days ago   40 comments top 9
danso 2 days ago 4 replies      
This is great...now I'll have to find a spare laptop to load Ubuntu on. I'm on OS X and have been attempting to write Unix tutorials...I don't like doing videos, so I use Quicktime to do a window recording, save it, and then send the file to the "gifify" utility (https://github.com/vvo/gifify), which besides wrapping around ffmpeg to do the conversion, uses the giflossy fork of gifsicle to optimize the image:

       gifify screen.mov -o screen.gif --resize 800:-1
example GIF:


It's better than embedding video clips for such short snippets, but being able to show keystrokes would be even better (Quicktime does record mouse button presses)

acqq 2 days ago 0 replies      
It is mostly a bash script wrapper around Byzanz, the latter existing since at least 2008:


Still, interesting.

przemoc 2 days ago 0 replies      
Looks nice. I was going to perform a shameless plug by mentioning my simple Linux OSD nanoproject (for those wanting to use some other recording means, but still see the keystrokes on the screen):


but I just remembered that I still haven't fixed a bug I noticed on my computer at work, where I had Gnome back then. Nowadays I have awesome there too (just like on my laptop), so I possibly won't reproduce it, but the notes I left should be enough to do the fix one day. ;)

falcolas 2 days ago 4 replies      
Why gif, and not webm or mp4? Using gifs makes it only a bit more portable, but makes it consume significantly more bandwidth.
jsheard 2 days ago 1 reply      
The Sublime Text dev has an interesting take on this idea, using a single packed PNG and canvas rendering to avoid GIF's 256-colour limitation while remaining lossless:


ghuntley 2 days ago 0 replies      
Something similar is available on Windows - http://carnackeys.com/ with source available at https://github.com/Code52/carnac
shanselman 2 days ago 1 reply      
For Windows, use the Open Source http://www.carnackeys.com and either CamStudio or Camtasia. Works great.
300 2 days ago 0 replies      
This is awesome! I'll definitely try to use it and make some Vim tutorials.
glhaynes 2 days ago 0 replies      
Smart and useful! And if there are issues with the GIF format, surely others could easily be added. Nice work.
The curious case of disappearing Polish S
points by radley  1 day ago   115 comments top 23
blahedo 1 day ago 8 replies      
I do not have sufficient profanity in my verbal arsenal for websites that override basic, fundamental browser behaviour because they think they know better than I do what I actually meant to do. I have noticed Medium being a regular perpetrator of this sort of broken behaviour before.

That one of the developers at Medium could run into such a glowing example of one of the many reasons it's nasty to override browser behaviour, and take the lesson from it that oh hey, we just need to make our override logic more complicated... the blindness and idiocy and sheer bloodymindedness of this response I just can't even.


kosma 1 day ago 2 replies      
I initially planned to put a TL;DR here, but it's the best bug analysis I've seen in months - a pleasure to read! - so I'm leaving it for others to savour.

The very same bug used to be present in early Windows mobile GPU drivers - with global hotkeys making certain Polish letters impossible to enter (one with Intel GMA 950, another with ATI Catalyst). Being a Polish geek, I used to earn lots of free dinners from frustrated friends who were forced to copy-paste those letters on their brand new laptops. Funny how the same bug recurs in different types of software due to an obscure locale-dependent edge case - and it's much less known than, for example, the Turkish dotted/dotless I.

seppo0010 1 day ago 3 replies      
I think the article is too verbose; once you know the context, the bug is trivial: you intercepted Meta+S and Ctrl+S, and along the way you broke Alt+S.

I find the save dialog useless for web browsers as well, but I think preventing its use is a bad idea in general. Overriding the browser's shortcuts is uncomfortable. For example, WordPress likes to capture Cmd+<number> to change the font style, but that's how I usually change the active tab. It also disables Ctrl+Tab, the other way I use to escape while the text area is active.

People use their browsers and have their workflow in them. Breaking them needs to have a really good excuse. http://xkcd.com/1172/
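The modifier-key mix-up being discussed can be sketched in a few lines. This is a hypothetical handler, not Medium's actual code, and it uses the modern `event.key` field for brevity; `shouldInterceptSave` is an illustrative name:

```javascript
// Hypothetical sketch of the bug discussed above -- not Medium's real code.
// On Windows, AltGr is reported to the page as Ctrl+Alt, so a handler that
// only checks Ctrl/Meta before calling preventDefault() also swallows
// AltGr+S, the keystroke the Polish programmer's layout uses for "ś".
function shouldInterceptSave(e) {
  // Buggy check: (e.ctrlKey || e.metaKey) && e.key === 's'
  // Fix: additionally let AltGr (reported as Ctrl+Alt) pass through.
  return (e.ctrlKey || e.metaKey) && !e.altKey && e.key === 's';
}

// Plain Ctrl+S: intercept and show the site's own save UI.
console.log(shouldInterceptSave({ ctrlKey: true, metaKey: false, altKey: false, key: 's' })); // true
// AltGr+S on a Polish layout (seen as Ctrl+Alt+S): let the browser insert "ś".
console.log(shouldInterceptSave({ ctrlKey: true, metaKey: false, altKey: true, key: 's' })); // false
```

A real page would call `e.preventDefault()` only when this returns true, leaving every other modifier combination for the browser and OS.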

nathell 1 day ago 0 replies      
> Our extra characters might look very much like Latin equivalents, and amount to only about 8% of letter distribution (you will hate them playing Scrabble)

Polish competitive Scrabble player here. It's not so much hate as a love-hate relationship. While it can be frustrating to be left with a vowelless rack containing a ź in the endgame, for the most part these letters are quite desirable. Indeed, three of them (ą, ę, ó) are vowels worth 5 points each, and only two of them don't appear in any two-letter word. Even the ź is fairly easy to get rid of, given an open enough (read: not completely blocked) board. They're nowhere near as irritating as the Q (and to a lesser extent Z) in English Scrabble.

Sami_Lehtinen 1 day ago 0 replies      
Noticed similar issues with the official Australian visa / immigration pages. You can't simply fill some forms with your email address using a Finnish keyboard. Why? Because they block usage of the AltGr key on their page. They also prevent using the clipboard, blocking the copy-paste option for that sign. The user has to be smart enough to switch to a US keyboard, enter the @ sign, and then switch back. So this is nothing new, but it's absolutely rude on the part of the site designers to vandalize basic functionality like that. Normally @ is produced by AltGr + 2. I guess they have a BOFH department: let's make this really annoying and prevent people from getting their business done, laugh.
V-2 1 day ago 3 replies      
"While France, Germany, and other countries got their early PC with customized keyboards whose layouts mirrored closely the typewriters that came before [...] in Poland, we had to find another way of inputting the extra 9 diacritics unique to our language."

Did we? I dare say that your historical account is inaccurate. We stuck to our traditional layout for a while. What does it matter if the keyboard wasn't customized in the physical sense anyway? You should be looking at the screen, not at the keys. If I recall correctly, the Polish typewriter layout (PN-87) was still available on Windows 95/98. The now prevalent Alt + something one was called the "programmer's layout", and the name itself indicates that it wasn't originally thought of as everyman's layout.

Domestic software certainly used the typewriter layout - for instance http://pl.wikipedia.org/wiki/TAG_%28edytor_tekstu%29, once a very popular word processor for PC, last version released in 1996.

The typewriter layout finally got extinct, but what spelled its ultimate demise was the internet, I believe.

Early netiquette actually forbade using Polish diacritics, because of encoding issues - in that era you could never be sure whether the other person would read "gwóźdź" or a string of mojibake, so it was considered good practice to stick to Latin characters only.

Meaning that users didn't get to feel the pain of having to press Alt + whatever all the time, and so they got hooked on default QWERTY.

As the last of the Mohicans, I use the typewriter layout to this very day (for typing in Polish; I alternate), only I had to recreate it myself (using the Microsoft Keyboard Layout Creator)*

It allows for much faster typing compared to these inconvenient right-Alt shortcuts. Swapping Y with Z by itself is a win for a Polish speaker (writer), given that Z is much more common in Polish. While its frequency is a mere 0.07% in English, it reaches 4.9% in Polish, placing it in the top 10. Thus putting it under the weakest finger of all - your left pinky - isn't very considerate.


* Funnily enough - I'm not flame baiting - when I decided to try Linux, hailed for its customizability, I found out that recreating my favorite layout wasn't as easy. I got lots of advice on various forums, but no one had a simple recipe for me. Admittedly this was a few years back.

hawat 1 day ago 1 reply      
1) There is something called a "Polish typewriter keyboard"; now it is more like a unicorn than a computer part, but... I really had one in 1995.

2) In the '80s Poland produced some computers "on its own", like the Elwro: http://upload.wikimedia.org/wikipedia/commons/6/65/Elwro_800... And they had a Polish typewriter keyboard...

3) And there was a Polish-made clone of the IBM PC: http://pl.wikipedia.org/wiki/Mazovia_%28komputer%29#mediavie... and, yes - it has a Polish typewriter keyboard...
qewrffewqwfqew 1 day ago 3 replies      
Yet another example of why websites shouldn't be able to hijack keys that have meaning outside them (in the browser itself, or in the desktop environment). It still blows my mind that browsers permit this and don't provide an easy option to stop such abusive (and dangerous!) behaviour.

The workaround is typical of web stuff as well: deal with the symptoms one by one while leaving the underlying problem unquestioned.

59nadir 9 hours ago 0 replies      
The much bigger problem is that browsers out there are crap and won't let you re-map your keys as you want them.

How is it that we're well into the 2010's and I still can't get (my own) emacs keybindings in Firefox/Chrome without some silly extension that doesn't work 100%?

I can download uzbl, dwb or any other browser like it and edit my keys as I see fit, for emacs/vim keybinding behavior, without limits.

I don't know about Medium, but I also find that most websites have these hidden shortcuts and you can't even re-map those. Why is there no choice in all this? Browser developers and websites can spend hundreds of thousands of dollars on UX/UI research, but can't figure out that I might want to re-map some keybindings.

dredmorbius 1 day ago 1 reply      
Or, as I found on a BBC news page earlier today (and reported via feedback), my middle mouse button on a link wasn't opening it in a new tab -- the page was opening in the present tab.

Don't break user controls.

Seriously. Fucking. Annoying.

The tendency of sites to also force all links to open in a new tab is similarly annoying. I've got the means to choose. Leave it to me.

Another argument for disabling JS pretty much everywhere.

ojanik 1 day ago 0 replies      
This is how you type the euro sign on a lot of keyboard layouts; no communism needed.

Looking at your code, you're also blocking Win+S on Windows and Ctrl+S on Mac for no good reason.

Extrapolating from your experience: what about the Shift key?

When you decide to block default behaviour, do your research; not everyone is on a Mac with an American layout.

Spoom 1 day ago 0 replies      
What an interesting article. I don't think many people outside of our field realize how obscure some bugs can be, and the lengths needed to diagnose them. Not every bug, of course; not even 95% of bugs. But every now and again, you get something that really tries your problem solving skills... and inevitably it ends up being something small like not checking the state of every meta key.
cysun 21 hours ago 1 reply      
I regularly use English, German and Romanian. I find it easier to learn the two extra standard layouts (not the programmer version) and change between them with Alt-Shift than to press Alt-X for every diacritic. The WPM count falls drastically when using the Alt key all the time.
zokier 22 hours ago 4 replies      
While most commentary is (fairly) focused on browser behavior and keyboard hijacking, I'd point out another viewpoint: why the frak does every Latin-alphabet-using nation need its own keyboard layout? A couple of diacritics is hardly a good reason to reshuffle the whole keyboard.
odiroot 1 day ago 2 replies      
Oh nice, an article about my native language on HN's front page.

Currently living in Germany, I actually find our approach to special characters really elegant. I was surprised how different German (and French) keyboards are.

I wonder whether these different layouts can affect for example your programming prowess.

viraptor 1 day ago 0 replies      
> To find room for the extra letters, typewriters needed to dispense with some punctuation, most notably semicolons (comma + backspace + colon), and parentheses (replaced in common use by slashes).

I've seen the slashes-as-parentheses on many (mostly older) documents before, but never knew why. Suddenly this makes so much sense. Unfortunately what makes less sense is official translators still using // for () these days.

CalRobert 1 day ago 0 replies      
Reading this reminded me of how incredibly annoying it is that new OSes don't always underline the shortcut key. It's maddening! I can't seem to get it to work in Unity; auto-mnemonic true or false makes no difference.
frik 1 day ago 0 replies      
The German keyboard has the AltGr key too; it's like pressing Ctrl + Alt, though there is no special character mapped on the "s" key.
w__m 1 day ago 0 replies      
still, how about choosing a font that properly supports Latin Extended characters for medium.com/polish?

just saying.

DogeDogeDoge 1 day ago 2 replies      
We Polish people have a hard language :(
rvennar 22 hours ago 0 replies      
Ok, but Polish is a dying language, so I don't think it matters that much.
moron4hire 1 day ago 0 replies      
I first learned about this issue while working on Primrose.

Things get really hairy in the browser when you start talking about JavaScript, the keyboard, and international support.

For example, there is no way to know what keyboard layout a user is using. With Primrose, I guess based on the user's default language, but even that isn't perfect, as a lot of non-US English speakers just show up as "en", with no further locale description, so I also provide a select list with all of the available choices.

It's very difficult to know with certainty what character a user is intending to type. KeyCode 51 with no modifier keys is "3" on most keyboards, except French, where it is "#" (they essentially swap the casing of the number row).

The number pad numbers send different keycodes than the number row numbers, but the number pad arrows (if you turn off the numlock) send the same keycodes as the arrow keys.
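The ambiguity just described can be demonstrated with mocked-up event objects. The field values here are illustrative assumptions, since the real values depend on the user's (undetectable) layout; `describeKey`, `usEvent` and `frEvent` are hypothetical names:

```javascript
// Sketch of why raw keyCodes are ambiguous. In modern KeyboardEvent terms,
// `code` names the physical key, `key` gives the layout-resolved character,
// and the legacy `keyCode` is the same number regardless of layout.
function describeKey(e) {
  return 'physical=' + e.code + ' produced=' + e.key + ' keyCode=' + e.keyCode;
}

// The same physical digit-3 key under two layouts (values assumed):
const usEvent = { code: 'Digit3', key: '3', keyCode: 51 }; // US QWERTY
const frEvent = { code: 'Digit3', key: '"', keyCode: 51 }; // French AZERTY, unshifted punctuation row

console.log(describeKey(usEvent)); // physical=Digit3 produced=3 keyCode=51
console.log(describeKey(frEvent)); // physical=Digit3 produced=" keyCode=51
```

(`KeyboardEvent.code` and `.key` were not yet widely available in 2015, which is part of why the guessing described above was necessary.)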

In languages with deadkey support for diacritics (such as French and German), the deadkey keyCode isn't sent until the second key is typed. Then, they are sent in rapid succession.

There is absolutely no way to know what is going on with IMEs.

And there is no reliable way to interact with the soft keyboard on mobile devices. Some versions of soft keyboards don't send the arrow key keyCodes. There is no standardization of what keys should be available, so you can't guarantee that your user will easily be able to type your shortcuts (I know of only one keyboard on Android that even has CTRL or ALT). And it's nearly impossible to know how much space the soft keyboard is going to take up on the screen (you can figure it out on Android, eventually. It's impossible on iOS.).

So, you have one of two choices, if you're trying to implement any sort of browser-based application that involves heavy use of the keyboard.

Option 1: You can either create a hidden, surrogate text area in which the user actually types, unbeknownst to them, and run a sync process between the content they type and the content you display. This has several problems: you have to wait for keyUp to activate shortcut commands. The sync process can be costly (especially in the context for which Primrose exists: WebVR) and it is difficult to keep the cursor view in sync when dealing with mouse/touch interactions. Oh, speaking of pointer actions, you have to make sure your surrogate text area is positioned exactly under the displayed text field, with the text appearing the same apparent size as the displayed text, because when it gains focus the browser will scroll the view to it.

But it will work, except for certain use cases. It won't work well on mobile, and it won't play nicely with WebVR (which is the entire point of Primrose, anyway), especially for Asian users using an IME.

So Option 1 isn't a good option.

Option 2: completely reimplement the key input stack, i.e. create all the keyboards and soft keyboards and IMEs your users will ever need. Completely ignore what the OS and the browser tell you, take only the raw keyCodes, and reconstruct what is happening. It's a lot of work, a lot to debug, but at least there is a path to actually solve every problem.

guard-of-terra 1 day ago 2 replies      
I would theorize that the second most used Slavic language is not Ukrainian but Serbo-Croatian, at the risk of getting hated by everybody mentioned.

It also has the rare property of coming with two alphabets - Cyrillic and Latin.

Show HN: Matterhorn - Your new project manager
points by Linnea  1 day ago   201 comments top 57
SEMW 1 day ago 11 replies      
First thought: you've priced it quite ambitiously. A 10 user team would be $90/month, compared to $20 for JIRA + JIRA Agile, $42 for Asana premium, $50 for Trello business (or free for normal trello), $35 for Pivotal Tracker, ...

(Which isn't to say it necessarily should be cheaper, only that it seems surprising to see that price without any attempt to compare or justify why you believe that e.g. it's already, at launch, worth 3x as much as Pivotal Tracker).

eranation 1 day ago 2 replies      
Looks great, very appealing landing page, message is passed clearly.

Feedback stuff:

1) agree with pricing plan, too high for large teams

2) Call to action - I saw the "sign in" button immediately, but had to scroll all the way down for sign up. It would be nice to have a floating sign-up button right next to sign in, and on the sign-in page a link such as "not registered? sign up here" in case people click the wrong button.

3) This is more due to my personal taste, but no Gmail sign-up lowers my will to spend time testing the product. I want to click, click, play with it for a few minutes, and if it's good suggest it to my team. I don't have time to fill in a form (I'm exaggerating a little, but this goes through a lot of people's minds; filling in forms is annoying for some people)

4) I'd like to see a demo of the product. Having a dummy project that anyone can see with a "guest" login would be really great. (good if you are not willing to add Gmail login for any reason)

5) If not a demo, at least a video. The GIF is great, so I think a longer video would be even better; it seems like a very slick UI.

All in all it looks great; I like the hybrid approach, will give it a look.

adamgravitis 22 hours ago 2 replies      
The rule of thumb with this kind of thing is you try to use plausible data in your screen shots. Having "moar project" and "even more project", and "super project" and "new project" makes it hard to envision what your product is really useful for.
noodle 8 hours ago 0 replies      
The number of people balking at $9/mo in this thread is amazing. $9 is nothing compared to salaries. If it saves you 1 hour of productivity per month, you get 10x return on that cost straight away.
dnlmzw 1 day ago 1 reply      
I think your landing page looks good, but overall I have a hard time seeing exactly how it makes life easier for me.

I have worked in most of the roles you describe, but even after having scrolled to the bottom, I don't exactly understand how it is tailored to the roles.

What I was left with is that you have boards and progressbars. Doesn't really compare to the stuff I already use.

Maybe you could explain even better how each role can tailor an interface to meet their needs, and what you provide better than other software out there.

eastbayjake 21 hours ago 3 replies      
The fact that this post has made it to #2 despite the huge number of PM tools out there reveals two things:

(1) Project Management is painful and the existing providers still don't fully grasp what the market needs/wants

(2) Matterhorn must be doing something right to get over the noise, so kudos to your team! For me, it's your realization that not everyone manages their workflow in the same way, so being able to coordinate while giving people their personal preferences is really powerful. I wish I could see a demo!

uniclaude 23 hours ago 1 reply      
This page does a very good job at explaining what this project is about. I'd like to have a comparison of features with the competition somewhere (on another page maybe?), but this is very good.

Interesting project, I'll give it a try.

eterm 1 day ago 2 replies      
Disappointed to see it's a saas app with no ability to self-host. For project management I think information is too confidential to be using a third party cloud provider.

I'll keep it bookmarked though, perhaps my attitude in this regard is out of date.

dsr_ 1 day ago 1 reply      
No visible privacy/security policy. I'm going to trust confidential company information to somebody on the net who doesn't address privacy and security concerns on the very first page? No, I'm not.
thejosh 1 day ago 1 reply      
Your signup form no worky.

Mixed Content: The page at 'https://matterhorn.io/register' was loaded over HTTPS, but requested an insecure XMLHttpRequest endpoint 'http://api.matterhorn.dev/users.json'. This request has been blocked; the content must be served over HTTPS.
  app-6c6e7022ec9660d68ebd624054790399.js:3 send
  app-6c6e7022ec9660d68ebd624054790399.js:3 p.extend.ajax
  app-6c6e7022ec9660d68ebd624054790399.js:6467 (anonymous function)
  app-6c6e7022ec9660d68ebd624054790399.js:3 p.event.dispatch
  app-6c6e7022ec9660d68ebd624054790399.js:3 g.handle.h

Bedon292 1 day ago 1 reply      
Looks nice, but was sorely disappointed there was no self hosting option. I would be very interested if I could keep my data on my servers, but cannot move to this otherwise.
sergiotapia 22 hours ago 1 reply      
This service looks really cool, I like that it's flexible for different preferences. Some guys on my team like Trello's columns better while others like full blown tickets a la JIRA.

Will you offer some sort of micro plan for small teams of 5 or 6?

Here's a good comparison of the various PM tools so you can compare Matterhorn to the established players. http://stackshare.io/stackups/trello-vs-asana-vs-basecamp-vs...

efriese 23 hours ago 0 replies      
I like it. Here are a few thoughts:

1. I agree about the price point. If it were $5 I might be a buyer. I know it's insignificant, but it adds up when you have a team of people.
2. I would run this entire site on SSL. There's some good stuff to sniff here.
3. I don't like having to type in the name of a project and clicking the + to get to the form. I would rather just click + and get the form. Didn't seem intuitive to me.
4. When leaving comments on a feature, it was duplicating the comment. My username was there and then the same comment with blank user data.
5. On the "dashboard" or whatever you want to call it, tickets that I have assigned to myself for today aren't showing up. I have to go to the planner.
6. I clicked "Board" and then all of the links died. Refresh got me to the board.
7. When I move things in the Planner, they don't seem to take effect. I can't seem to get a ticket into Today.

Overall, I like what you've done here. I like being able to segment by customer and the board. Simple interface that is good once you learn the workflow.

Jun8 1 day ago 6 replies      
I've been looking for a long time for good PM software that is standalone (personal use) or can be self-hosted (easy setup a must). Cloud hosting is a show-stopper when you're in a large company.
Maro 1 day ago 0 replies      
"Designer, Developer, Copywriter, Project Manager, Strategist, Accountant, Client Relations"

Maybe it'd be better to concentrate on one use-case. Say Accountant. That way you have more focus. Once you've talked to 10-100 accountants and made sure the product is good for them, move on to the next use-case.

fnordfnordfnord 20 hours ago 1 reply      
How about educational licensing? I'm forever trying to fit real PM tools into my curriculum but it is hard to do.

Also, subscriptions are really hard to deal with at an .edu, and at 16 weeks/semester (14 really), the math is 4 months x $9.00 x nStudents.

bdg 1 day ago 0 replies      
From a sales perspective I can't tell the difference between this and trello in less than 60 seconds.

When I load the page I see I can "try it" but I don't want to invest the time. Show me exactly why your product is going to be more valuable and worth the time to migrate over.

q2 18 hours ago 0 replies      
As others have listed, in the project management space there are already various tools: Jira, Asana, etc.

Personally, this space appears more crowded. So far I have read only positive feedback on existing tools like Jira/Trello etc., and I have not read about many bad experiences (maybe I have limited exposure). Is there really a window/space for a new entrant?

To the current users of other tools: are these tools (Jira/Trello etc.) fundamentally different from each other, or do they have just incremental differences while being fundamentally similar?

catern 23 hours ago 0 replies      
> You all have slightly different workflows: workflows that enable you to do your thing in the best way possible.

> You could force everyone to track their time and their progress in exactly the same way, even if it doesn't fit their workflow

These lines briefly made me hopeful that this was some clever layer in front of all the various project management systems that would allow them to talk to each other.

I would value this because I vastly prefer the seemingly uncommon terminal-based workflow, and such a layer would presumably be able to talk to Emacs org-mode or whatever, just as it talks to Jira.

As it is, this is just another project management system that doesn't fit my workflow.

kudu 18 hours ago 1 reply      
Do you have any discounts or even free hosting for nonprofits? I run one which could really benefit from something like this but it's way out of our budget.
capex 17 hours ago 0 replies      
What's so attractive about project management tools? Why do we see so many companies doing the same thing with slight variations?
benmccann 1 day ago 1 reply      
I got a 404 when clicking the reset password link that was emailed to me.

The create new project button was broken.

It's unclear what the pricing is. The homepage says $9/user/month. When I logged in I think the price was 9/user/month

cpursley 1 day ago 5 replies      
Isn't project management software a solved problem already?

Seems like all this effort on these type of pm systems could have been applied to some niche market that's still using custom MS Access systems built in the early 2000s.

frik 23 hours ago 2 replies      
A British SaaS named after a mountain in Switzerland. http://en.wikipedia.org/wiki/Matterhorn

It seems Uber started a new hype of repurposing German words. The opposite is happening in German-speaking countries: "Handy" for cell phone and "public viewing" for watching a live TV event on a projected wall.

Can one integrate one's mailbox? Outlook and MS Project Server/SharePoint are a good example, though there is room for improvement.

emiller829 22 hours ago 2 replies      
This may be nitpicking, but I really liked the marketing material here, aside from this phrase:

"Do you need a way to divide your resources across multiple projects[...]"

It's a pet peeve of mine that so many processes and tools use the phrase "resource" in place of person. It may not have been what was meant by "resources" here, but that's how it reads.

How about:

"Do you need a way to manage multiple projects[...]"

or:

"Do you need a way to divide your time between multiple projects[...]"

peterevans 22 hours ago 0 replies      
Definitely more information about your integrations would be really helpful. For software like this, having an integration with Zendesk to bring tickets into the tracker or Github Issues into the tracker is essential; to the point where having an open API is great, but you're probably going to have to do the legwork for those integrations.

Having said that, I think there's a lot of room for improvement in the issue tracking space. Good luck!

karka91 1 day ago 1 reply      
I see nothing about integrations or an API. That's a bit disappointing.
Animats 19 hours ago 0 replies      
Aw, it's just a project management tool. I was expecting an automated project manager, like Microsoft Middle Manager 3.0.
gk1 1 day ago 0 replies      
Can you explain the benefits of using this over any of the other project management apps like Trello and Basecamp?

Also: The centered text is annoying to read when there's a series of paragraphs.

apunic 20 hours ago 0 replies      
I see that many upvotes and comments--can anyone summarize in a few bullets why this tool is superior to the hundreds of other project management tools?

The landing page and product, though very nice and stylish, don't seem to offer any outstanding feature, or did I miss something?

Edit: this comment was downranked in the thread in less than 50 seconds; does anyone have an idea why?

devonoel 22 hours ago 1 reply      
Honestly, my biggest issue with this landing page is the dummy text in the screenshots. It's nitpicking for sure, but it would look nicer if you took the time to give the projects realistic names in the screenshots and whatnot. Also, that Sign In button needs another 5-10px of margin at the top.

Otherwise it's a very nice landing page in my opinion.

digital-rubber 6 hours ago 0 replies      
While reading others' first thoughts:

My first thought was: hey, is this an add-on for GitLab? It does look a lot like it. So my initial quest was to look for where I could download the community edition of this, but there isn't one?

cheald 22 hours ago 0 replies      
The screenshots need to be not-test-data. Make up some fake company with fake tasks; looking at screenshots of a development environment leaves me underwhelmed.

What I want to know is "Why should I use this over Asana?"; the copy doesn't address it, and the screenshots leave me unsure as to the specific use cases for the product.

Guthur 6 hours ago 0 replies      
I was really hoping from the title that I could finally find a way to get rid of our PMs.

But it's just another SCRUM/agile board which seem to just give PMs the means to layer more complex processes on top of my job.

subpixel 1 day ago 2 replies      
First thought: show me more app when I load your page, less aspirational lifestyle props. (Is that a moustache brush?)
colinmegill 21 hours ago 0 replies      
I feel like this is already picked apart by more focused competitors that already exist. For instance, the todos part of this app is competing with Todoist et al., the kanban board competes with Trello et al., etc.
cvburgess 22 hours ago 1 reply      
Does anyone know if this does (or has plans to) integrate with GitHub / GitHub issues? I have a pretty similar setup with Trello right now that feeds off of various repos, but this would be a simpler setup if it integrated nicely.
sdrothrock 13 hours ago 0 replies      
Are there any tools like this that support Japanese? The only one I've found so far is JIRA.
ssmoot 23 hours ago 0 replies      
The copy needs a fair bit of work. It needs to be more brief. The sentence fragments are difficult to read. It only works if I imagine two voices, like one of those commercials employing a fake conversation.
untilHellbanned 1 day ago 1 reply      
Looks nice but sign in being http, whereas landing page being https is backwards.
schuettla 4 hours ago 0 replies      
interface looks nice and slick. gonna give it a try
dccoolgai 1 day ago 1 reply      
Looks promising. Seems vaguely Trello-ish. Can you compare and contrast? Specifically, why would I pay for this when I get Trello for free? And Trello has a really good API. Anything comparable here?
teh_klev 1 day ago 0 replies      
As a "project management" tool, can this do critical path analysis, resource allocation, Gantt charts (gotta love a Gantt chart) etc. The usual PM stuff we use in MS Project?
Linnea 23 hours ago 0 replies      
Thanks for giving Matterhorn a chance! It's really good for us to see feedback on the price point. We've been completely bootstrapped up to this point and are trying to find a price point that is good for our users but at the same time generates what we need to keep the project going and allows us to refine and develop its features.

Would be great to hear any feedback you have on the features, as you seem to know what you want from pm tools.

saukrates 1 day ago 0 replies      
Looks interesting, but lack of task dependency would be a deal breaker. One of the reasons our team has stuck with Smartsheets.

I agree with earlier comment asking for a demo project.

LandoCalrissian 23 hours ago 1 reply      
Looks very nice. Small deal, but the web fonts appear to be getting blocked for me so you may want to host those on the same domain. Keep up the good work!
temuze 21 hours ago 1 reply      
I'd make it freemium, like Slack or Trello. I'm hesitant to sign up for anything or pay for something without trying it.
teachingaway 1 day ago 2 replies      
Gray text on a light-gray background is difficult to read.


hsuresh 1 day ago 0 replies      
Congrats on the launch! How is it different from asana/trello and a host of other tools? Why should someone use this over those tools?

All the best!

brandon272 1 day ago 1 reply      
Get a demo online ASAP. I have no interest in signing up, confirming email, etc. just to see what the product even looks like.
tehabe 1 day ago 1 reply      
Why is it so hard to tell who is making all this? Where are they? Where is this app hosted? Where is the company?

I mean, this is not aimed at casual users but at people who might want to use this on a daily basis with very important stuff.

And they are supposed to trust an anonymous website?

This is really confusing to me.

thejosh 1 day ago 1 reply      
Really cool.

Looks like Asana?, but with a focus for agencies who have multiple projects / deadlines?

amalhotra123 1 day ago 2 replies      
registration page doesn't work
higherpurpose 1 day ago 1 reply      
Would you be able to offer it for free for up to 5 members? I figure this would spread the word of mouth quicker and once startups begin using it, and have enough money after they increase their team beyond 5 members, they'll just upgrade to it, rather than switch.

Or do you think free members aren't worth the hassle?

VLM 1 day ago 0 replies      
Looks intra-team. How does it handle inter-team access control and/or some kind of read-only mode for non-paying users who just want to know what's up?
AngryMike 23 hours ago 1 reply      
It's really annoying viewing a website (especially one that asks for cash) and not finding a page that talks about the team members behind the project. No accountability; no way I'd sign up.
niels_bom 1 day ago 2 replies      
Side note:

The avatars are quite stereotypical: male developer, female designer, male project manager. Why don't you switch it up? There are female developers too.

Authentication Cheat Sheet
points by colund  2 days ago   149 comments top 22
oneeyedpigeon 2 days ago 14 replies      
"Applications should enforce password complexity rules to discourage easy to guess passwords." - ARGH!

To clarify, to avoid downvotes for a non-'productive' comment: I firmly disagree, since this will probably result in me having to pick a password that's harder to remember than I otherwise would. It might also make it more awkward to type quickly, making shoulder-surfing easier.

(Note that this is probably not i18n-friendly, either)

beobab 2 days ago 4 replies      
"An application should respond with a generic error message regardless of whether the user ID or password was incorrect."

I really don't like this advice (although I see why they put it in there).

I often use different email addresses for different services so that I can determine who sells on email addresses (depending on how much I trust them), and quite often I can't remember which email address I signed up with (was that mojang@...com or minecraft@...com).

At least if I see "user not recognised", I know to try a different email address.

billyhoffman 1 day ago 6 replies      
"Maximum password length should not be set too low, as it will prevent users from creating passphrases. Typical maximum length is 128 characters."

Why would you ever have a maximum password length at all? bcrypt or (god forbid) your secure hashing algorithm of choice doesn't care about input length, and has a fixed output length to stick in a database. Why on earth would you limit the password length to anything short of something so insanely large (1024, etc.) that it doesn't even matter?
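A quick sketch of the point about hash inputs, with one caveat the comment above glosses over: SHA-256-family digests really do accept arbitrary-length input with fixed-length output, but classic bcrypt truncates input at 72 bytes, which is why long passphrases are sometimes pre-hashed first.

```python
import hashlib

# SHA-256 accepts any input length and always emits a fixed-size digest,
# so storage never constrains how long a password may be.
short_digest = hashlib.sha256(b"hunter2").hexdigest()
long_digest = hashlib.sha256(b"x" * 100_000).hexdigest()
print(len(short_digest), len(long_digest))  # 64 64

# Caveat: classic bcrypt truncates its input at 72 bytes, so very long
# passphrases are sometimes pre-hashed (e.g. with SHA-256) before bcrypt.
```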

birdmanjeremy 1 day ago 3 replies      
Every time you introduce a password constraint, you've reduced the potential password complexity. I absolutely hate arbitrary password requirements. "not more than 2 identical characters in a row"? WTF? Stop with this nonsense.
AUmrysh 1 day ago 2 replies      
A big one I've seen is more related to the TLS cheat sheet [1] they link to on that page.

Many sites will send session tokens over http because they don't set the "secure" cookie flag. It's a simple thing to do, and prevents a malicious ARP poison or DNS attack from potentially hijacking an account.

You'd be surprised how many sites are vulnerable to such attacks. Reddit, parts of Ebay, several university websites, and many other sites still are vulnerable to session hijacking.

I think people writing web libraries need to start building "sane defaults" concerning security. All cookies should be secure by default, and only those who know what they are doing should turn them off. It's not that much extra overhead, and the potential benefits outweigh the increased processing and bandwidth.

1: https://www.owasp.org/index.php/Transport_Layer_Protection_C...
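As a concrete sketch of the flag being discussed, using Python's standard-library `http.cookies` (the token value is made up):

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "opaque-token-value"   # hypothetical session token
cookie["session"]["secure"] = True    # browser only sends it over HTTPS
cookie["session"]["httponly"] = True  # not readable from page JavaScript
cookie["session"]["path"] = "/"

# This is the value a server would emit in a Set-Cookie header.
print(cookie["session"].OutputString())
```

With `Secure` set, the session token is never transmitted over plain HTTP, which is what defeats the ARP-poisoning and DNS hijack scenarios described above.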

pc86 1 day ago 2 replies      
> not more than 2 identical characters in a row (e.g., 111 not allowed)

Why? If my password is id8FK38f@&&#d is it inherently less secure if 111 appears in the middle of it somewhere?

darkhorn 1 day ago 0 replies      
Some of the suggestions are bad. Why are they enforcing English characters, like a-z? For example, on GitHub I type a lowercase non-Latin letter and then it wants me to write a lowercase letter. WTF? It is lowercase! And more secure than an English letter!
Karunamon 1 day ago 1 reply      
"The correct response does not indicate if the user ID or password is the incorrect parameter and hence inferring a valid user ID."

ARGH. This is a usability nightmare - moreso when the recovery system implements the same rule.

"Okay, I had an account on this website, which email address was it again?"

*try logging in a few times*

"Hm.. I must have forgotten the password. Off to reset!"

*go through the recovery process*

*recovery page indicates an email will be sent*

*email never comes*

"Wait, so are they being 'really secure', or is email just broken right now?"

*wait a couple hours*

*forget about the site*

samspot 1 day ago 2 replies      
Do people REALLY brute force passwords? Do people REALLY brute force all lowercase, all latin combinations up to 20 characters before trying symbols, uppercase and numbers?

I am very skeptical that the '3/4 complexity rules' approach is making systems meaningfully more secure. I've had all kinds of passwords, but I've never lost them to brute force. Every time it was because someone got inside a company and made off with the database.

If complexity rules don't add anything, they should be discarded in the name of usability.

cddotdotslash 2 days ago 3 replies      
Just a hypothetical, but what if an application started encouraging users to enter a "login sentence" instead of a password. i.e.: "Please enter a sentence that you'll be asked to remember each time you login." Obviously, the standard constraints of length and complexity (albeit slightly altered) can be enforced.

It's much easier for me to remember "Please close the window, I'm cold." than it is for me to remember "XSDJd94*(lo03X.._".

The "horse battery staple" XKCD comes to mind.

snarfy 1 day ago 1 reply      
Password complexity rules are stupid. The only thing that matters is the total entropy. "Entropy too low" is the only error a user should receive when coming up with a password.

Those complexity rules are the result of an entire industry blindly following the best practices of an old unix DES crypt function. It's dumb and it should stop.
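An "entropy too low" check could look something like this crude sketch. It uses a charset-size estimate, which overestimates real passwords built from dictionary words; a production checker (e.g. zxcvbn) models far more than this, so treat the threshold and pool sizes as assumptions.

```python
import math

def estimated_entropy_bits(password: str) -> float:
    """Naive estimate: length * log2(size of the character pool used)."""
    pool = 0
    if any(c.islower() for c in password):
        pool += 26
    if any(c.isupper() for c in password):
        pool += 26
    if any(c.isdigit() for c in password):
        pool += 10
    if any(not c.isalnum() for c in password):
        pool += 33  # rough count of printable ASCII symbols
    return len(password) * math.log2(pool) if pool else 0.0

def check(password: str, minimum_bits: float = 60.0) -> str:
    """The single error message the comment argues for."""
    if estimated_entropy_bits(password) < minimum_bits:
        return "Entropy too low"
    return "ok"

print(check("P@ssw0rd"))                      # Entropy too low
print(check("correct horse battery staple"))  # ok
```

Note how this flips the usual complexity rules: a long all-lowercase passphrase passes while a short "compliant" password fails.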


bohinjc 1 day ago 1 reply      

"The application may return a different HTTP Error code depending on the authentication attempt response. It may respond with a 200 for a positive result and a *403* for a negative result."
I would say a 401 - Unauthorized with proper WWW-Authenticate header.

403 means Forbidden, which applies when you try to access a resource without permission/authorization.
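A minimal sketch of that distinction (the handler shape and realm name are made up for illustration):

```python
def login_response(authenticated: bool):
    """Return (status, headers) for a login attempt.

    401 means "authentication failed or missing" and, per RFC 7235, must be
    accompanied by a WWW-Authenticate challenge header. 403 is reserved for
    "authenticated, but not allowed to access this resource".
    """
    if authenticated:
        return 200, {}
    return 401, {"WWW-Authenticate": 'Basic realm="example"'}

status, headers = login_response(False)
print(status, headers)
```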

Also, in their Password Storage Cheat Sheet [https://www.owasp.org/index.php/Password_Storage_Cheat_Sheet], they seems to recommend :

  Select:
    PBKDF2 [*4] when FIPS certification or enterprise support on many platforms is required;
    scrypt [*5] where resisting any/all hardware accelerated attacks is necessary but support isn't;
    bcrypt where PBKDF2 or scrypt support is not available.
AFAIK, things are not so binary :

* https://news.ycombinator.com/item?id=3724560

* http://security.stackexchange.com/questions/4781/do-any-secu...

* http://security.stackexchange.com/questions/26245/is-bcrypt-...

dpweb 1 day ago 0 replies      
Simplicity should be a primary goal in the methods used to protect systems. Just because the methods of protection are easy doesn't mean they're easy to crack. For instance, with a decent-size password and a lockout you're set as far as brute-force attacks go. They are not going to guess a 10-letter password in 5 tries. After x tries, make them reset. Two-factor auth for really important stuff; isn't that pretty much it?

I believe we're seeing more successful attacks from the use of security techniques that are unnecessarily complex and not completely understood (or only partially implemented) by most engineers than because passwords aren't long enough.

pippy 1 day ago 1 reply      
He forgot an important modern rule on authentication: don't do it.

If you can get another system to do it for you (Persona, OpenID, GitHub, Google, Facebook, or Twitter), it's more secure for the end user. They have features such as two-factor authentication and fraud detection, they manage password resets for you, and the end user is more likely to already have an account.

Many developers don't agree with this on a moral level, as you are giving power to a third party. However, developers are developers, and if you do it yourself you're bound to do at least one thing wrong.

Gargoyle888 1 day ago 1 reply      
Isn't 2FA the best approach? I'm just asking.

A problem here where I work is that every application must have a different password and it must change every 90 days. Consequently everyone has a spreadsheet with his passwords written down because nobody could possibly remember them all.

It seems to me that with 2FA, one simple password is adequate. Two independent devices need to be compromised and brute force is ineffective since the turn around time is at least several seconds between tries.

zobzu 1 day ago 0 replies      
I don't really like this page. It's a good effort.. but. (no ands!).

- Most things are just a flyby, such as "hey look, here's a paragraph that tells you what MFA is", but it doesn't tell you how to use it.

- Password rules are outdated ("use caps, 10 chars, numbers, etc.!"). The horse-staple approach has been the new standard for years now... and is way better. Generating the password for the user is often not a bad idea.

- No mention of upcoming techs like FIDO/U2F.

sekasi 1 day ago 0 replies      
This is not an accusing comment, but more of a request for more information:

"Passphrases shorter than 20 characters are usually considered weak if they only consist of lower case Latin characters."

This goes against the concept of diceware-generated passwords of 4-6 short words, doesn't it? Where in this equation am I getting it wrong? I've been approaching passwords like this for a while now.
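The arithmetic behind the disagreement, sketched out: a diceware word is drawn from the standard 7776-word (6^5) list, worth about 12.9 bits each, so the "20 lowercase characters" rule and the "4-6 words" rule are measuring against different attacker models.

```python
import math

BITS_PER_DICEWARE_WORD = math.log2(7776)  # standard 6^5-word list, ~12.9 bits

# If the attacker knows you used diceware, entropy accrues per *word*:
five_words = 5 * BITS_PER_DICEWARE_WORD  # ~64.6 bits

# A naive per-character model credits a comparable ~25-char string far more:
per_char = 25 * math.log2(26)            # ~117.5 bits

print(round(five_words, 1), round(per_char, 1))  # 64.6 117.5
```

So a 5-word diceware phrase is strong against per-word guessing even though a per-character entropy formula would rate the same string much higher; the cheat sheet's 20-character rule is implicitly using the per-character model.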

jakubp 1 day ago 1 reply      
"An application should respond with a generic error message regardless of whether the user ID or password was incorrect. It should also give no indication to the status of an existing account."

If so.. then how should you respond on the registration page when someone tries to open a new account with the username/email address of an existing account?

caseyf7 1 day ago 0 replies      
And if you use email addresses as user names, verify that users actually own the email address. Apple doesn't do this, and I'm amazed at how many people sign up for iTunes with email addresses I control.
homakov 1 day ago 0 replies      
OTP passwords don't fight client-side malware, lol.
jjarmoc 1 day ago 1 reply      
Some of this is good advice, but there's a BIG point on the 'Password Storage Cheat Sheet' that's linked and referenced by the above article that I don't think is solid: https://www.owasp.org/index.php/Password_Storage_Cheat_Sheet...

Specifically, it suggests storing passwords using: `return [salt] + HMAC-SHA-256([key], [salt] + [credential]);`

It goes on to note that the key must be protected as private, strongly random, and stored outside the credential store, while the salt (can we please call it a nonce?) can be safely stored against credentials.

I'm still not comfortable with this construction. Stick with their earlier advice and use scrypt, bcrypt, or PBKDF2. That's my order of preference too, which differs from theirs somewhat, but that's a minor quibble; all three are reasonable.

The problem with their construction is that HMAC-SHA-256() is designed to be fast, and so attackers have the opportunity to make a lot of guesses quickly. The secrecy of the key helps over a straight SHA256(...), but not a lot, for the following reasons:

1 - It assumes an attacker who compromises the credential store won't also have the key.

2 - It assumes an attacker is unable to recover the key.

1) is a valid assumption for certain classes of attack. If your key is stored in an environment variable or something, while credentials are stored in a database, an attacker who compromises your database via a SQL injection won't have the key. But the problem is that an attacker who compromises the application may. If I have full remote code execution on the server (and you have a bad day then, passwords aside) I'll have the key. Or maybe an attacker has an arbitrary file read (not quite as bad a day), and you store your key in a flat file on disk. Or an attacker can cause your application to generate a stack trace (disclosing runtime details) and view the key...

You get the idea - there's lots of potential ways to get that key regardless how you store it, and everything hinges on that. Once they have the key, they can mount dictionary or brute force attacks against credentials just as they would against `SHA256([salt]+[credential])`

2) is a valid assumption only if the attacker doesn't know a single password's plaintext value and the key is sufficiently long and random. If I know the password to my own account (and I likely do) or any other account (let's say one user on the site uses a password that was recovered in another breach) this scheme fails.

Suppose my salt is "SALTYSALT" (okay, so my PRNG sucks; also a detail to be wary of) and my password is ye olde "PASSWORD" (yah, this sucks too.. but it's my crappy password. Maybe I added it just to observe the resulting HMAC value?) Now I can just try calculating `HMAC-SHA-256("A", "SALTYSALT" + "PASSWORD")`. If that doesn't match, try `HMAC-SHA-256("B", "SALTYSALT" + "PASSWORD")` and so on. The one thing I don't really know that would help is the length of the key. If it's long enough (we'd want at least 256 bits) and strongly random, I may have a difficult time. If it's short (and they make no recommendation on its length, just that you "Generate the key using cryptographically-strong pseudo-random data"), I'm going to crack the key, and then I'm back to attacking all the other credentials.

I MIGHT be convinced of this being reasonable if it can be ensured that all the HMAC calculations are done in something like an HSM or TPM which generates a large key internally and doesn't expose it, even to the application. But that's probably not the scenario we're talking about here. Even then, you've got nothing to lose by using an adaptive algorithm rather than the HMAC construct. For anything else, it's far safer (and easier, really!) to use scrypt, bcrypt, or PBKDF2. So just do that.
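Point 2 above can be demonstrated in a few lines. This sketch uses a deliberately tiny 2-byte lowercase key so it finishes instantly; real keys are longer, but the attack cost grows with key length only, never with any user's password strength, which is exactly why a fast HMAC is the wrong primitive here.

```python
import hashlib
import hmac
import itertools
import string

def stored_hash(key: bytes, salt: bytes, credential: bytes) -> bytes:
    # The construction being critiqued: HMAC-SHA-256(key, salt + credential)
    return hmac.new(key, salt + credential, hashlib.sha256).digest()

# Attacker knows one (salt, password, hash) triple -- e.g. their own account.
salt, known_password = b"SALTYSALT", b"PASSWORD"
leaked = stored_hash(b"zq", salt, known_password)  # server's weak secret key

# Brute-force the key using only the known-plaintext account:
alphabet = string.ascii_lowercase.encode()
recovered = next(
    bytes(candidate)
    for candidate in itertools.product(alphabet, repeat=2)
    if hmac.compare_digest(stored_hash(bytes(candidate), salt, known_password), leaked)
)
print(recovered)  # b'zq' -- with the key, every other hash is open to attack
```

With the key recovered, the attacker is back to dictionary attacks against every other credential, which is the scenario adaptive hashes (scrypt/bcrypt/PBKDF2) are built to slow down.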

peterwwillis 1 day ago 0 replies      
This doesn't touch on commercial authentication managers and how horribly they can be implemented. There's no authorization cheat sheet either.

They also make assumptions like "When multi-factor is implemented and active, account lockout may no longer be necessary." Sure, until someone finds a simple hole in one of the factors and the rest become trivially brute-forced, sniffed, phished, etc. The chain is only as strong as the weakest link.

PyPy 2.5.0 released
points by cyber1  17 hours ago   54 comments top 9
aidos 5 hours ago 2 replies      
"We would like to thank our donors for the continued support of the PyPy project, and for those who donate to our three sub-projects, as well as our volunteers and contributors (10 new commiters joined PyPy since the last release). Weve shown quite a bit of progress, but were slowly running out of funds. Please consider donating more, or even better convince your employer to donate, so we can finish those projects! The three sub-projects are:"

PyPy are doing such incredible work and they seem to ask for very little in the way of funding to make it happen (and they're very explicit about where they're going to spend the money).

Why is it that there aren't more donations from the industry? Is it just a marketing issue? Do they need to do a snazzy kickstarter video to build the hype?

Fede_V 8 hours ago 2 replies      
I'm really looking forward to the numpy-specific announcements. Numpy is THE basic building block for every single scientific library. If PyPy can get a high-performance numpy, that will go a long way towards allowing scientific users to use PyPy (there is still the detail of libraries that use the C API to wrap C libraries, but cffi is pretty neat).
kbd 15 hours ago 4 replies      
Congrats to the PyPy team on what sounds like a pretty big release!

Something in the release notes caught my eye:

> The past months have seen pypy mature and grow, as rpython becomes the goto solution for writing fast dynamic language interpreters.

I asked this question[1] on the Perl 6 thread from a few days ago but didn't get an answer. Does anyone know why on earth the Perl 6 folks created yet another dynamic language VM+JIT with MoarVM instead of taking advantage of all the great work done with PyPy? Does anyone know whether PyPy was even considered as a target before writing MoarVM?

[1] https://news.ycombinator.com/item?id=8982229

mrmagooey 13 hours ago 1 reply      
Assuming trunk and 2.5.0 are roughly the same thing it seems like a decent performance increase http://speed.pypy.org/
bmoresbest55 1 hour ago 0 replies      
This is very exciting. I will have to look into using PyPy more regularly.
rcarmo 16 hours ago 1 reply      
FYI, you'll still need to compile a specific gevent branch if you want to use it with this. lxml built fine, uWSGI seems OK too (except for the lack of gevent workers in my build).

Things seem adequately speedy, haven't investigated the network throughput tweaks yet.

ldng 5 hours ago 1 reply      
Now if only someone could fix SWIG to be compatible with PyPy..
ngoldbaum 8 hours ago 1 reply      
Does anyone have any experience with numpypy? Is it useful for real work yet?
tbrock 16 hours ago 3 replies      
I wish you could compile scons with pypy.
Layoff Underway at IBM
points by ajarmst  18 hours ago   181 comments top 19
obtino 18 hours ago 6 replies      
Before the IBM apologists start commenting: I was an IBM employee a few years ago and I would never recommend it as a good place to work. You were constantly worried about your job, and there were cuts to basic resources all the time. It's not at all surprising to see this happen. IBM only cares about its shareholders, not its employees or customers.
kjs3 32 minutes ago 0 replies      
Cringely said in his article that a quarter of the IBM workforce would "get their paperwork" to be laid off in the last week of January or so. He was very specific, and he's not even close. At some point the layoffs may total a quarter of the workforce, but that's not what his histrionic diatribe said. If he now gets to backpedal and "restate" what he said and be right in his predictions, then so do the forecasters who said New York was going to be buried in snow and yet only got dusted.

People have been predicting the imminent death of IBM, with detailed litanies of the myriad ways it has unrecoverably failed, pretty much every day of the 30 years I've been in IT. The thing about predicting the end of the world is that if you do it long enough and lack the humility to be ashamed of all the times you were wrong, you'll eventually get to be right.

Jgrubb 18 hours ago 5 replies      
> Of course, the appearance of the situation, in the eyes of employees and the public, is not being helped by the fact that amid IBM's actions comes the board's announcement on Friday of a big raise for CEO Ginni Rometty.
smackfu 18 hours ago 1 reply      
Reminder that Cringely's original claim from Jan 22nd was:

To fix its business problems and speed up its transformation, next week about 26 percent of IBM's employees will be getting phone calls from their managers. A few hours later a package will appear on their doorsteps with all the paperwork. Project Chrome will hit many of the worldwide services operations. The USA will be hit hard, but so will other locations. IBM's contractors can expect regular furloughs in 2015. One in four IBMers reading this column will probably start looking for a new job next week. Those employees will all be gone by the end of February.

Now he's trying to spin that he never said "layoffs". Not sure why the IEEE is still trusting him.

anonbanker 15 hours ago 0 replies      
Nobody ever got fired for choosing IBM.

Except as an employer.

Someone1234 17 hours ago 3 replies      
But nobody will admit that there is a massive ageism problem in technology? While it is nice that some countries have moved to protect against ageism, it is an extremely common problem that few wish to address or take seriously.
mkozlows 18 hours ago 3 replies      
"IBM said there would be thousands of layoffs. We believe there have been 5,000 layoffs. Clearly IBM was lying."


TeMPOraL 18 hours ago 1 reply      
Wait, what... does that mean that management at IBM suddenly realized they don't need 1/4 of the company? Since they presumably won't be hiring new people in place of all those laid off, I wonder what is going on there? Did they recently have an extremely successful merger with a very similar company? Did 1/4 of the company provide zero output? Or did Watson get so good it can actually replace engineers and sales people?
sciurus 17 hours ago 1 reply      
Ouch. I lived in Columbia, Missouri for a couple years starting in 2012. The IBM office had just opened in 2010 and was a big deal. The city and state gave them large tax incentives to open it there. If IBM really laid off 150 people there, that's a huge cut.

aceperry 12 hours ago 0 replies      
It would be nice to hear from honest IBM managers, who are doing the firing and downgrading of employees, what is really going on. Though I'm pretty sure most people know what the score is.
bronson 10 hours ago 1 reply      
Why is the tone of the comments on this story so different from those of just last week?


One hypothesis: some of the IBM managers who were commenting on that story now realize Cringely was at least partly right.

akurilin 9 hours ago 0 replies      
Does this impact SoftLayer in any way?
orionblastar 15 hours ago 0 replies      
IBM didn't take the microcomputer seriously, until it saw how well Apple did and then made an IBM PC to compete with it.

They made a deal with Microsoft for DOS, but didn't make the deal exclusive so Microsoft sold their own version of DOS to the PC Cloners.

IBM made the PS/2 series with Microchannel as Clone Killers. VGA was a better video standard, and Creative Labs had the Sound Blaster for better audio. IBM's Microchannel flopped because people wanted to keep using their ISA cards. IBM had OS/2 and Microsoft had their own version of OS/2 and Windows, and Microsoft took their OS/2 NT 3.0, made Windows NT 3.1 out of it, and stabbed IBM in the back for a second time.

IBM sold their printer line to Lexmark, and their x86/x64 PC line to Lenovo; IBM didn't know how to turn a profit on them.

When IBM couldn't supply the PowerPC chips to Apple for their Macintosh line, because IBM was making PowerPC chips for video game consoles as a priority, Apple switched to Intel chips. Then later video game consoles switched to Intel or AMD chips. IBM open sourced their PowerPC chips eventually.

IBM bought out Lotus and basically ran it into the ground, letting Excel replace Lotus 1-2-3. Lotus SmartSuite was never updated to compete with Microsoft Office or to support modern Windows systems, so it fell away, and IBM forked OpenOffice.org to make Lotus Symphony. That also went nowhere.

IBM still earns money from mainframes and contract support. I think IBM got into Linux and Java contracting as well.

But IBM has changed over the decades and it is not the same company it once was. It fell into a trap of maximizing shareholder values rather than making the customer experience a better one like Apple did. Microsoft also suffers from the same sort of thing that IBM does which explains why Microsoft Surface sales tanked.

IBM needs a big reboot, and to focus on making the customer experience better. Mobile apps are an area they could focus on: make the IBM Cloud, then make IBM Lotus Symphony for iOS and Android, store the documents on the IBM Cloud, and offer subscriptions for more storage. They should also make Lotus Domino and Lotus Notes for mobile devices, and make a set of developer tools to make Android and iOS apps easier to program.

bhouston 18 hours ago 1 reply      
Still no real confirmation of figures but I guess there is purposeful obfuscation here.
rodgerd 18 hours ago 0 replies      
So maybe not uncritically accepting IBM's press releases would be a good idea?
e0m 16 hours ago 0 replies      
What a remarkable collapse from the "Big Blue" of the 1960s. Is there really anything sexier than the thought of a brand new IBM 360 getting loaded into a Pan Am jet-powered aircraft?
Elrac 18 hours ago 0 replies      
A goodly chunk of the company I work at was recently sold to IBM. Not my chunk, but still - color me deeply un-reassured!
q2 16 hours ago 1 reply      
Recently, Apple announced the highest quarterly income in corporate history, and now IBM is going through the biggest layoffs in corporate history. It seems to be the season of superlatives in corporate history.

But what a contrast between Apple and IBM!!!

drawkbox 11 hours ago 1 reply      
IBM seems like it would be a terrifying place to work, Initech level. With the 1,2,3 system there is also probably tons of project protectionism going on, and you probably have to wear a tie.
Harper Lee to publish Mockingbird 'sequel'
points by InternetGiant  1 day ago   77 comments top 18
chimeracoder 20 hours ago 3 replies      
I was incredibly excited to see this news upon seeing the headline in the New York Times, and surprised, because Harper Lee has been a recluse for almost her entire life since writing To Kill a Mockingbird, and has repeatedly insisted that she had no desire to publish another book ("I wouldn't go through the pressure and publicity I went through with To Kill a Mockingbird for any amount of money. Second, I have said what I wanted to say and I will not say it again."[0])

After doing a bit of digging, however, I'm a bit concerned. Now, Lee is almost 90, and has suffered a stroke that seems to have had lasting effects. She filed a lawsuit in 2007 against the son-in-law of her former agent, claiming that he took advantage of her mental state during her recovery and duped her into assigning him the copyright to To Kill a Mockingbird[1]. For much of her adult life, her sister handled press relations and shielded Lee from these pressures. Her sister passed away three months ago, and suddenly this new book comes to light[2].

I really hope these suspicions are wrong, and that there's nothing shady at play here. I'm excited to read the book, but I can't help but be skeptical of the timing.

[0] https://en.wikipedia.org/wiki/Harper_Lee#After_To_Kill_a_Moc...

[1] https://en.wikipedia.org/wiki/Harper_Lee#Lawsuit_to_regain_c...

[2] (I dislike linking to Gawker Media sites on principle, but Jezebel actually wrote a good post digging into the details of this - "Be Suspicious of the New Harper Lee Novel".)

nathanb 22 hours ago 1 reply      
Embarrassingly, I first misread this as Harper Lee working on a sequel to the Hunger Games finale Mockingjay, and I was so confused....

I was forced to read To Kill a Mockingbird for school. I started reading it with a bad attitude. After I finished it, I immediately turned back to the first page and reread it, not with a school mindset but with a "this is amazing literature that I need in my life" mindset.

If she was writing this "sequel" at the same time she was writing the original, they're likely to contain the same themes and the same timeless way of looking at life, society, and what it means to be human. I don't know if any novel could survive the pressure of being a long-delayed sequel of To Kill a Mockingbird, but I'm definitely willing to let it try!

chengiz 18 hours ago 4 replies      
It seems everybody here loves "To Kill a Mockingbird". To me, it's a well-written but ultimately shallow novel. Finch is your typical woman's fantasy man: great at fatherhood, great at his work, morally upright, totally scrupulous, and, yes, the best shot in the county. The black people in the novel rarely get a voice, except one of platitudes, and the race-relations material is totally black and white (excuse the pun), with no particular insight. It counts as literature only because of its propitious timing around the Civil Rights movement. It's a fine school reading-list book, but that is all it is.
TwiztidK 22 hours ago 1 reply      
A writer working on a biography of Harper Lee came to my high school 6 or 7 years ago to give a presentation about her. He told us that she had written another book but didn't want to publish it due to the pressure she felt from the success of To Kill a Mockingbird, so she planned on having it published after she died. This is probably the book he was talking about.

I can't remember exactly who the writer was, but he spoke about his experience interviewing Kurt Vonnegut for his biography, so it was probably Charles Shields.

xianshou 23 hours ago 2 replies      
On the basis of regression to the mean (http://en.wikipedia.org/wiki/Regression_toward_the_mean), or what we might call the "J.K. Rowling effect," it would be far too much to hope that the sequel will match the original. Nonetheless, this has got to set some sort of record for the gap between a novel and its sequel.

Interestingly, there is a list of gaps between film sequels (http://en.wikipedia.org/wiki/List_of_the_longest_gaps_betwee...), and the longest gap is over 63 years, but there is no such list for books!

fnordfnordfnord 22 hours ago 0 replies      
For anyone who hasn't seen it. "Hey Boo" is a pretty good documentary about Mockingbird and features Harper Lee. http://www.pbs.org/wnet/americanmasters/episodes/harper-lee-...
renglis 23 hours ago 0 replies      
60 years on and it remains relevant and insightful into the events of today. We should all reread it.

I look forward to the new book.

pervycreeper 23 hours ago 1 reply      
Despite widespread changes in social attitudes on some topics, To Kill a Mockingbird is still as relevant to today's world as it was when it was originally published.
wmeredith 23 hours ago 3 replies      
I'd be interested to hear more about the creative dynamic between this new book and To Kill a Mockingbird. It says she put this new one aside 60 years ago to write TKaMB, but it features the characters later in their lives. I wonder if she was sketching out backstory to flesh out the characters, found that more compelling, and so pivoted and wrote To Kill a Mockingbird instead?
etep 21 hours ago 0 replies      
Wow. Am currently reading "The Mockingbird Next Door" by Marja Mills. Briefly, Marja gained unprecedented access to the private life of Nelle Harper Lee. It is extremely interesting, and I am quite surprised at this turn of events. Good news!
fmax30 21 hours ago 1 reply      
To Kill a Mockingbird was the first novel I read in my life. I was only 11 (in 6th grade) at the time, and it took me around 3-4 months (summer break) to complete. To be honest, this was the book that made me realize that reading English literature can be an extremely amazing and insightful experience. Granted, I didn't understand many things in it at the time, but it kept me hooked.

Also, I remember thinking that Jean was a boy till I was 20-30 pages in and realized that she was in fact a girl.

samatman 23 hours ago 1 reply      
Perhaps David Gerrold will complete The War Against the Chtorr after all! We've only been waiting on that for twenty years...
ojbyrne 23 hours ago 1 reply      
I thought I was seeing an Onion headline at first.
jqm 11 hours ago 0 replies      
I'm sure the first book was very good but I never read it and feel negatively about it. Why? Simple. Because it was on the high school required reading list. I looked around at the teachers, looked around at the town, looked around at the larger society in which I lived, and decided very early I was having no part of indoctrination.

It's too bad. Because it probably is a good book. The bible might be as well. But I'll never know because suspicion of indoctrination ruined it for me. Maybe this is a personal failing. But putting books on the high school required reading list is a good way to make thinking people suspicious of motive in my view.

icantthinkofone 15 hours ago 1 reply      
Great. But what does this have to do with Hacker News?
bshimmin 23 hours ago 1 reply      
I think this will be a massive seller, though (depressingly) probably nothing in comparison to a new Harry Potter.

I also think it'll be great.

taivare 20 hours ago 0 replies      
Ask HN seems to be down, so sorry for the off-topic question: I want to publish eBook-only and have all revenues go to charity. There's not a lot of info on the web regarding this subject; all I've found is one author who put a link at the end of his eBook.
mw44118 20 hours ago 4 replies      
To Kill a Mockingbird perpetuates the idea that women make false rape accusations. We shouldn't celebrate such a hurtful topic.
How to get into an admin account on a Windows computer
points by joewee  2 days ago   98 comments top 22
richthegeek 2 days ago 2 replies      
Wow this takes me back to busting into Windows 98 via the Help > Print > something > Explorer method.

Actually, someone linked to something like it in the comments: https://i.imgur.com/n9Th4q5.jpg

I can't imagine how difficult it must be to secure a login system with so much added attack surface due to accessibility features and dumb users.

discreditable 1 day ago 2 replies      
This is one reason why we use BitLocker with TPM unlock on all of our student PCs. If the system is booted in an unusual way like this, the disk won't unlock. We had issues our first year with a student who booted a Linux DVD and reset the local administrator password, and another instance where a student removed the hard drive to do the same (after we'd locked our BIOS settings). Encrypting the drives has 100% prevented these problems, and makes us feel better about sending failed drives to vendors without wiping them.
runeks 2 days ago 3 replies      
If you have physical access to the PC, why not just boot up Ubuntu, and replace the files that need replacing? No need to rely on a feature in Windows that can be disabled.

If full-disk encryption isn't used, there's really not much the underlying OS can do to prevent this.

dguido 2 days ago 1 reply      
This trick is widely used by hackers to maintain persistent access to large enterprise networks. You can RDP around and press Shift 5 times to pop up cmd.exe on machines you've modified. No malware needed.

Performing this modification DOES NOT require rebooting the computer into startup repair if you have admin access (which is typical after you get hacked). There is a process that watches system32 for modifications and resets them, but if you script it and change the files fast enough it won't notice.

The utility of this trick is really insane. Would you realize that a hacker had backdoor access to your computer via RDP if the only thing that was modified was cmd.exe and sethc.exe swapping places?
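
The swap described above boils down to two file copies. A sketch of the usual sequence (drive letter and password are illustrative; run from a recovery-environment or live-media command prompt):

```bat
REM Back up the real Sticky Keys binary, then put cmd.exe in its place.
copy c:\windows\system32\sethc.exe c:\windows\system32\sethc.bak
copy /y c:\windows\system32\cmd.exe c:\windows\system32\sethc.exe

REM After a reboot, pressing Shift five times at the login screen now opens
REM a SYSTEM-level cmd.exe, from which any local password can be reset:
net user Administrator NewP@ss123
```

Swapping the two files back afterwards (restoring sethc.exe from the backup) removes the obvious trace, which is what makes the persistence angle above so nasty.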

ikeboy 2 days ago 1 reply      
Or you can boot from kon-boot.

That makes any password work for login, but only once, and it's undetectable after a reboot.

http://piotrbania.com/all/kon-boot/index2.html is the free version, which can be installed with http://www.pendrivelinux.com/yumi-multiboot-usb-creator/. That only works on 32-bit systems and up to Windows 7.

If you want the paid version, you can either buy it from http://www.piotrbania.com/all/kon-boot/, or get it from http://kickass.so/kon-boot-v2-4-remedy-for-lost-password-mum... or your usual source.

zarify 2 days ago 1 reply      
I found out about this method last week and it's my new favorite after having to fix up a couple of weird situations caused by someone joining their laptop to the school domain, and someone locking themself out of their only admin account.

I was quite surprised to find that the command shell ran with admin privs from before the login process, since I was unable to elevate in any other way (including by trying system restore back before joining, or trying to get into safe mode).

crb 1 day ago 0 replies      
As Raymond Chen would say: "it rather involves being on the other side of this airtight hatchway".
xenophonf 2 days ago 1 reply      
This has been the de facto standard for local lost-admin-password resets since Windows NT was released, although back in the day the trick was to change the login screen saver to the command interpreter. The imgur album is a nice touch, I guess, but if you do a web search for "windows lost admin password" (or similar), you'll get some variation on these instructions.
x0n 2 days ago 7 replies      
Physical access and all bets are off. This is not a vulnerability.
ericlathrop 2 days ago 0 replies      
I did exactly this over the holidays to get into a deceased family member's computer.
vkr 2 days ago 1 reply      
If booting from a CD or USB is allowed, you can just change the password using the Pogostick [1] live cd - or any linux live cd.

[1] http://pogostick.net/~pnh/ntpasswd/

slenk 2 days ago 1 reply      
If they use BitLocker, you can't get to the startup repair without the recovery key...
pjc50 1 day ago 1 reply      
I think it's now two decades since I first found out how to overwrite BIOS passwords from MS-DOS QBasic in order to break into school computers. Somehow reassuring that kids are still doing the same thing.

The comments about bitlocker and TPM are a good reminder that he who controls the boot sequence controls the computer / phone / car / IoT toaster.

ominous 2 days ago 5 replies      
user comment on imgur: "As a network manager, disable startup repair via group policy. Fixed."

Does this fix it?

runjake 1 day ago 0 replies      
Why is this on here and not flagged?

Similarly, on any UNIX OS, you can boot into single-user mode and obtain root access.

These are ancient techniques.

The obligatory response every. single. time this is brought up: Local access? All bets are off. (With certain caveats, of course).

chdir 2 days ago 2 replies      
Is there good software (preferably open source) to remotely kill/erase your data on a Windows machine? Not all versions have BitLocker. TrueCrypt's future is unpredictable.
bigp3t3 1 day ago 0 replies      
I refer to it as the sticky-keys hack. Been using it since before I left High School. (admittedly only 5 years ago)
blueskin_ 2 days ago 2 replies      
Anyone who hates annoyance will already have disabled the stickykeys shortcut.

A more reliable way if you're somewhat prepared is to just use a live linux system and NTPasswd. Any distro that can be installed to a flash drive should have it available.

bonif 1 day ago 0 replies      
Reminds me of the ol' WinNuke days (Windows 95).
utxaa 2 days ago 1 reply      
this is nothing new.

why go through all the trouble? just take the disk out and mount it elsewhere.

that's why one has to encrypt drives.

stephengillie 2 days ago 0 replies      
Wow, it took 5 days for this to get reposted on HN? It's kinda cool watching the web go round.
dEnigma 2 days ago 0 replies      
Had to do something similar at work a couple months ago when a Windows computer somehow ended up without a single account with administrator privileges. But I replaced magnifier.exe with cmd (for some reason I even switched them around) and then opened the supercharged "magnifier" on the login screen. I also didn't use "Startup repair" but an Ubuntu Live CD.

I then got called away from the computer before I could switch magnifier and cmd back to normal, and somebody started using the computer again, with hilarious consequences.

(Note: I'm not the IT guy in this company xD)

Why We Should Build Cloud Cities on Venus
points by cryptoz  1 day ago   184 comments top 18
aetherson 1 day ago 9 replies      
In terms of human habitation, the big advantages of Venus' atmosphere are:

1. It has essentially Earth-normal gravity. Zero-G long-term is a death sentence for humans. The long term effects of Martian gravity are unknown. It seems safe to assume that Venus gravity is fine.

2. It is protected from impact and radiation by an atmosphere in a way that Mars or asteroids never will be.

3. It has an essentially limitless supply of carbon, oxygen, nitrogen, and sulfur available to it.

4. It is reasonably well-positioned for solar power.

5. It is relatively temperate.

6. Low pressure differential between inside a habitat and outside of one means that leaks are less severe and containment breaches are easier to react to.

But there is at least one huge disadvantage:

1. Everything besides carbon, oxygen, nitrogen, and sulfur needs to be imported, either from a fantastically hostile surface, or down through reentry into an atmosphere in a deep gravity well and rendezvousing with an aerostat.

That disadvantage is a pretty goddamn significant one for human habitation.

But it's not a disadvantage for long-term robot probes, and it's... less... of a disadvantage for a minimal-population scientific base.

scarmig 1 day ago 4 replies      
Thought of the day, which is said mostly tongue-in-cheek:

If we really must think of a planet to terraform, it seems like the best body in the solar system to work on might be... Earth. There are large swaths of it that are more or less currently uninhabitable in its polar regions, and most changes to the atmosphere we make have their greatest effects in those same polar regions. A 5C increase in global average temperatures might be a 10C or more increase in those polar regions, opening up millions of square miles for human habitation and intensive agriculture. It's even self-sustaining: about a quarter of known fossil fuel reserves are available in the Arctic, and those will become much more accessible with warming temperatures.

Needless to say, there are costs elsewhere on the planet for that kind of intervention, but those are very definitely far, far smaller costs than building floating cities on Venus or reheating Mars' core. Many of those costs could even be recouped by adding a terraforming tax on new residents of polar regions and redistributing them to Bengali refugees fleeing their homes.

Economically, it makes a lot more sense than investing resources in terraforming other bodies in our solar system, though it doesn't offer the same risk mitigation.

Disclaimer: I think terraforming Earth, purposefully or not, is a bad idea. I think it's just a better idea than investing real resources into terraforming other planets.

simon_ 1 day ago 4 replies      
Could someone weigh in on the following quote? Sounds fishy / wrong to me:

To put this in perspective, a balloon that is one kilometer in diameter is capable of lifting about 700,000 tons, or the weight of two Empire State Buildings. Add a second balloon of the same size and the lift capacity of these two balloons increases exponentially: it's now capable of supporting nearly 6 million tons of weight.
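
A back-of-envelope check of the quoted figures, with assumed round-number densities (roughly 1.6 kg/m^3 for Venus's CO2 atmosphere at the habitable cloud layer, 1.2 kg/m^3 for breathable air inside). Lift grows with the cube of the diameter, so a balloon of twice the diameter lifts 8x as much (700,000 x 8 = 5.6 million, matching "nearly 6 million tons"), which suggests the article meant doubling the diameter, not adding a second balloon; a second identical balloon only doubles the lift, and nothing here is exponential:

```python
import math

# Rough assumed densities at Venus's ~50 km cloud layer (near 1 bar):
RHO_CO2_ATMOSPHERE = 1.6   # kg/m^3, outside the balloon
RHO_BREATHABLE_AIR = 1.2   # kg/m^3, inside the habitat balloon

def lift_tonnes(diameter_m, rho_out=RHO_CO2_ATMOSPHERE, rho_in=RHO_BREATHABLE_AIR):
    """Buoyant lift of a spherical balloon, in metric tons."""
    volume = (4.0 / 3.0) * math.pi * (diameter_m / 2.0) ** 3
    return volume * (rho_out - rho_in) / 1000.0  # kg -> metric tons

print(lift_tonnes(1000))                       # ~210,000 t with breathable air
print(lift_tonnes(2000) / lift_tonnes(1000))   # 8.0: doubling the diameter octuples the lift
```

With hydrogen as the lifting gas the density difference approaches ~1.5 kg/m^3, which is roughly where a 700,000-ton figure for a 1 km balloon would come from.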

nsxwolf 1 day ago 1 reply      
Do we know what it would look like to float in these safe regions? Would it be all haze, or clear skies? What color would the sky be? What would you see if you looked down?
j_baker 1 day ago 1 reply      
There's one gaping hole in the idea though: where are we going to get water?

One other interesting possibility is colonizing Mercury. It turns out to not be as crazy an idea as it seems at first. Mercury does have water, there's plenty of solar energy, and it's not too hot at the poles. The trickiest part would really be getting there. As deep as it is in the sun's gravity well, it would take 6 years to get there!

ChuckMcM 1 day ago 2 replies      
I've always been surprised that NASA hasn't gone for a blimp-type "probe" for Venus. Something that floated above the clouds and allowed us to do long-term observation from that vantage point.

We think in terms of "flying" through the atmosphere, but once it gets dense enough you can make the equivalent of fish that "swim" through it, with internal buoyancy-compensation bladders and a skin impervious to the atmosphere. Smaller probes could swim down into the clouds to collect data about the surface and other conditions.

If nothing else it would be completely different than exploring Mars :-)

mmanfrin 1 day ago 5 replies      
Question for those with science backgrounds: would it be possible to "seed" Venus with CO2-crunching algae, provided we figured out some way of suspending them in the atmosphere above the point where they'd cook to death? Some sort of superlight algae that could live in the clouds and crunch the CO2 into O2? It would grow and spread and begin reducing the pressure downwards, allowing it to also eat its way downwards.
feralley 1 day ago 0 replies      
Humans are too cheap.

They'll just pass away during the next big extinction event, despite knowing another will eventually come.

We are no better than the dinosaurs so far. Set up outposts on Mars, the moon, Venus; learn to deal with radiation.

Cat lovers.

cdwhite 1 day ago 0 replies      
A related previous article: "NASA Study Proposes Airships, Cloud Cities for Venus Exploration" (http://spectrum.ieee.org/aerospace/space-flight/nasa-study-p...). Discussion: https://news.ycombinator.com/item?id=8760732
jkot 1 day ago 2 replies      
The only argument for Venus's atmosphere is that it sucks much less than its surface. Any asteroid would be a better place. Someone must really love Star Wars.
disputin 1 day ago 1 reply      
At what depth underground does the temperature become tolerable?
stesch 1 day ago 1 reply      
"The Space Merchants" anyone? ;-)
vatotemking 1 day ago 1 reply      
Curious: how will they deal with the Venusian storms?
imaginenore 1 day ago 6 replies      
Would you rather be surrounded by very thin CO2 of Mars or clouds of sulfuric acid of Venus?

And no possibility to live on the surface?

At least we know how to transform CO2 into oxygen.

Mars just seems orders of magnitude more friendly.

joering2 1 day ago 2 replies      
"In more expansive visions, pumping Venus full of sulfur dioxide or hydrogen, or surrounding it in Sun shields, could terraform its climate into submission."

Couldn't bio-genetics create some sort of virus that inhabits Mars's or Venus's atmosphere and grows in it while converting it into a friendly environment? Or am I speaking pure sci-fi?

davesque 1 day ago 1 reply      
I'll take one.
glxc 1 day ago 0 replies      
Elon Musk has got to step up god damnit
agmcleod 1 day ago 2 replies      
I didn't read it through and through, but I think the benefit of going to Mars over Venus is longevity. The idea would be (provided we last long enough) that when the sun starts to swell, Earth will get too hot to live on, and therefore so would Venus. We'd still have time on Mars before having to move further away.
Heartbleed in Rust
points by glass-  2 days ago   132 comments top 18
MichaelGG 2 days ago 5 replies      
As the offending commenter, I apologize. Particularly to the Rust team for generating this negative publicity, and to the person I replied to, for asserting something false.

I misunderstood Heartbleed, exactly as Ted summarizes. I've no excuse other than commenting when I shouldn't. I am happy though to have my idiocy corrected as I'll comment better in the future.

The rest of the original thread does point out that I did examine every security advisory published by Microsoft over a year or two span, and that, from the descriptions, Rust would have prevented basically every serious (code exec) one. (Notable exceptions being failures in the sandboxed code loading, similar to the various Java in browser bugs.)

nikomatsakis 2 days ago 1 reply      
I don't know that anyone claimed that a bug similar or analogous to heartbleed couldn't be reproduced in Rust. If they did, that was certainly an overstatement. I think more concretely people claimed that unreachable code yields a warning in Rust, which is absolutely true, but certainly not equivalent to saying something like a heartbleed bug would not happen.

In general, Rust is fairly aggressive about linting for "small" details like unused variables, unreachable code, names that don't conform to expected conventions, unnecessary `mut` annotations, and so forth. I've found that these lints are surprisingly effective at catching bugs.

In particular, the lints about unused variables and unreachable code regularly catch bugs for me. These are invariably simple oversights ("just plain forgot to write the code I meant to write which would have used that variable"), but they would have caused devious problems that would have been quite a pain to track down.

I've also found that detailed use of types is similarly a great way to ensure that bugs like heartbleed are less common. Basically making sure that your types match as precisely as possible the shape of your data -- with no extra cases or weird hacks -- will help steer your code in the right direction. This is a technique you can apply in any language, but good, lightweight support for algebraic data types really makes it easier to do.
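
A minimal sketch of that last point, with illustrative types (assumed for this comment, not from any real TLS library): carrying the payload as a slice leaves no separate length field to fall out of sync with the data, and an exhaustive `match` plus the lints catch "extra cases" as they creep in:

```rust
// Illustrative message type: the payload slice carries its own length,
// so there is no separate length field to get out of sync with the data.
#[derive(Debug)]
enum Record<'a> {
    Heartbeat { payload: &'a [u8] },
    Close,
}

fn respond(r: &Record) -> Vec<u8> {
    // Exhaustive match: forget a variant and the compiler rejects it;
    // add code after the match arms that can't run and a lint fires.
    match r {
        Record::Heartbeat { payload } => payload.to_vec(),
        Record::Close => Vec::new(),
    }
}

fn main() {
    let msg = Record::Heartbeat { payload: b"ok" };
    println!("{:?}", respond(&msg)); // [111, 107]
}
```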

simias 2 days ago 4 replies      
I mostly agree with the premise: logic errors are always going to be there, at least until the compiler is an AI strong enough to catch them for us (and by then we probably won't need coders anyway...). There's no silver bullet; bad coders are always going to produce bad code. And I also don't like it when people claim that bug X or vulnerability Y wouldn't have happened if they had been using technology Z - they're just begging for this type of post.

That being said I'm a bit more skeptical of this part: "code no true C programmer would write : heartbleed :: code no true rust programmer would write :: (exercise for the reader)"

If I look at the examples in the article, the C version doesn't look that terrible or contrived to me. I wonder what the author means by "Survey says no true C programmer would ever write a program like that, either." It looks like a lot of C code I've read; there's nothing particularly weird about it.

On the other hand, the Rust version looks very foreign to me (and I've been writing quite a lot of Rust lately). You basically have to go out of your way to create the same issue.

I guess my point is that while it's true that as long as there are coders there will be bugs and security vulnerabilities, that doesn't mean we shouldn't try to make things better. And in my opinion, Rust makes it much more difficult to shoot yourself in the foot than plain C.

Torgo 1 day ago 1 reply      
Here is what I noticed about this, sorry if it is considered too off-topic:

There was an argument, about something specific and technical; It was refuted without singling out a specific person by name; without using humiliation or insults; using code to do so ("show me the code!"); and there was a polite acknowledgement and resolution.

This is an example of an interaction in a community that I think anyone would want to be a part of. Thank you.

geofft 1 day ago 0 replies      
I'm very confused at the argument here. The C code looks remarkably close to idiomatic. Not "good," mind you, but "idiomatic". The Rust code looks significantly more contrived to my eyes. I'm reading the blog post as arguing that they're equally contrived.

It's true that you can do terrible things in any language, but the test of a language is how easy it makes it to do the right thing in the common case (plus how possible it makes it to do what you want in the uncommon case, without either goal compromising the other).

Is there a reason that reusing the buffer makes sense in Rust? (Zero allocation?)

Also, is it not true that Rust lends itself well, probably better than C, to abstractions like bounds-checked substrings within a single buffer? BoringSSL has been doing this in C, and this definitely would have stopped Heartbleed:
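
For what it's worth, that kind of bounds-checked substring comes almost for free from Rust's slice type. A minimal sketch in modern Rust (not BoringSSL's actual CBS API, and not the thread-era `old_io` code):

```rust
// Echo back `claimed_len` bytes of a request buffer. A Rust slice
// carries its length, so an over-large claim yields None (via `get`)
// or a panic (via indexing) -- never a silent read of adjacent memory.
fn echo_payload(buffer: &[u8], claimed_len: usize) -> Option<&[u8]> {
    buffer.get(..claimed_len)
}
```

With indexing (`&buffer[..claimed_len]`) you get a panic on an over-large claim; with `get` the caller is forced to handle the failure instead.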


nickik 2 days ago 1 reply      
Some people wrote a completely new TLS stack in OCaml to combat this problem:


Here's a video about Mirage OS and this TLS stack from 31C3:

Trustworthy secure modular operating system engineering - http://media.ccc.de/browse/congress/2014/31c3_-_6443_-_en_-_...

Their goal is to reduce the trusted computing base to a minimum.

Rust could deliver some of the same benefits for writing high-performance, low-level code.

pacala 1 day ago 1 reply      
If I'm reading the blog code correctly, the error is trusting user input:

    // Rust
    let len = buffer[0] as usize;

    // C
    size_t len = buffer[0];
I'm no Rust hacker, but can I expect the Rust type system to be able to encode some form of tainting? Making the leaky sequence illegal:

    let len = buffer[0] as usize;
    // ERROR ERROR ERROR using unscrubbed user input ERROR ERROR ERROR
    buffer[0 .. len]
How exactly to encode tainting is left as an exercise to the reader :) But ideally it should be able to identify that the buffer is reused between two different requests, and that data tainted by the second request is used to index an array tainted with data from the first request. This seems right up Rust's alley, given the concurrency / allocation disambiguation support I've read about (alas, superficially) elsewhere.
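
A crude version of the tainting asked about here can already be encoded with a newtype, no dedicated type-system support required. This is only a sketch with hypothetical names:

```rust
// A length read straight off the wire. The wrapper deliberately
// exposes no arithmetic or indexing; the only way to extract a usize
// is to scrub it against the number of bytes actually received.
struct TaintedLen(u8);

impl TaintedLen {
    fn scrubbed(self, received: usize) -> Option<usize> {
        let len = self.0 as usize;
        if len <= received { Some(len) } else { None }
    }
}
```

It doesn't track buffer reuse across requests, as the comment hopes, but it does make `buffer[0 .. len]` with an unscrubbed `len` a compile error, since a `TaintedLen` is not a `usize`.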

leovonl 1 day ago 0 replies      
I fail to see the point of this whole discussion.

The code reflects exactly what the program is doing, and there's no undefined behaviour anywhere. There's no way to access anything outside the very delimited scope of "buffer" memory area, like stack variables or any other part of the program.

What's the point of using a high-level language for re-defining basic low-level operations on buffers and recreating everything using those low-level constructs without the proper boundary checks?

Of course, you can simply define a huge "unsafe" block and program everything inside it, but what's the point? That you have a language powerful enough to shoot yourself in the foot?

Compare that to C or C++: the unsafe block is always on. Any code block can have unsafe properties anywhere. Not only that, but you have ZERO guarantees of memory safety and other general operations. In short: high-level and low-level are totally mixed, with no way to isolate them.

Sorry, but if you can't see how Rust avoids a "Heartbleed" or any other similar issue, you either have no understanding of programming or no experience debugging anything.

And yes: security != safety, but please note you are the one mixing both concepts.

ajanuary 2 days ago 2 replies      
Wasn't the heartbleed issue that you could trick it into reading past the memory it had allocated? That's different to explicitly reusing memory you've allocated without clearing it in between.

The original claim was that rust would prevent the class of errors that caused Heartbleed. No one claimed rust would prevent you from writing a program with a different bug that just happens to exhibit similar behavior.

Buffer overruns are trickier to spot than explicit buffer reuse.

[Edit] An example of an actual buffer overrun, with no changes to pingback:


    $:/tmp # cat bleed.c
    #include <fcntl.h>
    #include <unistd.h>
    #include <assert.h>

    void
    pingback(char *path, char *outpath, unsigned char *buffer)
    {
            int fd;

            if ((fd = open(path, O_RDONLY)) == -1)
                    assert(!"open");
            if (read(fd, buffer, 256) < 1)
                    assert(!"read");
            close(fd);

            size_t len = buffer[0];

            if ((fd = creat(outpath, 0644)) == -1)
                    assert(!"creat");
            if (write(fd, buffer, len) != len)
                    assert(!"write");
            close(fd);
    }

    int
    main(int argc, char **argv)
    {
            unsigned char buffer2[10];
            unsigned char buffer1[10];

            pingback("yourping", "yourecho", buffer1);
            pingback("myping", "myecho", buffer2);
    }

    $:/tmp # gcc bleed.c && ./a.out && cat yourecho myecho
    #i have many secrets. this is one.
    #i know your
    one.
    +x-core:/tmp #

    C:\Users\ajanuary\Desktop>cat hearbleed.rs
    use std::old_io::File;

    fn pingback(path : Path, outpath : Path, buffer : &mut[u8]) {
            let mut fd = File::open(&path);
            match fd.read(buffer) {
                    Err(what) => panic!("say {}", what),
                    Ok(x) => if x < 1 { return; }
            }
            let len = buffer[0] as usize;
            let mut outfd = File::create(&outpath);
            match outfd.write_all(&buffer[0 .. len]) {
                    Err(what) => panic!("say {}", what),
                    Ok(_) => ()
            }
    }

    fn main() {
            let buffer2 = &mut[0u8; 10];
            let buffer1 = &mut[0u8; 10];
            pingback(Path::new("yourping"), Path::new("yourecho"), buffer1);
            pingback(Path::new("myping"), Path::new("myecho"), buffer2);
    }

    C:\Users\ajanuary\Desktop>hearbleed.exe
    thread '<main>' panicked at 'assertion failed: index.end <= self.len()', C:\bot\slave\nightly-dist-rustc-win-64\build\src\libcore\slice.rs:524

steveklabnik 2 days ago 1 reply      
This is why I get a little uncomfortable when people suggest Rust fixes tons of security issues. Yes, it will fix some of them. No, just because a Rust program compiles doesn't mean that it won't have problems.

Rust is _memory safe_. Nothing more, nothing less.

tormeh 2 days ago 2 replies      
My take-away: low-level code will burn you eventually, and unnecessarily low-level code will burn you unnecessarily.
kaoD 2 days ago 3 replies      
Slightly OT: while trying to understand the vulnerability I came across a Rust question.

Why can you do this?

    let mut outfd = File::create(&outpath);
    match outfd.write_all(&buffer[0 .. len]) { ... }
According to `old_io::File`'s doc[0] it returns an `IoResult<File>` which is an alias `type IoResult<T> = Result<T, IoError>` i.e. `Result<File, IoError>`. How come you can do `write_all` directly on a `Result<File, IoError>` without unwrapping the `File` first?

The example in the docs does something similar:

    let mut f = File::create(&Path::new("foo.txt"));
    f.write(b"This is a sample file");
So I guess I'm missing something here.

[0] http://doc.rust-lang.org/std/old_io/fs/struct.File.html#meth...
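
A guess at the answer: pre-1.0 `old_io` apparently shipped convenience impls of its `Reader`/`Writer` traits for `IoResult<T>` itself, so methods could be called on the un-unwrapped value and the error surfaced on the next operation. Stable `std::io` dropped that convenience; for contrast, a sketch of the modern shape, where the `Result` must be handled before `write_all` is even visible:

```rust
use std::fs::File;
use std::io::Write;

// Modern equivalent of the docs example: File::create returns
// io::Result<File>, and write_all only exists on File itself, so the
// error must be handled first (here, propagated with `?`).
fn write_sample(path: &str) -> std::io::Result<()> {
    let mut f = File::create(path)?;
    f.write_all(b"This is a sample file")?;
    Ok(())
}
```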

stevejones 2 days ago 1 reply      
No true blogger would wilfully misunderstand a buffer overrun vulnerability in order to score some cheap pageviews.

To put it simply, his examples are the equivalent of doing this:

    unsigned char data[4096];
    #define X (*(int *)(&data[0]))
    #define Y (*(int *)(&data[4]))
    ...
Basically, he's explicitly re-using a buffer; no buffer was overrun. In Rust you will not read something out of a buffer you didn't put there first; in C you can, and you might even read several GB out of a 256-byte buffer.

krick 1 day ago 1 reply      
So, let's say I'm on drugs and writing a TLS implementation without being a "real Rust programmer". What rules of thumb should I follow (let's assume I have that much self-control) to not end up with something like this?
yk 1 day ago 0 replies      
Well, of course this is possible. You can port a bug-compatible version of a program to any other language; that's called Turing completeness (and may involve writing an x64 emulator in VBScript). /snark

A bit more seriously: I wonder which security problems Rust would have if it were as well studied as C.

lmm 2 days ago 1 reply      
"Code no true C programmer would write", eh? And yet one did, in a high-profile, security-critical library. When you find Rust code like this in the wild, I'll start to believe in some kind of equivalence.
qguv 1 day ago 0 replies      
Shouldn't that analogy read:

code no true C programmer would write : heartbleed :: code no true rust programmer would write : (exercise for the reader)

mseepgood 1 day ago 2 replies      
Type / memory safety != security. The Rust people also mistake "no segmentation faults" for "no crashes".
Uber Opening Robotics Research Facility in Pittsburgh to Build Self-Driving Cars
points by foobarqux  1 day ago   138 comments top 18
krschultz 1 day ago 10 replies      
Google opens an Uber competitor. Uber opens a Google (research) competitor.

It will be interesting to see which one can commoditize the other. I feel that what Google has built (self-driving cars) is harder to replicate, but Google doesn't have the killer instinct that Uber does.

If Android is any guide, Google would rather spread its innovation around to partners than use it to build a killer first-party product. I imagine Google will license its future self-driving-car stack to the existing car manufacturers, Uber, et al.

mkempe 1 day ago 1 reply      
Back in 1987, Carnegie Mellon already had robotic vans [1] trying to drive around the campus. They were slow-moving, so students had ample time to cross the road as they approached. I probably have a picture in a box, somewhere. As I recall a major issue was how to stay on the road. [2]

[1] https://www.ri.cmu.edu/pub_files/pub2/thorpe_charles_1988_1/...

[2] https://www.ri.cmu.edu/pub_files/pub3/thorpe_charles_1988_1/...

omarforgotpwd 1 day ago 1 reply      
Google probably wants to run a big data center that coordinates and organizes all the world's cars. A reliable, centrally coordinated self-driving vehicle network would greatly cannibalize air and train travel and cargo, so the winners in this space will control the world's transportation network. Google will probably win, but either way the world economy will benefit massively from the reduction in transportation costs.
Qworg 1 day ago 0 replies      
Actual Uber announcement: http://blog.uber.com/carnegie-mellon

I'm not sure a "partnership" qualifies as "cleaning out".

Animats 1 day ago 1 reply      
That makes sense. CMU has been working with Cadillac on self-driving cars, and has demoed them in Washington DC traffic. The CMU/Cadillac car has nothing visible externally which marks it as a self-driving car. They may be closer to a production product than Google.
ThomPete 1 day ago 3 replies      
So Uber is using low-wage drivers to make money so it can invest in self-driving cars and make those drivers completely obsolete.

Talk about a moral paradox.

seanp2k2 1 day ago 2 replies      
I'm imagining a future where there are self-driving busses that route themselves based on where people are and where they want to go. For a dense urban area, I think this has real potential to increase route efficiency and bus utilization while decreasing crowding and unpopulated bus trips.
loceng 1 day ago 5 replies      
This feels a bit strange, no? Shouldn't all of their drivers immediately stop using Uber?
coldcode 1 day ago 2 replies      
Self-driving cars in a world where all cars are self-driving is doable. Self-driving cars sharing the road with the morons I deal with every day is not going to happen any time soon, much less dealing with accidents, road closures, ice, snow, rain and the occasional angry politician.
enahs-sf 1 day ago 0 replies      
I find it interesting that Google will probably mint a nice ROI from its Uber investment and ostensibly dump it all right back into research on robotic cars (I get that this isn't how it works, but the idea is novel). CMU has some of the most brilliant minds in robotics; it should be very interesting to see who wins this arms race.
terravion 1 day ago 0 replies      
This seems like it should be easy to source... should we call the Robotics Institute? "Cleaned out" seems a bit hyperbolic.
NittLion78 1 day ago 0 replies      
We're one step closer to Johnny Cab, though I suspect it will work a little better than the version in Total Recall.


fspacef 15 hours ago 0 replies      
Either way, Google vs. Uber means cheaper cab rides for all of us...
monkeyninja 1 day ago 0 replies      
Imagine that: we can get rid of all drivers. Must be brilliant...
Intoo 22 hours ago 0 replies      
Soon TechCrunch headlines will read: UBER DRIVERS LAID OFF, REPLACED BY SELF-DRIVING CARS
Zigurd 1 day ago 1 reply      
It's interesting to see comments of the nature of "Google doesn't have the killer instinct that Uber does."

This is a fight over the businesses of logistics and transportation in general, and on a global scale. You can expect Amazon to join the fight. This is not limited to driving people across town, and winning is vastly more valuable. Nobody is going to hold back.

zobzu 1 day ago 0 replies      
Heh, so that's what that Uber job offer was about... I should have replied! ;)
clientbiller 1 day ago 0 replies      
Sniff Sniff... I smell a buyout coming.
Amazon Is in Talks to Buy RadioShack Stores, Report Says
points by swohns  19 hours ago   129 comments top 29
blt 17 hours ago 3 replies      
Please, please, Amazon, leave 1/16 of the shelf space for electronic components! Pack them densely, don't offer customer support, and charge high prices. We need brick-and-mortar places to buy components. It might be worth a lot of goodwill from electronics tinkerers. Then again, I guess that's not a big enough group to matter, but who knows...
Spooky23 18 hours ago 8 replies      
Sounds fishy to me.

Radio Shack tends to have old leases in second-tier shopping centers. Why would Amazon buy a marginal retailer with a poor footprint when it could just lease stores itself?

There was a time when getting space in malls and strip shopping centers was tough. This isn't one of those times.

Shivetya 1 hour ago 2 replies      
Would this be a precursor to delivering items themselves? I wonder if it would allow them to effectively have mini-warehouses just behind a convenient storefront. Something like Amazon Basics: simple items Amazon users buy all the time, now in your neighborhood shopping center.

Hell, they could do shipping and receiving like UPS stores if they wanted. The possibilities are endless.

DigitalSea 12 hours ago 2 replies      
This could be very interesting if it proves to be true. Seeing Amazon purchase RadioShack, miraculously return it to its former glory, and exclusively sell electronic components and hobbyist kits (instead of things like TVs and phones) would be a move I would wholeheartedly support, as would my inner 7-year-old self, who has fond memories of going to RadioShack with my dad and buying a bag of LEDs and various electronic components to build things.

As a bonus, they could use it to locally stock popular items, use the stores as pick-up and drop-off zones (as the article suggests), and put in a few computers consumers could use to order directly from Amazon. I would hate to see RadioShack die; it kind of makes me sad to think the brand could just vanish.

vhost- 19 hours ago 3 replies      
I wonder if this is their "in" for B&M stores. I can also see this as part of their plan to start shipping items before you even buy them. I click checkout and it tells me to just go pick up the item from the Radio Shack at 6th and Weidler.
7952 4 hours ago 0 replies      
Amazon sells so many different things that it is difficult to find products within a particular niche. When you look for educational toys on Amazon you will not be presented with electronic kits (for example), because mostly that is not what a generic customer wants. Even if you go looking for a particular niche, it can be very tricky.

There are lots of sites that sell the same products Amazon do but target a particular niche. A brand like RadioShack could use the Amazon backend and only expose a particular type of product that fits within the traditional RadioShack ethos.

CyberDildonics 13 hours ago 0 replies      
They should treat the stores as a cache for whatever people in the area order. If someone orders something, send two and put one in the store. That should make things interesting.
MilnerRoute 18 hours ago 2 replies      
It'd be easier for Amazon to sell Amazon smartphones if their customers could first actually hold one in their hands at the local mall.
bastian 17 hours ago 2 replies      
I'm pretty sure that Amazon will use the best-positioned stores as forward stocking locations. They experimented with a similar concept at Webvan, I believe. I also think they now realize that what Postmates and Instacart are doing today (using the city as a warehouse) is actually working and can be attractive to customers.
sosuke 16 hours ago 4 replies      
Oh no, I sure hope not; the tax will be back, and that's enough to move me to other sites in several cases.

Edit: To clarify, no sales tax was one of the first things Amazon and other online retailers had on their side. They could sell things cheaper, even by a little, and the rest was made up by not having to pay sales tax. If they have a presence in a state, though, they have to collect sales tax. If you buy a lot from Amazon, it's like taking an 8.5% cut in buying power. If you remember, back in 1997-98 there were several bills popping up around this. My Google-fu is failing me, but this is a real issue. Amazon even discontinued the associates program in some states to avoid taxes.


e0m 16 hours ago 1 reply      
I like the idea that one of the shipping options could be:

"2 day shipping (free with Prime)"

"1 day shipping $3.99"

"Get off your butt and go get it yourself (closest 2 miles)"

BendertheRobot 17 hours ago 4 replies      
This would mean sales tax in all 50 states for Amazon purchases.
chrisgd 18 hours ago 0 replies      
Says RadioShack equity holders
genopharmix 1 hour ago 0 replies      
This is a brilliant PR move.
fubarred 10 hours ago 0 replies      
Bezos could refocus on DIY, hobbyist, and maker culture... classes (or sponsorship thereof) would be a good way to get the cash registers ringing. It's hard to compete with online and open source, but there are some things people would pay for (and want) in person.
julianpye 15 hours ago 0 replies      
Scott Galloway talked about this at last week's DLD. You can hear his points and arguments starting at 6:50 -- a very insightful video on why Amazon has reached the point where it must make a brick-and-mortar acquisition: https://www.youtube.com/watch?v=XCvwCcEP74Q
fnordfnordfnord 11 hours ago 1 reply      
The Sears & Roebuck dealer store (smaller Sears stores, typically in rural locations) remade as an Amazon Prime depot/storefront?
raycloyd 16 hours ago 0 replies      
Perhaps Amazon Local could expand and extend as a storefront hub that connects with the city's businesses. Or maybe I just wish I could get local goods in Amazon's purchase model
Animats 17 hours ago 0 replies      
This makes little sense for Amazon. They have a huge product line. What items would they put in a retail outlet? Unless it's a desperate attempt to push their phone/tablet line.
VLM 18 hours ago 1 reply      
Pump n dump. There's a story on Bloomberg that RadioShack had been a target for months of leveraged-buyout rumors in pump-n-dump schemes, and now, finally, "whoosh, sound of relaxation," that's all over. LOL, a little optimistic; they're not done beating this dead horse yet.
bhartzer 17 hours ago 1 reply      
It would make sense for Amazon to do this; they could stock a limited inventory and expand their quick-delivery option for a limited set of products.
mhuffman 14 hours ago 0 replies      
I would like to see Amazon direct-to-store pickup locations at all existing RadioShacks. Do you hear me, Amazon!
analog31 11 hours ago 0 replies      
So does a retail presence in every state mean that Amazon will pay sales tax?
sidcool 7 hours ago 0 replies      
I would be much more excited if Google bought it. Google lacks the physical presence that Apple has, and they need it.
johansch 17 hours ago 1 reply      
Perhaps Amazon is looking to use these locations as pickup places for shipped packages?
mitchell_h 18 hours ago 1 reply      
Bloomberg is reporting that "THE SHACK!" is in talks to close half its stores and sell the rest to Sprint.


schnevets 17 hours ago 0 replies      
I'm sure Amazon is eying every struggling retailer the exact same way.
tn13 17 hours ago 2 replies      
This is a good example of why we should let failed companies fail and let good companies salvage the good assets.

Imagine if the US government had taken over RadioShack to "protect the jobs" using taxpayer money and spent billions on a so-called turnaround.

pasbesoin 17 hours ago 0 replies      
So, AmAShack or Radiozon?

I'm not sure I see the logic/advantage of taking over existing RadioShack locations as opposed to just making real estate decisions based upon Amazon's own requirements.

A select subset of their stores might be quite select (and limited in number).

Atom now using Io.js
points by skyllo  22 hours ago   128 comments top 11
juddlyon 20 hours ago 1 reply      
For those of you who, like me, aren't sure what this is about: Atom is a text editor from GitHub; io.js is a Node.js fork.
dshankar 20 hours ago 2 replies      
This isn't particularly surprising, NW.js (previously called node-webkit) switched to IO.js as well.

Edit: to clarify, this is relevant because both Atom and NW.js use a webkit shell.

sigzero 20 hours ago 8 replies      
Until they fix the 2MB limitation on editing files. No way.
joshstrange 19 hours ago 1 reply      
This is good news, but I'm a little confused: if io.js supports ES6, why do you need 6to5?
jbrooksuk 21 hours ago 1 reply      
What does this mean for Atom? Is it faster? Is the compiled size now smaller?
kikki 21 hours ago 1 reply      
This is interesting, and a very big business move to make the switch. Does this say something about the future of Node?
luisrudge 21 hours ago 1 reply      
plus 6to5 support! :)
crucialfelix 17 hours ago 0 replies      
I was just wondering today when this might happen, specifically because I want to use generators in a plugin (a SuperCollider IDE). Great job, guys!
visarga 10 hours ago 1 reply      
90% of my work is on remote files by SFTP/SSH. How's that working in Atom?
jbeja 17 hours ago 0 replies      
Who cares? It's going to be slow and unusable nonetheless.
jtth 17 hours ago 2 replies      
Why would I ever use a text editor that uses even node.js, let alone some even newer thing? I don't understand how people can commit to such a thing.
Reddit has shut down its nascent cryptocurrency project
points by lelf  1 day ago   116 comments top 7
alexis 1 day ago 7 replies      
Not sure where the author got "redditcoin", but here's what we're actually doing with redditnotes (I said the same thing to Fortune when this rumor first arose):

We will be issuing redditnotes.

Our research leads us to want to wait until the law and technology around cryptocurrency are further along before deciding exactly how. We want to make sure we can give the community the full value of the equity when they receive it in the future, and today we haven't been able to find a way to do that within existing regulations.

Edit: here's the last official blog post on the subject: http://www.redditblog.com/2014/12/announcing-reddit-notes.ht... We were not contacted by this Guardian reporter for comment. So it goes.

vectorpush 1 day ago 2 replies      
I'm glad they killed this project. I can't see how integrating a crypto-payment system into reddit would have worked out into anything other than a disaster.

First of all, if it became commonplace that reddit accounts might contain some sum of crypto-money, the bustling onslaught of salivating hackers would become so tremendous that the site would likely be crippled under the weight of user grievances and brute-force traffic. I'm sure mandatory MFA would be put in place to increase security, but it wouldn't change the fact that reddit would become a major target for hackers, and many people would lose their money, despite MFA.

Next, I think the incentives for shit and spam posts would rocket off the chart as scammers, beggars and cam-models would likely flood the site with crap so that they can extract tips from anyone willing to toss penny shavings in their direction. Karma is already a sufficient reward mechanism, adding money into the fray is totally unnecessary and would almost certainly lower the quality of discussion.

Finally, this may be a stretch, but I fear that integrating crypto-money into the site would open the door for pay-to-subscribe/pay-to-view/pay-to-comment/pay-to-vote subreddit mechanisms. I understand that reddit is a business, but I think that this would be objectively bad, and even worse if subreddit mods were able to get a cut of it.

In the end, I don't think reddit needs crypto, it doesn't make the site better in any conceivable way.

harvestmoon 1 day ago 3 replies      
Reddit is a great site at what it does. And due to its nature, it has a tremendous number of niches, some of which are worth a lot of money if handled properly.

For instance, /r/watches is an active area for discussion of watches. The people who post there often share their new Rolex or their treasured Patek Philippe. /r/watches is just one of many, many such niches on the site with a lot of potential value.

My thinking has been that reddit could focus on developing the value in its many product oriented subforums.

Also, interestingly enough, reddit has already sort of created its own new cryptocurrency: dogecoin. Though it may not be doing well, I think most of the value in dogecoin was how easy it was to use on reddit.

reddit does need to find a way to monetize. I think it could do so quite successfully, given its many high-value niches.

I say all this as a big fan of the site and have even been considering making a subreddit finder (so that someone who is a fan of the TV show Suits, for instance, can know that there is /r/suits to discuss the show, which they would have simply no way of knowing if they just landed on reddit's homepage with pictures of Very Round Eggs, to use a current example).

yafujifide 1 day ago 11 replies      
Ryan X. Charles here. AMA.
danso 1 day ago 1 reply      
How far did the implementation of the currency go, and will what remains of it be released as OSS, if it's worthwhile to do so?
banderon 1 day ago 0 replies      
Only 125 days since I posted this: https://news.ycombinator.com/item?id=8390136. I thought it was a cool idea... but I also thought BTC was a worthy investment at $400.
muyuu 1 day ago 1 reply      
Reddit seems a bit without direction lately. Maybe they cannot make as much from their popularity as they expected.
First NetHack bot ascension
points by ivank  1 day ago   80 comments top 12
statico 20 hours ago 2 replies      
You can watch players (and bots) play NetHack in real time:

    telnet nethack.alt.org
...then hit "w" to watch games that are in progress. You might need to resize your window, and some players might be using a different character set than your terminal.

If you want to start playing, telnet there, create an account, hit "p" to play a game and then "?" to read basic help. http://nethack.alt.org/ will keep logs of your game as well as other stats.

This post also mentions previous bot attempts such as the Tactical Amulet Extraction Bot (TAEB), which is also worth looking at: http://taeb.github.io/

Warning, SPOILERS: If you're okay with spoiling the game to some extent (e.g., solutions to common puzzles, which corpses are safe to eat, strategies), check out the NetHackWiki: http://nethackwiki.com/wiki/Main_Page

jashkenas 21 hours ago 2 replies      
Folks, do yourself a favor, and check out the bot's source: https://github.com/krajj7/BotHack/blob/master/src/bothack/bo...

It's really, really neat. Even for those of us who know little or nothing about the game, the basic strategies can be discerned from the nested Englishy descriptors piled up into short conditions for things-the-bot-might-want-to-do...

jedberg 22 hours ago 7 replies      
At the risk of losing my nerd card, I've never played Nethack.

I read the reddit thread, and while it was in English, it made not a lick of sense to me. Now I know how non-engineers feel sometimes. :)

BTW, can someone tell me why this is such an amazing accomplishment? I know NetHack is very old, so is this a case of a complex problem space, or has just no one tried before?

jere 21 hours ago 1 reply      
If you find this interesting, take note that people were writing bots to beat Rogue over 30 years ago: http://en.wikipedia.org/wiki/Rogue_(video_game)#Automated_pl...
bkcooper 22 hours ago 0 replies      
Good stuff.

There's a bot with several wins in Dungeon Crawl Stone Soup, another roguelike. The combinations it has won are pretty rote, but it's still very impressive, particularly since I don't think it's even capable of knowing that much about the game (it uses Lua handles to pick up information about the state and I believe there's a fair chunk of the game that isn't exposed that way.)

riffraff 20 hours ago 0 replies      
> the bot has already managed to reach Rodney without farming and can get to the Castle and beyond fairly reliably, maybe 1 in 10 runs or so

so, the bot is also a better player than me.

fjarlq 13 hours ago 0 replies      
Video of the winning game: https://www.youtube.com/watch?v=unCQHAbGsAA

The author's list of milestones illustrates the key components of this achievement, as well as some limitations:


Awesome work, Jan Krajicek!

th0br0 23 hours ago 2 replies      
Reading through the comments, I realized that people even do speedruns in Nethack... impressive.
samfoo 16 hours ago 0 replies      
Really amazing achievement, kudos to duke-nh!

For those of you who are looking to get into nethack, or play it all the time... (shameless plug) I've been working on a project for some years that helps ease some of the monotony of playing.


For example, it keeps track of where you see shops, so you can come back, maps out levels and what you've seen on them, and auto-price identifies things you pick up.

There's a rudimentary plugin system so you can write your own extensions (more coming soon).

guelo 20 hours ago 1 reply      
One of the difficulties for a human playing Nethack is remembering all the potions and amulets and wands and scrolls and rings and monsters and their effects and many, many, many combinations. In a way a bot should have an easier time since it has basically unlimited memory.
nkuttler 21 hours ago 1 reply      
That is truly impressive. I remember what a colossal achievement my first ascension felt like. Nethack is such a complex game, but the most important skill is probably patience and planning single moves, something a bot can be perfect or very good at. Now I feel like playing again.
mkramlich 19 hours ago 1 reply      
I'm the creator of a Rogue-like game (Dead By Zombie) and love NetHack, consider it one of the masterpieces of game design I've tried to study and learn from. Lessons baked into it both in terms of game design and software. Has a lot of bang per buck, in terms of fun/value per LOC, and per square inch of screen real estate.

If anybody gets introduced to Rogue-like UI for the first time because of this article, and likes it, I also recommend checking out Dwarf Fortress. Similar but very very different in ways that people tend to either love or hate.

Knightmare: A DevOps Cautionary Tale (2014)
points by strzalek  15 hours ago   78 comments top 18
ratsbane 14 hours ago 1 reply      
I can't read something like this without feeling really bad for everyone involved and taking a quick mental inventory of things I've screwed up in the past or potentially might in the future. Pressing the enter key on anything that affects a big-dollar production system is (and should be) slightly terrifying.
NhanH 14 hours ago 5 replies      
Every time I read this story, there is one question I've never understood: why couldn't they just shut down the servers themselves? There ought to be some mechanism to do that. I mean, $400 million is a lot of money to not just bash the server with a hammer. It seems like they realized the issue early on and were debugging for at least part of the 45 minutes. I know they might not have had physical access to the server, but wouldn't there be any way to do a hard reboot?
vijucat 7 hours ago 3 replies      
I once shut down an algorithmic trading server by hastily typing (in bash):

- Ctrl-r for reverse-search through history

- typing 'ps' to find the process status utility (of course)

- pressing Enter... and realizing that Ctrl-r had actually found 'stopserver.sh' in history instead. (There's a 'ps' inside stoPServer.sh)

I got a call from the head Sales Trader within 5 seconds asking why his GUI showed that all Orders were paused. Luckily, our recovery code was robust and I could restart the server and resume trading in half a minute or so.

That's $250 million to $400 million of orders on pause for half a minute. Not to mention my heartbeat.

Renamed stopserver.sh to stop_server.sh after that incident :|

P.S. typing speed is not merely overrated, but dangerous in some contexts. Haste makes waste.
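One hedge against this class of fat-finger mistake, beyond renaming the script, is to make destructive commands demand explicit confirmation. A minimal sketch (the action name and prompt text are illustrative, not from the story):

```python
import sys

def confirm_or_abort(action: str) -> None:
    """Require the operator to retype the action name before proceeding.

    A reflexive bare Enter is never enough to continue.
    """
    typed = input(f"Type '{action}' to confirm: ").strip()
    if typed != action:
        print("Aborting: confirmation did not match.")
        sys.exit(1)

# Usage at the top of a shutdown script:
#   confirm_or_abort("stop-server")
#   ... proceed with actual shutdown ...
```

A Ctrl-r mishap then stops at the prompt instead of pausing a trading desk.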

ooOOoo 10 hours ago 1 reply      
The post is quite poor and suffers a lot from hindsight bias. The following article is much better: http://www.kitchensoap.com/2013/10/29/counterfactuals-knight...
otakucode 13 hours ago 1 reply      
While articles like this are very interesting for explaining the technical side of things, I am always left wondering about the organizational/managerial side of things. Had anyone at Knight Capital Group argued for the need of an automated and verifiable deployment process? If so, why were their concerns ignored? Was it seen as a worthless expenditure of resources? Given how common automated deployment is, I think it would be unlikely that none of the engineers involved ever recommended moving to a more automated system.

I encountered something like this about a year ago at work. We were deploying an extremely large new system to replace a legacy one. The portion of the system which I work on required a great deal of DBA involvement for deployment. We, of course, practiced the deployment. We ran it more than 20 times against multiple different non-production environments. Not once in any of those attempts was the DBA portion of the deployment completed without error. There were around 130 steps involved and some of them would always get skipped. We also had the issue that the production environment contained some significant differences from the non-production environments (over the past decade we had, for example, delivered software fixes/enhancements which required database columns to be dropped... this was done on the non-production systems, but was not done on the production environment because dropping the columns would take a great deal of time). Myself and others tried to raise concerns about this, but in the end we were left to simply expect to do cleanup after problems were encountered. Luckily we were able to do the cleanup and the errors (of which there were a few) were able to be fixed in a timely manner. We also benefitted from other portions of the system having more severe issues, giving us some cover while we fixed up the new system. The result, however, could have been very bad. And since it wasn't, management is growing increasingly enamored with the idea of by-the-seat-of-your-pants development, hotfixes, etc. When it eventually bites us as I expect it will, I fear that no one will realize it was these practices that put us in danger.

rgj 9 hours ago 0 replies      
Repurposing a flag should be spread over two deployments: first remove the code using the old flag, then verify, then introduce the code reusing the flag.

Even if the deployment had been executed correctly, old and new code would have coexisted in the system for the duration of the rollout.
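The hazard is easy to sketch. The names below are hypothetical, loosely echoing the public Power Peg / SMARS account of the incident, not actual code:

```python
# During a rolling deploy, two versions of the server interpret
# the SAME flag with entirely different meanings.

def old_server(order: str, flag_set: bool) -> str:
    if flag_set:
        return "power_peg_test_orders"   # retired behavior, still in dead code
    return "normal_routing"

def new_server(order: str, flag_set: bool) -> str:
    if flag_set:
        return "smars_routing"           # new, intended behavior
    return "normal_routing"

# Mid-deploy, both versions run side by side with the flag enabled:
flag_set = True
behaviors = {old_server("ord", flag_set), new_server("ord", flag_set)}
# The fleet now exhibits two different behaviors for the same input.
```

Removing the old flag usage in its own deploy, and verifying it, guarantees only one interpretation of the flag is ever live at once.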

gunnark01 8 hours ago 1 reply      
I used to work in HFT, and what I don't understand is why there were no risk controls. The way we did it was to have explicit shutdown/pause rules (pause meaning that the strategy will only try to get flat).

The rules were things like:

- Too many trades in one direction (i.e. a big position)

- P/L down by X over Y

- P/L up by X over Y

- Orders way off the current price

Whenever there was a shutdown/pause, a human/trader would need to assess the situation and decide whether to continue.
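Rules like these are cheap to encode. A toy sketch, where the thresholds and state fields are made up for illustration:

```python
from dataclasses import dataclass

@dataclass
class Strategy:
    position: int      # signed net position
    pnl: float         # running profit/loss for the day
    paused: bool = False

# Illustrative limits -- a real desk tunes these per strategy.
MAX_POSITION = 10_000
MAX_DRAWDOWN = -50_000.0

def check_risk(s: Strategy) -> Strategy:
    """Pause the strategy (only work toward flat) when a limit trips.

    A human must review and explicitly resume trading afterward.
    """
    if abs(s.position) > MAX_POSITION or s.pnl < MAX_DRAWDOWN:
        s.paused = True
    return s
```

Run after every fill, a check like this bounds the damage of runaway logic to one limit's worth of exposure, rather than 45 minutes' worth.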

serve_yay 14 hours ago 2 replies      
If you fill the basement with oily rags for ten years, when the building goes up in flames, is it the fault of the guy who lit a cigarette?
Mandatum 14 hours ago 2 replies      
I remember reading a summary of this when it occurred in 2012. It's obvious to everyone here what SHOULD have been done, and I find this pretty surprising in the finance sector.

Also your submission should probably have (2014) in the title.

solarmist 14 hours ago 2 replies      
Why would they repurpose an old flag at all? That seems crazy to me unless it was something hardware bound.
beat 9 hours ago 0 replies      
It's nice to see a more detailed technical explanation of this. I've used the story of Knight Capital as part of the pitch for my own startup, which addresses (among other things) consistency between server configurations.

This isn't just a deployment problem. It's a monitoring problem. What mechanism did they have to tell if the servers were out of sync? Manual review is the recommended approach. Seriously? You're going to trust human eyeballs for the thousands of different configuration parameters?

Have computers do what computers do well - like compare complex system configurations to find things that are out of sync. Have humans do what humans do well - deciding what to do when things don't look right.
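Comparing configurations across a fleet is exactly the kind of rote check machines do well. A minimal sketch, treating each host's configuration as a flat dict (the hostnames and keys are hypothetical):

```python
def config_drift(configs: dict) -> dict:
    """Return {key: {host: value}} for every key whose value differs
    across hosts (or is missing on some of them)."""
    all_keys = set().union(*(c.keys() for c in configs.values()))
    drift = {}
    for key in all_keys:
        values = {host: c.get(key) for host, c in configs.items()}
        if len(set(values.values())) > 1:
            drift[key] = values
    return drift

fleet = {
    "server1": {"power_peg_flag": False, "version": "2.0"},
    "server8": {"power_peg_flag": True,  "version": "1.0"},  # missed by the deploy
}
# config_drift(fleet) flags both keys as out of sync across the fleet
```

The human's job shrinks to looking at the (hopefully empty) drift report, not eyeballing thousands of parameters.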

narrator 13 hours ago 2 replies      
Somebody was on the other side of all those trades and they made a lot of money that day. That's finance. Nobody loses money, no physical damage gets done and somebody on the other side of the poker table gets all the money somebody else lost.
__abc 12 hours ago 1 reply      
This must be an old wives' tale. I live in Chicago and a trading firm on the floor beneath us went bankrupt, at roughly the same time, with a similar "repurposed bit" story.

Maybe it's the same one .....

aosmith 13 hours ago 0 replies      
Wasn't Knight in trouble for some other things as well?
recursive 14 hours ago 0 replies      
"Power Peg"? More like powder keg.
danbruc 13 hours ago 1 reply      
What really looks broken to me in this story is the financial system. It has become a completely artificial and lunatic system that has almost nothing to do with the real, goods-and-services-producing economy.
bevacqua 13 hours ago 0 replies      
Ah yes, this story is legendary. I discuss it in my JavaScript Application Design book[1]. Chaos-monkey server-wrecking sounds like a reasonable way to mitigate this kind of issue (along with sane development/deployment processes, obviously)

[1]: http://bevacqua.io/bf

hcarvalhoalves 14 hours ago 2 replies      
As usual in catastrophic failures, a series of bad decisions had to occur:

- They had dead code in the system

- They repurposed a flag from a previous functionality

- They (apparently) didn't have code reviews

- They didn't have a staging environment

- They didn't have a tested deployment process

- They didn't have a contingency plan to revert the deploy

Fixing just one of these points could have minimized the damage or avoided it altogether. Incredible.
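Even the deployment-verification point alone can be automated cheaply: compare a checksum of the deployed artifact across every host before enabling anything. A sketch, with in-memory bytes standing in for whatever remote fetch mechanism you actually have:

```python
import hashlib

def artifact_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_deploy(deployed: dict, expected: bytes) -> list:
    """Return the hosts whose deployed artifact does not match the
    expected build. An empty list means the rollout is uniform."""
    want = artifact_hash(expected)
    return [host for host, data in deployed.items()
            if artifact_hash(data) != want]

# Eight servers, one of which silently kept the old binary:
new_build = b"new smars router"
fleet = {f"server{i}": new_build for i in range(1, 8)}
fleet["server8"] = b"old power peg binary"
# verify_deploy(fleet, new_build) == ["server8"]
```

A check this small, run as a gate before flipping the flag, turns "one of eight servers was missed" from a $400M incident into a failed pre-flight check.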
