hacker news with inline top comments - 18 Jun 2015
Inceptionism: Going Deeper into Neural Networks googleresearch.blogspot.com
199 points by neurologic  4 hours ago   38 comments top 19
davedx 20 minutes ago 0 replies      
Worth reading the comments too.

One from Vincent Vanhoucke: "This is the most fun we've had in the office in a while. We've even made some of those 'Inceptionistic' art pieces into giant posters. Beyond the eye candy, there is actually something deeply interesting in this line of work: neural networks have a bad reputation for being strange black boxes that are opaque to inspection. I have never understood those charges: any other model (GMM, SVM, Random Forests) of any sufficient complexity for a real task is completely opaque for very fundamental reasons: their non-linear structure makes it hard to project back the function they represent into their input space and make sense of it. Not so with backprop, as this blog post shows eloquently: you can query the model and ask what it believes it is seeing or 'wants' to see simply by following gradients. This 'guided hallucination' technique is very powerful and the gorgeous visualizations it generates are very evocative of what's really going on in the network."
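The "following gradients" idea Vanhoucke describes can be sketched with a toy, self-contained example (not Google's code; `score` is a hypothetical stand-in for a trained network's class output): instead of adjusting weights, you hold the model fixed and adjust the *input* by gradient ascent until the score for a chosen output rises.

```python
# Toy illustration of "querying the model by following gradients".
# 'score' stands in for a trained network's output for one class;
# we ascend the INPUT, not the weights, to see what the model 'wants'.

def score(x):
    """Hypothetical 'network output' for a 2-pixel image x = [a, b]:
    highest when the input looks like the target pattern [3, -1]."""
    a, b = x
    return -(a - 3.0) ** 2 - (b + 1.0) ** 2

def numeric_grad(f, x, eps=1e-5):
    """Central-difference gradient of f at x (a backprop stand-in)."""
    grad = []
    for i in range(len(x)):
        hi, lo = list(x), list(x)
        hi[i] += eps
        lo[i] -= eps
        grad.append((f(hi) - f(lo)) / (2 * eps))
    return grad

def ascend(x, steps=200, lr=0.1):
    """Repeatedly nudge the input uphill so the score keeps rising."""
    for _ in range(steps):
        x = [xi + lr * gi for xi, gi in zip(x, numeric_grad(score, x))]
    return x

x = ascend([0.0, 0.0])  # start from a 'blank' image
# x drifts toward the input the scorer most 'wants' to see: [3, -1]
```

The real technique does exactly this on a deep CNN (with backprop supplying the gradient), which is why the resulting images show what the network associates with a class.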

philipn 1 hour ago 0 replies      
The reason they look so 'fractal-like' (i.e. trippy!) is that they actually are fractals!

In the same way a normal fractal is a recursive application of some drawing function, this is a recursive application of different generation or "recognition -> generation" drawing functions built on top of the CNN.

So I believe that, given a random noise image, these networks don't generate the crazy trippy fractal patterns directly. Instead, that happens by feeding the generated image back to the network over and over again (with e.g. zooming in between).

Think of it a bit like a Rorschach test. But instead of ink blots, we'd use random noise and an artificial neural network. And instead of switching to the next Rorschach card after someone thinks they see a pattern, you continuously move the ink blot around until it looks more and more like the image the person thinks they see.

But because we're dealing with ink, and we're just randomly scattering it around, you'd start to see more and more of your original guess, or other recognized patterns, throughout the different parts of the scattered ink. Repeat this over and over again and you have these amazing fractals!
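The loop philipn describes can be written schematically (hypothetical helper names; the real pipeline runs a CNN, this only shows the recursive structure): amplify whatever the "recognizer" sees, zoom slightly, and feed the result back in.

```python
# Schematic of the recognize -> amplify -> zoom -> repeat feedback loop.
# 'enhance' and 'zoom' are stubs standing in for the CNN steps.

def enhance(signal, patterns=(0.0, 1.0)):
    """Stub 'recognizer': pull each value halfway toward the nearest
    known pattern, the way DeepDream amplifies weak detections."""
    return [x + 0.5 * (min(patterns, key=lambda p: abs(p - x)) - x)
            for x in signal]

def zoom(signal):
    """Stub 'zoom': drop the edges and pad back to the original length."""
    middle = signal[1:-1]
    return middle + middle[-1:] * (len(signal) - len(middle))

def dream(signal, iterations=20):
    """Feed the generated signal back into the loop over and over."""
    for _ in range(iterations):
        signal = zoom(enhance(signal))
    return signal

out = dream([0.1, 0.8, 0.3, 0.6, 0.9])
# every value has been pulled hard toward a 'recognized' pattern
```

A single pass barely changes the input; it is the repeated feedback (with zooming in between) that produces the self-similar, fractal-like exaggeration.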

dnr 3 minutes ago 0 replies      
Am I the only one who found those images somewhat disturbing? I wonder if they're triggering something similar to http://www.reddit.com/r/trypophobia
moyix 3 hours ago 1 reply      
This appears to be the source of the mysterious image that showed up on Reddit's /r/machinelearning the other day too:


meemoo 2 hours ago 1 reply      
Tweak image urls for bigger images:

Ibis: http://3.bp.blogspot.com/-4Uj3hPFupok/VYIT6s_c9OI/AAAAAAAAAl...
Seurat: http://4.bp.blogspot.com/-PK_bEYY91cw/VYIVBYw63uI/AAAAAAAAAl...
Clouds: http://4.bp.blogspot.com/-FPDgxlc-WPU/VYIV1bK50HI/AAAAAAAAAl...
Buildings: http://1.bp.blogspot.com/-XZ0i0zXOhQk/VYIXdyIL9kI/AAAAAAAAAm...

I'd love to experiment with this and video. I predict a nerdy music video soon, and a pop video appropriation soon after.

simonster 2 hours ago 3 replies      
The fractal nature of many of the "hallucinated" images is kind of fascinating. The parallels to psychedelic drug-induced hallucinations are striking.
davesque 1 hour ago 0 replies      
This is one of the most astounding things I've ever seen. Some of these images look positively like art. And not just art, but good art.
pault 2 hours ago 3 replies      
I would love to see what would come out of a network trained to recognize pornographic images using this technique. :)
IanCal 1 hour ago 0 replies      
The one generated after looking at completely random noise on the bottom row, second from the right:


Reminds me very heavily of The Starry Night https://www.google.com/culturalinstitute/asset-viewer/the-st...

Lovely imagery.

I never had much luck with generative networks. I did some work putting RBMs on a GPU, partly because I'd seen a Hinton talk showing starting with a low-level description and feeding it forwards, but I always ended up with highly unstable networks myself.

frankosaurus 1 hour ago 0 replies      
Really cool. You could generate all kinds of interesting art with this.

I can't help but think of people who report seeing faces in their toast. Humans are biased towards seeing faces in randomness. A neural network trained on millions of puppy pictures will see dogs in clouds.

intjk 1 hour ago 0 replies      
I'll repeat what I posted on facebook because I thought it was clever: "Yes, but only if we tell them to dream about electric sheep."

So, tell the machine to think about bananas, and it will conjure up a mental image of bananas. Tell it to imagine a fish-dog and it'll do its best. What happens if/when we have enough storage to supply it a 24/7 video feed (aka eyes), give a robot some navigational logic (or strap it to someone's head), and give it the ability to ask questions, say, below some confidence threshold (and us the ability to supply it answers)? What would this represent? What would come out on the other side? A fraction of a human being? Or perhaps just an artificial representation of "the human experience".

...what if we fed it books?

tomlock 1 hour ago 0 replies      
These paintings remind me of Louis Wain's work when he was mentally ill.

Which makes me wonder, are these sophisticated neural nets mentally ill, and what would a course of therapy for them be like?

nl 1 hour ago 2 replies      
I'd really like to see what an Electric Sheep looks like. Maybe if they did a collaboration with the Android team?
henryl 2 hours ago 1 reply      
I'll be the first to say it. It looks like an acid/shroom trip.
anigbrowl 27 minutes ago 0 replies      
These images are remarkably similar to chemically-enhanced mammalian neural processing in both form and content. I feel comfortable saying that this is the Real Deal and Google has made a scientifically and historically significant discovery here. I'm also getting an intense burst of nostalgia.
huskyr 1 hour ago 0 replies      
Very cool. I wonder if there's some example code on Github to generate images like this?
mraison 59 minutes ago 0 replies      
Really nice. I'd be interested in seeing a more in-depth scientific description of how these images were actually generated. Are there any other publications related to this work?
hliyan 1 hour ago 1 reply      
Am I the only person who is not entirely happy about the overuse of the pop-culture term 'inception' for everything that is remotely nested, recursive or strange-loop-like?

 In this paper, we will focus on an efficient deep neural network architecture for computer vision, codenamed "Inception", which derives its name from the "Network in Network" paper by Lin et al. [12] in conjunction with the famous "we need to go deeper" internet meme [1].

gojomo 42 minutes ago 0 replies      
Facial-recognition neural nets can also generate creepy spectral faces. For example:



(Or if you want to put them full-screen on infinite loop in a darkened room: http://www.infinitelooper.com/?v=XNZIN7Jh3Sg&p=n and http://www.infinitelooper.com/?v=ogBPFG6qGLM&p=n )

The code for the 1st is available in a Gist linked from its comments; the creator of the 2nd has a few other videos animating grid 'fantasies' of digit-recognition neural-nets.

From Asm.js to WebAssembly brendaneich.com
730 points by fabrice_d  14 hours ago   287 comments top 36
sixdimensional 14 hours ago 2 replies      
I think this quote speaks volumes - "WebAssembly has so far been a joint effort among Google, Microsoft, Mozilla, and a few other folks." Sometimes I think maybe, just maybe the W3C and other web standards groups finally have some wind in their sails.

It may have taken a while, but with all these individuals and organizations cooperating in an open space, we may finally advance yet again into another new era of innovation for the web.

I am really excited about this, much like others in these comments.

We have been beating around the bush to have a true assembly/development layer in the browser for a long time: Java applets, Flash, Silverlight, you name it - but no true standard that was open like Javascript is open. This component has the possibility of being the neutral ground that everyone can build on top of.

To the creators (Brendan Eich et al.) & supporters, well done and best of luck in this endeavor. It's already started on the right foot (asm.js was what led the way to this, I think) - let's hope they can keep it cooperative and open as much as possible for the benefit of everyone!

AriaMinaei 10 hours ago 14 replies      
Does everyone think this is good news?

I'm all for making the web faster/safer/better and all that. But I am worried about losing the web's "open by design" nature.

Much of what I've learned and am learning comes from me going to websites, opening the inspector and stepping through their code. It's educational. You learn things you may never read about in tutorials or books. And it's great because the author may have never intended for their code to be studied. But whether they like it or not, other people will learn from their code, and perhaps come up with [occasionally] better versions of it on their own.

This has helped web development evolve faster, and it's obvious how democratizing this "open-by-design" property is, and I think we should be concerned that it's being traded away for another (also essential) property.

Human beings cannot read asm.js code. And a bytecode format will be more or less the same. So, no matter how much faster and more flexible this format/standard is, it will still turn web apps into black boxes that no one can look into and learn from.

pcwalton 14 hours ago 3 replies      
Having been on one side of the perpetual (and tiresome) PNaCl-versus-asm.js debate, I'm thrilled to see a resolution. I really think this is a strategy that combines the best of both worlds. The crucial aspect is that this is polyfillable via JIT compilation to asm.js, so it's still just JavaScript, but it has plenty of room for extensibility to support threads, SIMD, and so forth.
wora 13 hours ago 1 reply      
The Oberon language had a similar system called Juice back in 1997. It does exactly the same thing, i.e. using a binary format to store a compressed abstract syntax tree as an intermediate format which can be compiled efficiently and quickly. I think it even had a browser plugin, much like a Java applet. Life has interesting cycles. I don't have the best link for Juice.

[1] https://github.com/berkus/Juice/blob/master/intro.htm
[2] ftp://ftp.cis.upenn.edu/pub/cis700/public_html/papers/Franz97b.pdf

dankohn1 13 hours ago 3 replies      
This is enormous news. I could see a scenario where, in ~5 years, WebAssembly could provide an alternative to having to develop apps with HTML for the web, Swift for iOS, and Java for Android. Instead, you could build browser-based apps that actually delivered native performance, even for CPU- and GPU-intensive tasks.

Of course, there would still be UI differences required between the 3 platforms, but you would no longer need 3 separate development teams.

addisonj 13 hours ago 5 replies      
Prepare for the onslaught of new (and old) languages targeting the web.

While this is welcome news, I am also torn. The possibilities are pretty amazing. Think seamless isomorphic apps in any language that can target WebAssembly and has a virtual dom implementation.

However, it finally seems like JS is getting some of its core problems solved and is getting usable. I wonder if it might have short term productivity loss as the churn ramps up to new levels with a million different choices of language/platform.

Either way, it will be an interesting time... and a time to keep up or risk being left behind.

amyjess 13 hours ago 2 replies      
This is probably the best thing that can happen to web development.

For quite a while, I've been thinking about how instead of hacks like asm.js, we should be pushing an actual "Web IR", designed from the ground up as an IR language. Something similar to PNaCl (a subset of LLVM IR), except divorced from the Chrome sandbox, really.

spullara 9 hours ago 1 reply      
It is really too bad that at some point in the last 18 years of Java VMs being in browsers they didn't formalize the connection between the DOM and Java, so that you could write code that interacted directly with the DOM (and vice versa) in a mature VM that was already included. It would have been way better than applets, way faster than JavaScript, and relatively easy to implement. The browsers actually have (had?) APIs for this, but they were never really stabilized.
aikah 13 hours ago 0 replies      
So it's basically bytecode for the web without compiling to javascript right ?

Any language can now target that specific bytecode without the need for javascript transpilation.

For instance, Flash could target this format in place of the Flash player, making swf files future-proof, since they'd be backed by standard web tech.

So it's basically the return of Flash,Java applets and co on the web. And web developers won't have to use Javascript anymore.

The only constraint, obviously, is that the bytecode only has access to web APIs and can't talk directly to the OS as classic browser plugin architectures could.

kodablah 14 hours ago 2 replies      
I think the biggest win is https://github.com/WebAssembly/design/blob/master/FutureFeat.... Now instead of asm.js being only for emscripten-compiled code (or other non-GC code) WebAssembly can be used for higher level, GC'd languages. And even better, https://github.com/WebAssembly/design/blob/master/NonWeb.md, means we may get a new, generic, standalone VM out of this which is always good (I hope I'm not reading in to the details too much). As someone who likes to write compilers/transpilers, I look forward to targeting this.
comex 10 hours ago 1 reply      
Has any consideration been given to using a subset or derivation of LLVM bitcode a la PNaCl? I know there are significant potential downsides (e.g. according to [1], it's not actually very space efficient despite/because of being bit-oriented and having fancy abbreviation features), but it already has a canonical text encoding, it has been 'battle-tested' and has semantics by definition well suited for compilers, and using it as a base would generally avoid reinventing the wheel.

[1] https://aaltodoc.aalto.fi/handle/123456789/13468

daurnimator 7 hours ago 2 replies      
This still doesn't fix the biggest issue with running non-javascript code in the browser: browsers still offer no way to know when a value is collected.

e.g. if I allocate a callback function, and hand it to setTimeout, I have no way to know when to collect it.

Sure, you can encode rules about some of the common functions; but as soon as you get to e.g. attaching an 'onreadystatechange' to an XHR: you can't follow all the different code paths.

Every time a proposal comes up to fix this:

 - GC callbacks
 - Weak valued maps
 - Proxy with collection trap

The proposal gets squashed.

Unless this is attended to, JavaScript remains the required language on the web.
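For comparison, the primitives this comment asks for (weak-valued maps, collection callbacks) do exist in other managed runtimes. A sketch using Python's stdlib `weakref` module, purely to illustrate the concept JavaScript lacked at the time; the behavior shown is deterministic under CPython's reference counting:

```python
# Python's stdlib versions of a 'weak valued map' and a 'GC callback',
# the facilities the comment says JavaScript proposals keep squashing.
import gc
import weakref

class Callback:
    """Stand-in for a closure handed to setTimeout/onreadystatechange."""
    def run(self):
        return "fired"

registry = weakref.WeakValueDictionary()  # a 'weak valued map'
cb = Callback()
registry["timeout-1"] = cb

collected = []
# a 'GC callback': runs when cb is collected
finalizer = weakref.finalize(cb, collected.append, "timeout-1")

assert registry["timeout-1"].run() == "fired"  # alive while referenced

del cb        # drop the last strong reference
gc.collect()  # CPython frees it immediately via refcounting anyway

assert "timeout-1" not in registry  # the map dropped the entry
assert collected == ["timeout-1"]   # and we were told when it died
```

With something like this, a language runtime compiled to the browser could release its own callbacks once the host no longer references them; without it, every cross-boundary callback leaks or needs manual lifetime rules.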

M8 14 hours ago 4 replies      
If I could just use my favourite language and not feel like a second class citizen, then I am not sure there would be anything else to complain about as a developer, really. A mark-up bytecode so that we could forget about the nightmare of HTML and CSS as well?
JoshTriplett 14 hours ago 1 reply      
I'm interested to see what the API side of WebAssembly looks like in browsers; hopefully this will make it easier to expose more full-featured sandboxed APIs to languages targeting the web, without having to tailor those APIs to JavaScript. For instance, API calls that actually accept integers and data structures rather than floating-point and objects.

For that matter, though in the short-term this will be polyfilled via JavaScript in browsers, it'll be fun to see the first JavaScript-to-WebAssembly compiler that allows you to use the latest ECMAScript features in every browser.

rhaps0dy 14 hours ago 0 replies      
Finally. IMO this is what the web has been calling for since AJAX went mainstream.

They are doing great work. The client's operating system matters little now, but it will not matter at all soon.

McElroy 11 hours ago 0 replies      
https://github.com/WebAssembly/design/blob/master/FAQ.md#wil... makes me happy, as that was the first concern I had when reading this news :)
AndrewDucker 13 hours ago 1 reply      
Interestingly, about five years ago, he said he couldn't see this ever happening: https://news.ycombinator.com/item?id=1905291
haberman 10 hours ago 0 replies      
Very very happy to see this.

Politically it appears to be a fantastic collaboration.

Technically it looks like they have really thought this through -- if you look through the post-MVP plans (https://github.com/WebAssembly/design/blob/master/FutureFeat...) there are a lot of exciting ideas there. But it's not just pie-in-the-sky speculation: the amount of detail makes it clear that they have some really top compiler people rigorously exploring the boundaries of what can be accomplished inside the web platform (SIMD, threading, GC integration, tail calls, multiprocess support, etc.).

mhd 14 hours ago 0 replies      
Combined with something like e.g. Flipboard's react-canvas, this means we could bypass and re-implement most of the browser stack...
bhouston 14 hours ago 2 replies      
I guess this is in the spirit of NaCl and its bytecode, the Java VM/Java bytecode, and the .NET runtime/.NET IR. It makes a lot of sense, and I get that it then sort of becomes competitive with those efforts as well.
Murkin 13 hours ago 5 replies      
Can someone explain why not just go the JVM or .NET CLR path?

Both well tested, well executed, great tooling, supported on many platforms, compilation targets of many existing languages.

Serious question.. is it licensing ?

thomasfoster96 14 hours ago 2 replies      
This is pretty awesome, and is a pretty good use of all the effort that's been going into asm.js

One question though - I found a proposal somewhere on a Mozilla-run wiki about a web API for registering script transpiling/interpreter engines. I've lost the web address, but if anyone knows any more about this I'd love to see it rekindled.

moron4hire 13 hours ago 3 replies      
So for now, the idea is to write C++, compile it to ASM.js, translate it into WebAssembly, GZIP it, transmit it, unGZIP it, then run a polyfill to translate the WebAssembly into ASM.js?

This sounds absurd. I can't even get through getting Clang, LLVM, and Emscripten built from source as it is, it's such a house-of-cards with configuration and dependency settings. Have any of you tried building Chromium from scratch? I have, on three separate occasions, as I'd like to try to contribute to WebVR. End result: fewer gigs of free space on my hard drive and no Chromium from source.

Part of that is my impatience: I'm used to C# and Java, where dependencies are always dynamically linked, the namespacing keeps everything from colliding, and the semantics are very easy to follow. But even Node's braindead NPM dependency manager would be better than the hoops they make you jump through to build open-source C++ projects. I mean, I just don't get how someone could have at any point said "yes, this is a good path, we should continue with this" for all these custom build systems in the wild on these projects.

I could be way off. I'm only just reading the FAQ now and I'm not entirely sure I understand what has actually been made versus what has been planned. There seems to be a lot of talk about supporting other languages than C++, but that's what they said about ASM.js, and where did that go? Is anyone using ASM.js in production who is not Archive.org and their arcade emulator?

I don't know... I really, really want to like the web browser as a platform. It has its flaws, but it's the least painful solution of all of the completely-cross-platform options. But it's hard. Getting harder. Hard enough I'm starting to wonder if it'd be smarter to develop better deployment strategies for an existing, better programming language than to try to develop better programming languages for the browser's existing, better deployment strategy.

This telephone game being played by translator tools and configuration management tools and polyfills and frameworks and... the list goes on! This thing we consider "modern" web development is getting way out of hand. JS's strength used to be that all you needed was a text editor. Everyone--both users and developers--can already use it and run it.

If it's just one tool, I'll get over it. But stringing these rickety, half-implemented tools together into a chain of codependent systems is unacceptable. It just feels like they're foisting their inability to finish and properly deploy their work on us. Vagrant recipes are nice, but they should be a convenience, not a necessity.

Sorry. Good for them. Just finish something already.

ncw33 13 hours ago 1 reply      
Nice, but I'm still waiting for 64-bit integer arithmetic!

For our use case, what I like about this is that we can continue to use emscripten and the technology will come to us, rather than requiring app developers to invest in yet another technology (our switchover from NaCl to emscripten was very time consuming!)

jacquesm 10 hours ago 0 replies      
So, is this the long way around to get us Java Applets all over again?
vmorgulis 9 hours ago 0 replies      
This is awesome.

We will probably need a package manager after that (like apt or npm).

A use case could be with ImageMagick, OpenCV, OpenSceneGraph or qemu inside the browser. All of them are huge and useful projects with many common dependencies.

jewel 14 hours ago 1 reply      
I hope that someone ports mruby to this. I've come to terms with javascript's syntax (via coffeescript), but I'd still rather not deal with javascript's semantics.
protomyth 9 hours ago 0 replies      
Didn't WMLScript (a subset of Javascript used for WML) have a required byte code representation?
McElroy 11 hours ago 1 reply      
This page makes Firefox on Android crash.
lorddoig 8 hours ago 1 reply      
Praise the lord, that was sooner than I expected. Next up: the DOM. Then there will be peace on Earth.

Does anyone know when all this started? I ask because only 83 days ago Brendan was on here telling us pretty emphatically that this was a bad idea and would never happen.

amelius 10 hours ago 0 replies      
Without support for proper threads, web assembly programming feels the same as programming a Z80 or 6502 back in the 80s.

And no, webworkers don't cut it, because they don't support structural sharing of immutable data structures in an efficient and flexible way.

garfij 12 hours ago 1 reply      
I'm curious what the debugging story for this is going to be. Source maps?
andybak 12 hours ago 0 replies      
Isomorphic Python here I come...
leoc 8 hours ago 0 replies      
w00t w00t. This is pretty great overall.
rockdoe 14 hours ago 2 replies      
So this is like PNaCl but targeting the web API and by making it collaborative, hopefully a real standard allowing independent reimplementation?

Ironic that Eich is the one to pull the trigger on JS.

joosters 7 hours ago 0 replies      
To an end user, how is this a different experience from flash? You browse to a website and must execute binary blobs in order to view the site.

Even worse, it's like Flash but where the flash 'plugin' has been written from scratch by each web browser, giving us endless possibilities of incompatibilities which are a nightmare to fix.

ECMAScript 2015 Approved ecma-international.org
396 points by espadrine  14 hours ago   90 comments top 15
fintler 12 hours ago 10 replies      
If you've been focusing on another language for a few years, you might not recognize JavaScript anymore. It's pretty awesome now.

Here's an example of what it looks like: http://pastebin.com/raw.php?i=yEB4mrty

As someone who usually works with C, Scala, and Java -- I'm currently working on a small app built on ES6/7 (Babel), npm, jspm, SystemJS, Aurelia, Gulp, etc. It's been a great experience so far.

lewisl9029 8 hours ago 0 replies      
For anyone interested in using ES2015/ES6 in production, I'd highly recommend checking out jspm and SystemJS.

It handles all the transpilation work for you (at runtime for development, or during a manual build/bundling for production) using either Babel, Traceur or Typescript, and allows you to seamlessly use ES6 everywhere in your code and even load third party code on Github and NPM as ES6 modules.



EDIT: Some more info copied from another post:

SystemJS (jspm's module loader) has the following main advantages compared to competing module loaders:

- Able to load any type of module as any other type of module (global, CommonJS, AMD, ES6)

- Can handle transpilation and module loading at runtime without requiring a manual build step

However, jspm itself is primarily a package manager. Its main advantages over existing package management solutions include:

- Tight integration with the SystemJS module loader for ES6 usage

- Maintains a flat dependency hierarchy with deduplication

- Ability to override package.json configuration for any dependency

- Allows loading of packages from just about any source (local git repos, Github, NPM) as any module format

jtempleton 13 hours ago 2 replies      
FYI, ECMAScript 2015 is also known as ES6.
crncosta 13 hours ago 0 replies      
They are providing an official HTML version, alongside the PDF version.


rememberlenny 13 hours ago 3 replies      
Can someone explain what this means for the browser ecosystem? What are the next steps to integration?
brndn 12 hours ago 2 replies      
What does it mean for a spec to be approved? Is it like a peer-review?
robocat 4 hours ago 1 reply      
Support for octal numbers is insane (especially since uppercase O is supported as well as lowercase o), e.g. 0O7. Adding a feature that will need linting... wow.

Overall happy with many of the improvements (e.g. standard syntax for modules and classes).

cel1ne 9 hours ago 0 replies      
This is a good overview in my opinion: https://babeljs.io/docs/learn-es2015/
MagicWishMonkey 7 hours ago 1 reply      
How long before we see widespread browser support?
wallzz 10 hours ago 1 reply      
Can someone make a résumé?
Stephn_R 7 hours ago 0 replies      
Today marks an important day for us all :)
markthethomas 5 hours ago 0 replies      
but...whatever happened to es6? ;)
brianzelip 11 hours ago 2 replies      
oh brother does their (`<table>` based) web layout need an update!
muraiki 12 hours ago 3 replies      
Sorry, I accidentally downvoted you with a misclick :(
sirsuki 13 hours ago 2 replies      
It's about fricken time! Talk about procrastination!

Now I have to wait for browsers to get off their little snowflake asses and update. Oh wait then there is all those paranoids who use WinXP with IE8. Damn it, I'll be dead by the time this stuff is available universally.

Projects the Hard Way Coding Projects for Early Coders projectsthehardway.com
229 points by pyprism  13 hours ago   27 comments top 8
danso 9 hours ago 2 replies      
This is a great progression in the series by Zed...if you lurk r/learnprogramming, or any other place full of aspiring coders, a common complaint/desire is that students will have passed all the Khan/Codecademy courses, but have no idea what to do with the pieces of programming fundamentals that they've acquired. This is not just a problem with self-learners...I've seen a few r/learnprogramming posts by CS grads from 4-year-colleges who say they have literally no idea what an API is or why/how to work with one.

And I don't think it's necessarily correlated to the rigor/prestige of the program. I had a discussion with a Stanford professor who is building a course that involves hands-on work with real-world data problems...he undertook this initiative after finding that some PhD students, while brilliant in their research and coursework, did not know where to begin with relatively easy data cleaning work. I don't know exactly what the disconnect was, but I'm guessing it wasn't because data cleaning is particularly difficult as a CS problem. But it does require the ability to "see the big picture"...not just how different code modules and components can be designed to talk to each other, but the context and general who-gives-a-shit in regard to a given data/computational problem.

So yeah, thinking about small projects to code for is a great way to make things "click". Can't wait to see what examples Zed comes up with.

red_hare 7 hours ago 1 reply      
This is awesome! When I was in school, this was always my go-to way of getting people out of their "assignment programming" boxes.

One of my favorites was having people build a URL shortener using Flask and SQLite. The general requirements were something like:

1. Serve a web page in Flask with a form.
2. Accept the URL as a POST from the form in the Flask app.
3. Hash the URL.
4. Store the hash and URL in the database.
5. Return the new hash URL (i.e., myshortener.com/?hash=12345) as a new page or with AJAX.
6. Accept GET requests with hashes.
7. Look up hashes in the database and 301 to the URL, or 404.
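The data side of red_hare's exercise (the hash, store, and look-up steps) can be sketched with only the stdlib; the Flask routes, form handling, and 301/404 responses are deliberately left to the student. Function names here are illustrative, not from any assignment:

```python
# Core of a URL shortener: hash the URL, store the pair in SQLite,
# resolve a hash back to its URL (or None, which the app would 404).
import hashlib
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE urls (hash TEXT PRIMARY KEY, url TEXT)")

def shorten(url):
    """Hash the URL and store the (hash, url) pair; return the hash."""
    # Truncating the digest keeps links short but allows collisions;
    # a real app would detect and handle them.
    h = hashlib.sha256(url.encode()).hexdigest()[:8]
    db.execute("INSERT OR IGNORE INTO urls VALUES (?, ?)", (h, url))
    return h

def resolve(h):
    """Look the hash up; the web app would 301 to the result (or 404)."""
    row = db.execute("SELECT url FROM urls WHERE hash = ?", (h,)).fetchone()
    return row[0] if row else None

short = shorten("https://news.ycombinator.com/")
assert resolve(short) == "https://news.ycombinator.com/"
assert resolve("nope") is None
```

Wiring this behind two Flask routes (a POST that calls `shorten` and a GET that calls `resolve` and redirects) completes the exercise.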

Other fun ones were building a Secret Santa web app (accepts sign up separately and emails everyone on a certain day so it's completely self run) and building a simple version of Galaga (in JavaScript using just canvas, setInterval, and keydown/keyup events).

rtpg 5 hours ago 1 reply      
This looks like a great idea, and I imagine the execution will be great.

Personally, though, I find no excitement in building a log searching tool. Something a bit more magical (I think something involving web scraping or Markov chains would be interesting) would probably entice people to move forward much more.

Not that this tool wouldn't be useful, just that when I look at the end result, it doesn't give me the urge to build it.

libraryatnight 8 hours ago 1 reply      
This is wonderful. I've been doing Learn Python the Hard Way and loving it, this is a really cool addition. I recently picked up the book 'Automate the Boring Stuff with Python,' and one of the best things about it is the practical applications really help drive understanding - at least for me.

At work I write a lot of PowerShell scripts, and I think the reason I took to it so quickly is because the need was there. It was never ambiguous what I was going to build: I knew what I wanted to make easier, what tool I wanted to use to accomplish that goal, and it set me to learning quite quickly.

bigtunacan 4 hours ago 0 replies      
I think this approach to teaching programming should be used more often. A good read for someone looking for something that's already out there is "Cloning Internet Applications with Ruby". [I am in no way affiliated with this book]

It uses Sinatra & DataMapper to take the developer through some simpler projects (URL Shortener, Microblog, etc...)

hypertexthero 6 hours ago 1 reply      
Great to see this! Another great projects-based programming tutorial in Python is http://newcoder.io/
bcoates 7 hours ago 2 replies      
It's odd how much novice programmers are asked to work on blank-slate exercise projects, which strike me as inherently advanced work.

Building up from "hello world" to something with interesting learning-experience challenges involves a lot of boilerplate work. It's almost guaranteed that you won't learn the core skills surrounding managing and limiting complexity in medium and large projects, or even why these things are valuable.

It's not much work to get a build environment up for a real-world program the student might use, then have them do projects to modify it (games are great for this). This can be really rewarding, it exposes the learner to realistic, large-scale code, and it serves to get them a gut feeling for "none of this is magic, it's just a bunch of code".

dimino 7 hours ago 2 replies      
If Zed can navigate the morass that is python package management in a way that makes newbies grok the major points and understand the value of doing things The Right Way, I'd personally buy him a beer or something.
A reimplementation of NetBSD using a Microkernel youtube.com
114 points by agumonkey  9 hours ago   60 comments top 10
agumonkey 9 hours ago 0 replies      
Youtube video description:

Based on the MINIX 3 microkernel, we have constructed a system that to the user looks a great deal like NetBSD. It uses pkgsrc, NetBSD headers and libraries, and passes over 80% of the KYUA tests. However, inside, the system is completely different. At the bottom is a small (about 13,000 lines of code) microkernel that handles interrupts, message passing, low-level scheduling, and hardware related details. Nearly all of the actual operating system, including memory management, the file system(s), paging, and all the device drivers run as user-mode processes protected by the MMU. As a consequence, failures or security issues in one component cannot spread to other ones. In some cases a failed component can be replaced automatically and on the fly, while the system is running, and without user processes noticing it. The talk will discuss the history, goals, technology, and status of the project.

Research at the Vrije Universiteit has resulted in a reimplementation of NetBSD using a microkernel instead of the traditional monolithic kernel. To the user, the system looks a great deal like NetBSD (it passes over 80% of the KYUA tests). However, inside, the system is completely different. At the bottom is a small (about 13,000 lines of code) microkernel that handles interrupts, message passing, low-level scheduling, and hardware related details. Nearly all of the actual operating system, including memory management, the file system(s), paging, and all the device drivers run as user-mode processes protected by the MMU. As a consequence, failures or security issues in one component cannot spread to other ones. In some cases a failed component can be replaced automatically and on the fly, while the system is running.

The latest work has been adding live update, making it possible to upgrade to a new version of the operating system WITHOUT a reboot and without running processes even noticing. No other operating system can do this.

The system is built on MINIX 3, a derivative of the original MINIX system, which was intended for education. However, after the original author, Andrew Tanenbaum, received a 2 million euro grant from the Royal Netherlands Academy of Arts and Sciences and a 2.5 million euro grant from the European Research Council, the focus changed to building a highly reliable, secure, fault tolerant operating system, with an emphasis on embedded systems. The code is open source and can be downloaded from www.minix3.org. It runs on the x86 and ARM Cortex-A8 (e.g., BeagleBones). Since 2007, the website has been visited over 3 million times and the bootable image file has been downloaded over 600,000 times. The talk will discuss the history, goals, technology, and status of the project.

Animats 7 hours ago 6 replies      
That's nice, but late. QNX had that 10-15 years ago. With hard real time scheduling, too.

All you really need in a practical microkernel is process management, memory management, timer management, and message passing. (It's possible to have even less in the kernel; L4 moved the copying of messages out of the kernel. Then you have to have shared memory between processes to pass messages, which means the kernel is safe but processes aren't.)
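As a toy illustration of that division of labor (not QNX's or MINIX's actual API, and with invented names), a kernel that does nothing but register services and route messages might look like this:

```python
import queue

class Service:
    """A user-mode server (driver, filesystem, ...) that only talks via messages."""
    def __init__(self, name, handler):
        self.name, self.handler = name, handler
        self.inbox = queue.Queue()

class Microkernel:
    """The kernel itself only registers services and copies messages between them."""
    def __init__(self):
        self.services = {}

    def register(self, name, handler):
        self.services[name] = Service(name, handler)

    def send(self, dest, msg):
        self.services[dest].inbox.put(msg)

    def dispatch(self, name):
        # Deliver one message to a service and run its handler.
        svc = self.services[name]
        msg = svc.inbox.get()
        try:
            return svc.handler(msg)
        except Exception:
            # A crashed service takes down only itself; here the
            # "restart on failure" role is a fresh instance of the handler.
            self.register(name, svc.handler)
            return None
```

The point of the sketch is the isolation property: a handler that throws is replaced without touching the kernel loop or any other service, which is the behavior the MINIX talk demonstrates with real device drivers.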

The amusing thing is that Linux, after several decades, now has support for all that. But it also has all the legacy stuff which doesn't use those features. That's why the Linux kernel is insanely huge. The big advantage of a microkernel is that, if you do it right, you don't change it much, if at all. It can even be in ROM. That's quite common with QNX embedded systems.

(If QNX, the company, weren't such a pain... They went from closed source to partially open source (not free, but you could look at some code) to closed source to open source (you could look at the kernel) to closed source. Most of the developers got fed up and quit using it. It's still used; Boston Dynamics' robots use it. If you need hard real time and the problem is too big for something like VxWorks, QNX is still the way to go.)

nickysielicki 6 hours ago 3 replies      
I am 100% on the side of Linus Torvalds when it comes to microkernels.[0]

I will concede that in some instances a microkernel may outperform a monolithic kernel in stability, performance, or both. Still, I am not the least bit excited about any progress made in microkernels; I feel it can only result in much more closed systems that are easier to implement in ways that make them harder to modify. This is why I wish for Hurd to continue to fail.

[0]: http://www.oreilly.com/openbook/opensources/book/appa.html

fmstephe 8 hours ago 1 reply      
That is very exciting. I am glad to hear about fairly substantial amounts of money being granted for this kind of project. I wish them well, but I won't be jumping on board this bus for a while.
cyber 7 hours ago 0 replies      
This is pretty cool. It would be neat to see some of this technology folded back into NetBSD (potentially with the already existing modules infrastructure).
luckydude 7 hours ago 1 reply      
He lost me when he said "start a new window" as the work around to not having job control.

Neat idea but seems nowhere near done.

And as others have said, this was nicely handled by QNX way more than 15 years ago; I was running multiple users on an 80286 around 1986 or so. Really neat system.

boardwaalk 6 hours ago 0 replies      
I was trying to install minix 3.3 for fun and ran into a bug in the e1000 driver that caused VirtualBox to throw up. It's fixed already, but not in 3.3:


virtio seems to be working.

vezzy-fnord 8 hours ago 3 replies      
MINIX 3 has been based on the NetBSD userland since the beginning, I think. That said, always interesting to hear Tanenbaum talk and the dynamic upgrade/checkpointing features sound interesting.
codezero 7 hours ago 4 replies      

 The latest work has been adding live update, making it possible to upgrade to a new version of the operating system WITHOUT a reboot and without running processes even noticing. No other operating system can do this.
Can anyone correct me if I'm wrong but can't Linux do this with Ksplice, and the more recent live kernel patching by Red Hat?

mzs 6 hours ago 0 replies      
Oh wow, mklinux is still around... http://www.mklinux.org/

Sorry, this really has nothing to do with the video; just that the tangential thought of Linux on Mach made me wonder, and I was pleasantly surprised.

Algorithmic surrealism: A slow-motion guide to high-frequency trading suitpossum.blogspot.com
118 points by Gigamouse  13 hours ago   18 comments top 7
Xcelerate 2 minutes ago 0 replies      
> "People are routinely worried about harmless things, and routinely completely unworried about incredibly harmful things."

God what a quote. I'm stealing this and using it everywhere I can. It basically sums up my entire attitude toward humanity.

washedup 9 hours ago 1 reply      
This is one of the better overviews of HFT I have come across. It hits on all the points I consider to be important (having worked in HFT for over six years), especially the Techno-Leviathan, the trader psyche, and the perception of constant "war". I interviewed at Ronin once, and at the time I was super impressed by the beauty of their offices. Reading this helped me realize what a fucking circus it really is.

One facet which always fascinated me was the dispersion of trading ideas, including the code behind algorithms and any sort of research. Successful ideas are constantly being updated, adapted, and oftentimes stolen. Traders are generally hired for the trading strategies they have been exposed to and the potential value within. There are very few individuals who create new and successful ideas. The rest are just copying what they have been exposed to and hoping that it sticks when they throw it at the wall, which eventually runs each successful idea into the ground as the value being captured disappears quickly.

Either way, it was a great school for learning how to program and use statistics effectively.

As my interviewer at Ronin told me after I failed the interview (we both knew it): "This is all a game, you just need to learn the rules"

themartorana 6 hours ago 3 replies      
Having zero real knowledge of trading in any fashion at all, recently I've been wondering if there was a niche for "medium-speed trading." I know I can't get close to the exchanges, and I don't have the capital to hire experienced trader/engineers to develop the latest algorithms.

Basically, is there a slice of the pie in trading much faster than humans, but much slower than HFT?

It's an academic exercise, but one I've been toying with.

MichaelCrawford 4 hours ago 2 replies      
I have some experience with quantitative investment coding and so am frequently asked to apply for HFT jobs.

I don't pursue it because I came to regard the practice as unethical.

crimsonalucard 11 minutes ago 0 replies      
All this technical power being focused on gambling. I'm not one of those people who thinks HFT is unethical. However, it's still definitely a waste of humanity's resources.
brentis 2 hours ago 0 replies      
It's only going to get worse (or better) depending on how you look at it. So many behavioral factors will be included to shake stops and squeeze shorts at just the right time to make huge profits from the bounce. It's part of the reason as a trader I'm now just chasing Momentum, ignoring most other signals as noise. So much so, I'm building an app that supports discovering these momentum breakouts. In the unlikely event anyone is interested - www.mometic.com
Natsu 5 hours ago 0 replies      
> if you'd like to support my ongoing Creative Commons writing, please consider buying me a virtual beer.

In the spirit of the article's talk about financialization, I wonder if there's yet a way to buy the author virtual beer options?

Aneurysm lithub.com
134 points by Vigier  13 hours ago   25 comments top 12
justifier 9 hours ago 0 replies      
"I rarely clip aneurysms now. All the skills that I slowly and painfully acquired to become an aneurysm surgeon have been rendered obsolete by technological change.

Although neurosurgery is no longer what it once was, the neurosurgeon's loss has been the patient's gain."

so nice to read this

many of the entrenched remain so through superstition at the loss of progress,

but to read someone speak highly of a practice that rendered a section of their skill set obsolete is heartening

i have family who work in trauma and they condescendingly balk at my excitement over new medical technology:



personally, i look forward to bones:


gregpilling 2 hours ago 1 reply      
My son had surgery for 3 aneurysms 5 years ago. At 8am he was a normal 6 year old boy, going with his mom to LA Fitness. Then at 8:30 I got a call that he couldn't get up off the floor. An MRI showed 3 golf ball sized aneurysms, and they put him in a helicopter from Tucson to Phoenix, about 100 miles apart. I was pretty crippled by fibromyalgia at the time, my wife was 7 months pregnant, and they wouldn't let us fly with him. We cried driving two hours to learn if our kid lived or not.

They used superglue to plug the fistula at Barrow's in Phoenix, Arizona; they went up through the leg. http://www.thebarrow.org/index.htm Amazingly, he was walking again within 4 days and home within 7 days. I have a friend that had the same surgery 5 years before; she was there for months recovering from the clip method.

My son was ok for 4 1/2 years, but for unknown reasons he had another brain issue in December that caused temporary paralysis for a few hours. They went spelunking again and found nothing. They did decide that the 5 year old repair had healed perfectly. Dr. Cameron McDougall the surgeon was just beaming with happiness when he told us that.

My kid is doing well, he is pretty smart. He attends the hardest charter school in Arizona, BASIS. He is 11 years old, and I am his angel investor (Total investment about $500) in his website http://legimon.com which is his version of Pokemon. Total revenue so far is $282.00 and about $40 in profit for some t-shirts. He was quite amazed when I told him how long he would have to scrub toilets at McDonalds to make that much money. We then had an hour long discussion about trademark law; he knew more than I did. He has been doing all the wordpress and photoshop work lately, and he works about 3 hours a day on it, everyday for the last three years. He has developed 400 characters and tells me that he is not stopping until he passes Pokemon's 700+.. He has planned out the app, the Xbox game, the VR headset upgrade for gokart racing, and the theme park. I couldn't be more proud of him.

I keep a photo of him in the hospital taken right after they used a cordless drill to put a hole in his skull. He looks like a corpse: his head is half shaved, his spine is twisted sideways, and the drain from his skull is bright orange. I use it when I am having a bad day at work, as a reminder of what a bad day really is.

Ask me anything.

mrbill 10 hours ago 0 replies      
Just sent this link to a friend of mine who suffered from a full-bleed aneurysm a few years ago. She was lucky enough to be out in public when it happened, and got to a hospital in time for surgery (couldn't do the noninvasive method described in the excerpt).

Two weeks later they sent her home! It was amazing. Doctors told her that "Fifty percent of people who have one never make it to the hospital, fifty percent of those never make it home."

She's had some personality changes (to be expected with such a traumatic brain injury), but nothing horrible, and I'm amazed at modern medical science.

One of the hardest parts (emotionally, for her) was me helping her shave her head after she got home; they'd had to do part of it anyway for the surgery, and two weeks in the hospital hadn't done what was left any good as it was matted and beyond fixing. Shave a bit, let her cry, shave a bit, let her cry... until it was all done.

jacquesm 10 hours ago 1 reply      
"All the skills that I slowly and painfully acquired to become an aneurysm surgeon have been rendered obsolete by technological change. Instead of open surgery, a catheter and wire is passed through a needle in the patients groin into the femoral artery and fed upwards into the aneurysm by a radiology doctornot a neurosurgeonand the aneurysm is blocked off from the inside rather than clipped off from the outside."

Wow. That's a most unexpected way of fixing things in the brain!

This subject never ceases to interest me because both my parents have had aneurysms, my dad died of them (he had three over the course of several years), my mom recovered (she had one which got clipped exactly as described in the article), both had paralysis effects, and both were heavy smokers.

Incredible how reading that story affected me, worse than any movie I've ever watched.

ggreer 10 hours ago 1 reply      
If you've read Marsh's Do No Harm and you're interested in more stories like it, I strongly recommend When the Air Hits Your Brain: Tales from Neurosurgery[1]. Its author-surgeon (Frank Vertosick) lacks much of the compassion shown by Marsh, but the stories cover his failures more than successes. I find failures and mistakes more interesting, as there are countless ways in which an operation can go wrong, but only one way it can succeed. Though sometimes morbid, the cases are always fascinating.

1. http://www.amazon.com/When-Air-Hits-Your-Brain-ebook/dp/B006...

vinceguidry 11 hours ago 1 reply      
I have the book this passage was excerpted from. It is phenomenal. Highly recommended. One chapter has him visiting a Ukrainian neurosurgery clinic not long after independence, and somehow getting involved in a titanic bureaucratic struggle. Utterly fascinating.
shenanigoat 9 hours ago 0 replies      
It bugs me so much that the huge font, single word title is a typo.
branchless 4 hours ago 0 replies      
I felt so nervous reading that I actually felt a little ill.

How people do these jobs I cannot know. Yet somebody has to. Amazing.

wumbernang 11 hours ago 3 replies      
Comedy timing for me.

My father was taken into hospital 4 hours ago after collapsing and turns out he's got a brain bleed. They stuck him in a CT straight away and are now doing a lumbar puncture to see if there is any blood in the spinal fluid. They don't know what has precisely happened but they suspect an aneurysm that went undetected. The mortality rate of this event is 50% in 30 days. He's had a long history of hypertension and a couple of surgeries to clean out arteries in his neck.

Not sure why I'm sitting here on HN but it was taking my mind off things. Fail :(

Anyway, moral of the story: Don't smoke and don't eat piles of shit; it'll get you one day.

Edit: Ordered the book as well now like an idiot. Scary tale when you're close to it but I find comfort in knowing things rather than ignorance.

Thanks for posting this.

madaxe_again 8 hours ago 1 reply      
Lovely piece, shame whoever posted it can't spell - stonking great big title reads "aneursym", rather than "aneurysm".
rudolf0 10 hours ago 1 reply      
Very well-written. The suspense was absolutely killing me as well.
pgrote 11 hours ago 0 replies      
Wonderfully told. Thanks for posting it.
A Look Inside Google's Data Center Networks googlecloudplatform.blogspot.com
144 points by cjdulberger  12 hours ago   24 comments top 8
lstamour 11 hours ago 1 reply      
See also: http://www.wired.com/2015/06/google-reveals-secret-gear-conn... again, not much info, but puts it in context for a less technical audience...
Splendor 11 hours ago 4 replies      
I'm excited to see this type of information from Google, but this post seems more like an announcement that they released information and less like actual information.
mark_l_watson 6 hours ago 1 reply      
I think that Google is starting a marketing blitz to compete with Amazon and Microsoft for cloud services.

Inside Google, their technology is awesome but I think they need to get very solid in customer support to compete. You can quickly get tech help on the phone from Microsoft and Amazon, and Google needs to match that. That said, I have never signed up for their premium support so I might not be totally fair in this criticism.

fs111 9 hours ago 1 reply      
This would be a way better article if they kept the "ZOMG WE ARE SO AWESOME" and "WOW, WE INVENTED IT ALL" tone out.
hueving 8 hours ago 1 reply      
If any company other than Google posted a fluff piece like this, it would never see the front page! Are we really excited by an announcement about a future publication sprinkled with allusions to how much better they are than everyone else?
Oculus 2 hours ago 0 replies      
SafdarIqbal 6 hours ago 1 reply      
These kinds of herculean efforts do make sense at Google's scale (at least from my reading of their papers & blog posts), but are there advantages for data centers operating at smaller scales to adopt some of the approaches used here e.g. custom-built commodity hardware-based network switches, SDN-based central controllers etc.?

EDIT: Maybe a better question is, at what point do you think about eschewing the traditional Cisco/Juniper gear and looking towards these techniques?

_spoonman 11 hours ago 1 reply      
I clicked the article, scrolled down a bit and said, "...that's it?"
Who Has Your Back? Government Data Requests 2015 eff.org
137 points by FredericJ  13 hours ago   32 comments top 8
0xCMP 11 hours ago 1 reply      
I'd like to point out something related to what others have already said. First, they've pointed out the seemingly illogical picking of companies: Snapchat but not Instagram (maybe because it's part of Facebook?), AT&T but not T-Mobile, etc.

Another issue here is that by looking at the past reports you see how quickly one company is the favorite and soon becomes the ugly stepchild. The columns with stars are also changing to what sound like very vague and lax requirements compared to the year before.

I didn't see any explanation there of why. For instance, they took out the "requires warrant" column. I wonder if companies are contributing to the EFF and so the EFF feels the pressure to make these companies look good in the face of this new Snowden era. For instance, isn't it great that Apple now has 5 stars as it's starting its big "we're private" push while Google is now very low compared to previous years? And how about Twitter? They used to be a poster child for good behavior as far as companies go.

suprgeek 7 hours ago 1 reply      
Usually the EFF does a good job with these reports, but you've got to wonder with a company like Dropbox.

- Condi Rice is on the board of directors - an avowed supporter of NSA warrantless wiretaps

- Users cannot control their own keys in a way that would make it impossible for Dropbox to hand over data to the Govt. when faced with an NSL or whatever other BS demand

And they get 5 stars for "Having our Backs" (!)

Splendor 11 hours ago 1 reply      
I'm interested in why the EFF chose these companies to rate. For example, rating AT&T and Verizon but not Sprint and T-Mobile seems odd to me. Rating Snapchat but not Instagram almost makes sense because they're rating Facebook, but then they've rated WhatsApp separately.
ywecur 11 hours ago 1 reply      
The only ones that actually have your back are those that use encryption to make data collecting impossible.
afsina 12 hours ago 3 replies      
Why does Google have only 3 stars?
ytdht 5 hours ago 0 replies      
Microsoft opposes backdoors, but not a very similar process that allows "legitimate legal requests" to be fulfilled...
yuvadam 12 hours ago 0 replies      
Curious if there's any project that aggregates all the transparency data into a nice CSV; it could be useful for charting and tracking trends.
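As a hedged sketch of what such an aggregation could look like (the input format here is invented, not anything the EFF publishes), pivoting per-year star ratings into a CSV is only a few lines of Python:

```python
import csv
import io

def ratings_to_csv(reports):
    # reports: {year: {company: stars}} -- one row per company,
    # one column per year, blanks where a company wasn't rated.
    companies = sorted({c for yearly in reports.values() for c in yearly})
    years = sorted(reports)
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["company"] + [str(y) for y in years])
    for company in companies:
        writer.writerow([company] + [reports[y].get(company, "") for y in years])
    return buf.getvalue()
```

From there, charting a company's trend over time is a one-liner in any plotting tool.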
dimino 8 hours ago 0 replies      
The stars should link to sources of each of these categories, that'd be cool.
Following a Select Statement Through Postgres Internals codeship.com
80 points by adamnemecek  11 hours ago   5 comments top 4
Animats 6 hours ago 0 replies      
Aw, just when it was getting good, it cuts off for the next episode, where the table has an index. This episode only covers what happens with no index.

It gets really interesting when the query involves multiple tables and indices, and the query optimizer has a choice of strategies. You don't really get the benefits of a full SQL query engine until you ask it to do something hard.
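The difference between the two scan types can be shown with a toy model (this is an illustration of the concept, not Postgres's actual executor, and the function names are invented):

```python
def seq_scan(rows, col, value):
    # Examine every row, like Postgres with no usable index.
    return [r for r in rows if r[col] == value]

def build_index(rows, col):
    # Map each column value to the positions of matching rows,
    # roughly what a (non-unique) index provides.
    idx = {}
    for i, r in enumerate(rows):
        idx.setdefault(r[col], []).append(i)
    return idx

def index_scan(rows, idx, value):
    # Jump straight to the matching rows via the index.
    return [rows[i] for i in idx.get(value, [])]
```

Both return the same rows; the interesting part, as noted above, is that once several tables and indexes are involved the planner has to choose between strategies like these based on cost estimates.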

aidos 7 hours ago 0 replies      
I really enjoyed that. The Postgres docs have a really great overview of this stuff too. I started reading it one evening and ended up going through it all because it's so interesting and presented in a really approachable way.


mozumder 3 hours ago 1 reply      
If someone were to do a C-based API server, does Postgres have an API to interface with it at the PLAN or EXECUTE level, to save from long query processing? Say, a query with dozens of joins and columns, and you want to shave off 500 microseconds? Libpq doesn't seem to have something like that.
jaytaylor 8 hours ago 0 replies      
This article superbly bridges the (usually very large) gap between Database-Theory and Databases-in-Reality.

It'd also be interesting to learn about how MySQL and Postgres differ in terms of how they process queries internally. I'm sure there'd be interesting tradeoffs all over the place.

Changing branch alignment causes swings in performance gmane.org
43 points by luu  8 hours ago   7 comments top 4
rayiner 6 hours ago 0 replies      
A fuller description of the post-decode uop cache (with pictures!) is here: http://www.realworldtech.com/haswell-cpu/2.

Note that there are two paths for instructions: one, from the L1 icache through the traditional decoders into the instruction queue, and another a post-decode cache directly into the instruction queue. There are numerous advantages to the cache, such as power saved by idling the decode logic, as well as bypassing the 16-byte fetch restriction (which has been a feature of the architecture since the Pentium Pro days).

The gist of the surprising behavior is that the processor cannot execute out of the uop cache if a given 32-byte (naturally aligned) section of code decodes to more than 3 lines of 6 uops each (with the catch being that a branch ends any given line). In that case it falls back to the traditional instruction fetch/decode. Depending on the alignment of branches, you may or may not run into this limitation on an otherwise identical sequence of instructions.
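That packing constraint can be captured in a toy model. The following sketch is a simplification of the behavior described above (real uop-cache rules have more cases), with invented function names:

```python
def uop_cache_lines(window):
    """window: [(uop_count, ends_in_branch), ...] for one aligned 32-byte chunk.
    Simplified model: each cache line holds up to 6 uops, and a branch
    terminates the current line."""
    lines, cur = [], 0
    for uops, is_branch in window:
        if cur + uops > 6 and cur > 0:  # current line is full: start a new one
            lines.append(cur)
            cur = 0
        cur += uops
        if is_branch:                   # a branch ends the line
            lines.append(cur)
            cur = 0
    if cur:
        lines.append(cur)
    return len(lines)

def fits_uop_cache(window, max_lines=3):
    # More than 3 lines for one 32-byte window means the processor
    # falls back to the legacy fetch/decode path for that code.
    return uop_cache_lines(window) <= max_lines
```

The model makes the alignment sensitivity visible: shifting a branch across a 32-byte boundary changes which window it lands in, and therefore whether that window still fits in 3 lines.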

userbinator 5 hours ago 0 replies      
I would be wary of microbenchmarks like this, especially when the faster sequence is bigger - keeping as much in cache as possible is more important for newer processors, and fetching NOPs wastes bandwidth without doing any useful work. A faster sequence of code won't be faster anymore if, upon exiting it, something else has to stall due to a cache miss. Pushing the function to the next alignment boundary might move the one after it as well, causing a cascade effect. If you can rearrange the code to spread out the jumps without making it bigger, that would be the best way to go.
nhaehnle 7 hours ago 0 replies      
If anybody else is having trouble accessing the presentation linked as an attachment: the download from the original LLVM bug at https://llvm.org/bugs/show_bug.cgi?id=5615 appears to be okay.
abc_lisper 6 hours ago 1 reply      
Instruction alignment is very important for performance. I remember a similar slow down when working on a VM for Itanium. The architecture manuals for processors usually describe this in detail.
AT&T fined $100M after slowing down its unlimited data washingtonpost.com
343 points by nvr219  13 hours ago   147 comments top 34
mangeletti 12 hours ago 7 replies      
To put this into perspective:

$100MM is 0.0759878% of AT&T's 2014 gross revenue, so less than 1/10th of 1%.

That's like earning a $100,000/yr salary, and then paying a $75.99 fine. It's basically less than your average speeding ticket.
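The arithmetic checks out, assuming the roughly $131.6B 2014 revenue figure the quoted percentage implies (AT&T's reported figure was close to this):

```python
fine = 100e6          # the $100M FCC fine
revenue = 131.6e9     # assumed 2014 AT&T gross revenue implied by the percentage
share = fine / revenue

print(f"{share:.6%}")             # ~0.075988% of gross revenue
print(f"${100_000 * share:.2f}")  # the same share of a $100k salary: ~$75.99
```

Whatever exact revenue figure is used, the order of magnitude of the comparison holds.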

istvan__ 12 hours ago 9 replies      
We should stop lawyers from re-defining words like unlimited. We should make sure that if somebody says unlimited in an advertisement or product description, it really means unlimited. I know, I am an idealist. :)
JoshGlazebrook 11 hours ago 1 reply      
I kind of saw this coming after the whole Verizon fiasco, when they tried to throttle their LTE network, the FCC and media made it a frenzy, and they backed down. But then again, the main wireless spectrum Verizon uses for the base layer of its LTE network has open access rules attached to it that pretty much forbid throttling any devices using the spectrum and force them to allow any capable device on their network.

I'm glad I still have my Verizon unlimited data plan. I renewed my contract (the unlimited line is out of contract in August 2016) by using the transfer upgrade loophole last year. But they are the only carrier that does not throttle their LTE network at all, and they also allow you to officially pay for unlimited tethering, something no other carrier has ever offered. On top of that, the open access rules attached to the C block of the 700MHz spectrum they use let me pop my SIM card into a dedicated LTE router, tablet, hotspot, etc., even devices that Verizon stores refuse to activate for you, like a T-Mobile-bought iPhone or any device that is not sold as "for Verizon". If it's unlocked and works on the network, you can pop your SIM card into it and it will just work.

baldfat 12 hours ago 0 replies      
The general public can be swayed into forgetting that Internet data is not a commodity. People treat data like a limited resource that must be grown and harvested by the ISP, and think it is unfair that you use more of it.

I try to explain that data is more like a pipe: at certain times, not all of it can get through at once. If this were really about protecting their network, they would just throttle during "peak" times, not 24 hours a day. I still feel this is a move toward charging for the amount of data rather than for access speed.

japhyr 13 hours ago 2 replies      
The fine, which AT&T says it will fight, is the largest ever levied by the agency.

Does anyone know how likely this fine is to stick? It sounds like a significant fine to me, but I wonder if these kind of fines are often appealed down.

madaxe_again 12 hours ago 2 replies      
It'll be a cold day in hell before this sticks. They'll use every slippery tactic in the book to justify it and to fight it, they'll bribe^Wlobby the appropriate parties to legally define "unlimited" as "limited", and even if they are stuck with it, they'll just not pay.

I mean, what, are they going to arrest executives? Give me a break. There's no recourse either way.

rasz_pl 11 hours ago 0 replies      
There are countries where you can't simply LIE in commercial/promotional material. I remember the case of Apple being fined and its ad campaign pulled when it claimed to be selling "the world's fastest, most powerful personal computer" (PowerPC times).

On the other hand, in my country it's OK for actors to lie about being doctors in commercials :/ ("I'm a doctor and X is best for you")

Zekio 13 hours ago 1 reply      
Throttling speeds after a certain amount of data is not equal to unlimited... serves them right for using "Unlimited" wrongly :)
fnordfnordfnord 10 hours ago 0 replies      
If this were a just world, in order to appeal the fine AT&T would have to first pay the fine, Net 30, and deal with the federal courts via an outsourced call-center in order to receive a credit on their account.
CRASCH 12 hours ago 2 replies      
I think this falls under a reasonable interpretation of "unlimited".

A reasonable person would understand that there are bandwidth limits, both technological and environmental. A reasonable person would expect that the level of service they signed up for would continue or get better over time.

I see two issues.

One is that after a certain amount of data is used they limit bandwidth. If you limit something it is hard to call it unlimited.

The other issue is that early on throttling was not in place. They specifically added throttling to entice users to switch to more lucrative data plans.

jdlyga 12 hours ago 2 replies      
AT&T is still doing this as of yesterday. I just got a text that I've used 75% of my "unlimited" plan
ytdht 11 hours ago 1 reply      
I think AT&T should be fined (or be the target of a class action lawsuit) for constantly lying to customers/future customers... the most common example being lying about U-verse being fiber-optic going to their customers' homes (while it only goes to a central box in the neighborhood).
bede 5 hours ago 0 replies      
T-Mobile UK (now largely assimilated by the EE mothership) comprehensively denied the existence of an 18-hours-a-day 4Mbps throttle placed on its unlimited plans [1] for several years before getting in trouble with the regulators. As far as I'm aware they weren't even punished, which is a shame given how blatantly deceptive their practices were.

This strikes me as a reasonable fine. Well done FCC.

[1] http://www.techradar.com/news/phone-and-communications/mobil...

negrit 11 hours ago 1 reply      
The issue with this kind of fine is that the profit is greater than the fine so they will continue to do shady things like this.

Also people in charge for approving this should be held accountable.

lewisl9029 5 hours ago 0 replies      
Any idea what the legal landscape for these kinds of issues is like in Canada?

Wind Mobile also advertises unlimited plans yet throttles starting at a mere 3GB...


Granted, their true rates are still better than their big telecom counterparts, but I still find this distasteful as a marketing tactic.

treha 1 hour ago 0 replies      
What do the BURNED customers get? ZERO?
johnpococito 1 hour ago 0 replies      
Nobody said unlimited means fast internet all the time... and they inform you in the terms that internet speed will decrease! People must start using their brains and start thinking, or you will end up like Europe (the government will forbid you everything and tell you everything you can or can't do... how much you can earn and what kind of light bulb you are allowed to buy in a shop... ehh).
beambot 12 hours ago 2 replies      
[Sorta OT...] Ugh, now we just need to goad Comcast into improving their peering.

It's pretty sad when the TV viewing experience is better via torrents than Netflix. Comcast is doing some serious throttling.... For me, the Netflix stream is all pixelated, yet we can pull the entire hour-long HD content via torrent in ~5 minutes. Something is amiss.

sschueller 12 hours ago 0 replies      
Swisscom in Switzerland sells unlimited data plans that are capped at different speeds depending on how much you pay per month. Just like a DSL or cable plan.

I find this a lot more fair than selling unlimited that isn't. Or killing grandfathered accounts by capping them.

rail2rail 12 hours ago 2 replies      
> But consumers are unlikely to receive any money from the fine, which will go instead to the U.S. Treasury, said the agency official.

Well why the hell not?? If we were the wronged party, should we not benefit from the settlement directly?

revelation 11 hours ago 0 replies      
We need to stop calling this practice "slowing down" or "throttling". If you are slowed down, you'll be limited to 56kbit or less, by artificially induced packet loss. At this point, most websites and other internet services will just completely stop working as the massive packet loss suffocates any payload.

It's like advertising unlimited miles on a rental car, then slowing it down to 5mph after 200 miles. Sure, the car still moves, but you can't practically use it for anything.

mamcx 11 hours ago 2 replies      
This is so sad.

The fine is paid to somebody other than the victim.

It's like when Intel got fined for screwing AMD, and the money went to some EU institution: why not pay it to the victim?

That is why this is stupid, and no justice at all.

newobj 12 hours ago 1 reply      
Umm, did they STOP throttling in addition to this settlement? It feels like they did. I was getting throttled like crazy in March and April (I have no internet at my house other than AT&T LTE for stupid reasons, so I have to tether all the time), but in May and June, I seem to mostly never get throttled anymore... or if I do it's much more modest. Anyone else notice a change?
codazoda 12 hours ago 1 reply      
T-Mobile throttles my "unlimited" family plan. The main number gets 3GB and each additional line gets 1GB before being throttled. Are they also on the radar, or is it less of a problem for them because they give you the throttle data up-front (while still using the word "unlimited")? In reality, however, when you hit your limit it becomes almost unusable.
deegles 12 hours ago 1 reply      
My unlimited plan gets throttled after 5GB usage. From what I understand, a 30GB family data plan won't get throttled until the 30GB are used up. If this is still true, how is it that throttling at 5GB is for "network management"?
d0ugie 11 hours ago 0 replies      
By the way, go here if you'd like to request a Project Fi invite: https://fi.google.com/signup
allsystemsgo 12 hours ago 0 replies      
I received a text just the other week from ATT letting me know I reached 75% of the 5GB network management threshold, and that I may experience reduced data speeds. Anything I can do about this now?
twoodfin 12 hours ago 0 replies      
I'll be shocked if after this AT&T continues to grandfather in their "Unlimited" plans.

Which is too bad, because mine is a really great deal even treated as a 5GB/device plan.

random778 10 hours ago 0 replies      
I'd like the fine to be in the form of refunding affected customers for the period they were defrauded.
calbear81 11 hours ago 0 replies      
What's the likelihood this will lead to some type of compensation for unlimited data users that were throttled?
dsp1234 12 hours ago 0 replies      
I'd like to see something like on the packages of food:

"No artificial limiters added"

williesleg 6 hours ago 0 replies      
So that means our rates are going up again.

So sick and tired of these hidden middle-class taxes.

flippyhead 6 hours ago 0 replies      
So awesome.
ianstallings 11 hours ago 0 replies      
Can we classify this as a revenue generating legal briefing on the net neutrality issue? Or is that wishful thinking?
Welcome Amy, Susan, Colleen, and Steven ycombinator.com
79 points by BIackSwan  11 hours ago   28 comments top 11
boomzilla 10 hours ago 1 reply      
I've always wondered how YC is structured legally. Are they an LLC, or an S corp? How is the corporation organized?

It seems to be a very efficient organization for a group of smart people to put up some money and make investment decisions together.

B4CKlash 10 hours ago 3 replies      
An office manager with a BS in Biomedical Engineering...Wow
pbiggar 7 hours ago 0 replies      
Interesting that none of these are partners (all previous announcements have been partners, I think).

Also super interesting to hire a psychologist! Founder breakups and founder dynamics are among the hardest things about early-stage startups (as YC has said before), so this seems an important step in trying to prevent those and lead to overall more successful companies and investments! I wonder how long before every investment firm will require cofounders to go to couples counseling like the Genius folks do.

minimaxir 10 hours ago 1 reply      
What does an Editorial Director's role entail in the context of YC? Just the http://blog.ycombinator.com ?
fitzwatermellow 6 hours ago 0 replies      
Love those TC Cribs segments with Colleen! Does this mean more video content for the Youtube channel? Though it may prove a tough beat to cover when every founder is like "we're in stealth mode, get that camera away from here..." ;)
walterbell 9 hours ago 2 replies      
What's involved in a "Chief of Staff" role, is it like an Operations Manager?
JoblessWonder 7 hours ago 0 replies      
Are these jobs posted somewhere? Or are they strictly word of mouth?
CrackpotGonzo 5 hours ago 0 replies      
Go Amy! Turtles all the way down.
sachinag 10 hours ago 0 replies      
Is this the first non-partner investment professional at YC?
spcoll 11 hours ago 0 replies      
Congrats everyone! Glad to see more women in key roles at YC : )
nphyte 10 hours ago 0 replies      
More women + a psychologist + possible growth fund. YC is killing it
Ask HN: What are the best tools for analyzing large bodies of text?
49 points by CoreSet  13 hours ago   28 comments top 18
drallison 12 hours ago 3 replies      
It seems to me that you are approaching this from the wrong direction. Given that you have a large body of text, what is it you want to learn about/from the text? Collecting and applying random tools and making measurements without some inkling about what you want or expect to discover makes no sense. Tell us more about the provenance of your corpus of text and what sort of information you want to derive from the data.
rasengan0 13 minutes ago 0 replies      
>a project that requires me to scrape a large amount of text and then use NLP to determine things like sentiment analysis, LSM compatibility, and other linguistic metrics for different subsections of that content.

I ran into a similar project and found these helpful for working with the unstructured data:

https://textblob.readthedocs.org/en/dev/

https://radimrehurek.com/gensim/
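To give a flavor: TextBlob reduces sentiment scoring to `TextBlob(text).sentiment.polarity`. A dependency-free toy version of the same idea (the lexicon and scores below are invented purely for illustration) might look like:

```python
# Toy lexicon-based polarity scorer in the spirit of TextBlob's
# sentiment.polarity. The lexicon is made up; real libraries ship
# large curated lexicons or trained models.
LEXICON = {"good": 1.0, "great": 1.0, "bad": -1.0, "terrible": -1.0}

def polarity(text):
    """Mean lexicon score over words that hit the lexicon, in [-1, 1]."""
    words = (w.strip(".,!?") for w in text.lower().split())
    hits = [LEXICON[w] for w in words if w in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

print(polarity("Great service, great food."))            # 1.0
print(polarity("A good idea with terrible execution."))  # 0.0
```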

codeonfire 4 hours ago 1 reply      
If you want high performance and simplicity, why not use flat files, bash, grep (maybe parallel), cut, awk, wc, uniq, etc.? You can get very far with these, and if you have a fast local disk you can get huge read rates. A few GB can be scanned in a matter of seconds. Awk can be used to write your queries. I don't understand what you are trying to do, but if it can be done with a SQL database and doesn't involve lots of joins, then it can be done with a delimited text file. If you don't have a lot of disk space you can also work with gzipped files, zcat, zgrep, etc. I would not even consider distributed solutions or nosql until I had at least 100GB of data (more like 1TB of data). I would not consider any sort of SQL database unless I had a lot of complex joins.
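For concreteness, a minimal sketch of that flat-file workflow (the two sample files below are fabricated stand-ins; point the pipeline at your own directory of text files):

```shell
# Build a tiny stand-in corpus, then compute word frequencies with
# nothing but coreutils. The same pipeline scales to multi-GB inputs.
mkdir -p corpus
printf 'The cat sat on the mat\n' > corpus/a.txt
printf 'the dog sat\n' > corpus/b.txt

cat corpus/*.txt \
  | tr '[:upper:]' '[:lower:]' \
  | tr -cs '[:alnum:]' '\n' \
  | sort | uniq -c | sort -rn | head -5
```

The same shape handles per-file stats (`wc`), field extraction (`cut`, `awk`), and dedup (`sort -u`); the `zcat`/`zgrep` variants keep the corpus compressed on disk.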
lsiebert 8 hours ago 3 replies      
What social science?

You shouldn't be generating the text in advance and then processing it. You should be dynamically generating the text in memory, so you basically only have to worry about the memory for one text file at a time.

As for visualizations, R and ggplot2 may work (R can handle text and data munging, as well as sentiment analysis, etc.). It may be worth learning as a social scientist.

ggplot2 has a python port.

That said, you are probably using nltk, right? There are some tools in nltk.draw. There is probably also a user's mailing list for whatever package or tool you are using, consider asking this there.

nutate 6 hours ago 0 replies      
Right now the fastest alternative to nltk is spaCy (https://honnibal.github.io/spaCy/); definitely worth a look. I don't know what you're trying to do with the permutations part, but it seems like you could generate those on the fly through some reproducible algorithm (such that some integer seed describes the ordering in a reproducible way) and then just keep track of the seeds, not the permuted data.
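A sketch of the seed-tracking idea (the function name is mine, not from any particular library):

```python
import random

def permuted(items, seed):
    """Reproduce one permutation from its integer seed instead of storing it."""
    rng = random.Random(seed)  # independent generator; global state untouched
    out = list(items)
    rng.shuffle(out)
    return out

docs = ["doc1", "doc2", "doc3", "doc4"]
# Store only the seeds; each permutation can be regenerated on demand.
assert permuted(docs, 42) == permuted(docs, 42)
assert docs == ["doc1", "doc2", "doc3", "doc4"]  # original order preserved
```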
hudibras 2 hours ago 0 replies      
koopuluri 7 hours ago 0 replies      
What tools exactly are you using in Node and Python? Python has a nice data analysis tool Pandas(http://pandas.pydata.org/) which would help with your task of generating multiple permutations of your data. Check out plot.ly to visualize the data (it integrates well with a pandas pipeline from experience); It would also help if you mentioned exactly what kind of visualizations you're looking to create from the data.

With regards to your issue of scale, this might help: http://stackoverflow.com/questions/14262433/large-data-work-...

I had similar issues when doing research in computer science, and I feel a lot of researchers working with data have this headache of organizing, visualizing and scaling their infrastructure along with versioning data and coupling their data with code. Also adding more collaborators to this workflow would be very time consuming...
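As a concrete sketch of the pandas workflow for per-subsection metrics (the column names and scores below are invented for illustration):

```python
import pandas as pd

# One row per scraped document, with whatever linguistic metrics you computed.
df = pd.DataFrame({
    "doc":       ["a.txt", "b.txt", "c.txt", "d.txt"],
    "section":   ["intro", "intro", "body", "body"],
    "sentiment": [0.8, -0.2, 0.1, 0.5],
})

# Aggregate a metric over subsections of the corpus in one line.
by_section = df.groupby("section")["sentiment"].mean()
print(by_section)
```

From there, `by_section.plot.bar()` (or handing the frame to plot.ly, as suggested above) covers the visualization side.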

mark_l_watson 6 hours ago 0 replies      
One approach is to put text files in Amazon S3 and write map reduce jobs that you can run with Elastic MapReduce. I did this a number of years ago for a customer project and it was inexpensive and a nice platform to work with. Microsoft, Google, and Amazon all have data warehousing products you can try if you don't want to write MapReduce jobs.

That said, if you are only processing 2 GB of text, you can often do that in memory on your laptop. This is especially true if you are doing NLP on individual sentences, or paragraphs.

cafebeen 4 hours ago 0 replies      
Might be worth trying a visual analytics system like Overview:


There's also a nice evaluation paper:


chubot 5 hours ago 0 replies      
This sounds like an algorithmic issue. How many permutations are you generating? Are you sure you can scale it with different software tools or hardware, or is there an inherent exponential blowup?

Are you familiar with big-O / computational complexity? (I ask since you say your background is in the social sciences.)

A few GB's of input data is generally easy to work with on a single machine, using Python and bash. If you need big intermediate data, you can brute force it with distributed systems, hardware, C++, etc. but that can be time consuming, depending on the application.
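The blowup in question is easy to see directly:

```python
import math

# Permutation counts grow factorially, not linearly, with corpus size.
for n in (5, 10, 15, 20):
    print(f"{n:2d} items -> {math.factorial(n):,} orderings")

# 20 items already admit ~2.4 quintillion orderings; materializing them
# all is hopeless, which is why sampling a handful of (seeded, hence
# reproducible) permutations is the usual fix.
```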

jaz46 5 hours ago 0 replies      
I'd have to know a little more about your setup to be sure, but Pachyderm (pachyderm.io) might be a viable option. Full disclosure, I'm one of the founders. The biggest advantage you'd get from our system is that you can continue using all of those python and bash scripts to analyze your data in a distributed fashion instead of having to learn/use SQL. If it looks like Pachyderm might be a good fit, feel free to email me joey@pachyderm.io
tedchs 3 hours ago 0 replies      
Have you considered Google Bigquery? It's a managed data warehouse with a SQL-like query language. Easy to load in your data, run queries, then drop the database when you're done with it.
skadamat 8 hours ago 1 reply      
The first immediate thing I would recommend is moving all of your files into AWS S3: http://aws.amazon.com/s3/

Storage is super cheap, and you can get rid of the clutter on your laptop. I wouldn't recommend moving to a database yet, especially if you don't have any experience working with them before. S3 has great connector libraries and good integrations with things like Spark and Hadoop and other 'big data' analysis tools. I would start to go down that path and see which tools might be best for analyzing text files from S3!

cmarciniak 7 hours ago 0 replies      
More information would be helpful. In terms of having a data store that you can easily query text I would recommend Elasticsearch. Kibana is a dashboard built on Elasticsearch for performing analytics and visualization on your texts. Elasticsearch also has a RESTful api which would play nicely with your Python scripts or any scripting language for that matter. I would also recommend the Python package gensim for your NLP.
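For intuition about what tools like gensim automate under the hood, here is a dependency-free bag-of-words similarity sketch (gensim's real API is different and far more capable):

```python
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    common = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in common)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = ["the cat sat on the mat", "the cat ran", "stocks fell sharply"]
bags = [Counter(d.split()) for d in docs]
# Documents sharing vocabulary score higher than unrelated ones.
print(cosine(bags[0], bags[1]) > cosine(bags[0], bags[2]))  # True
```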
gt565k 5 hours ago 0 replies      
apache solr or elasticsearch
nodivbyzero 3 hours ago 0 replies      
grep, sed
developer1 2 hours ago 1 reply      
Does the NSA allow its employees to look for help from the general public like this? Seems odd for such a secretive organization to post publicly asking for help on how to parse our conversations.
Uber Drivers Deemed Employees by California Labor Commission techcrunch.com
560 points by uptown  16 hours ago   566 comments top 58
beering 15 hours ago 15 replies      
Skimming through the doc, court findings are:

1) Drivers providing their own cars is not a strong factor - pizza delivery employees also drive their own cars.

2) Uber "control the tools that drivers use" by regulating the newness of the car.

3) Uber exercises extensive control over vetting and hiring drivers and requires extensive personal information from drivers.

4) Uber alone sets prices, and tipping is discouraged, so there is no mechanism for driver (as "contractor") to set prices.

5) Plaintiff driver only provided her time and car. "Plaintiff's work did not entail any 'managerial' skills that could affect profit or loss."

6) Drivers cannot subcontract (presumably negating Uber's position as a "lead generation" tool for contractors).

Sorry that these are out of order. Look on Page 9 of court documents for full text.

grellas 14 hours ago 8 replies      
A few thoughts:

1. This is an appeal from a decision by a hearing officer of the California Labor Commissioner. Most of the time such officers spend their days hearing things such as minimum wage claims. Hearings do not follow the strict rules of evidence and are literally recorded on the modern equivalent of what used to be a tape cassette instead of by a court reporter. Such hearings might run a few hours or, in a more complex case, possibly a full day as the normative max. The quality of the hearing officers themselves is highly variable: some are very good, others are much, much less than good in terms of legal and analytical strengths. In a worst case, you get nothing more than a pro-employee hack. The very purpose of the forum is to help protect the rights of employees and the bias is heavily tilted in that direction. That does not mean it is not an honest forum. It is. But anything that comes from the Labor Commissioner's office has to be taken with a large grain of salt when considering its potential value as precedent. Hearing officers tend to see themselves as those who have a duty to be diligent in protecting rights of employees. Whether what they decide will ever hold up in court is another question altogether.

2. Normally the rules are tilted against employers procedurally as well. When an employer appeals a Labor Commissioner ruling and loses, the employer gets stuck paying the attorneys' fees of the prevailing claimant on the appeal. This discourages many employers from going to superior court with an appeal because the risk of paying attorneys' fees often is too much when all that is at stake is some minimum wage claim. With a company like Uber, though, the attorney fee risk is trivial and all that counts is the precedential value of any final decision. It will therefore be motivated to push it to the limit.

3. And that is where the forum matters a lot. The binding effect of the current Labor Commissioner ruling in the court is nil. The same is true of any evidentiary findings. The case is simply heard de novo - that is, as if the prior proceedings did not even occur. Of course, a court may consider what the hearing officer concluded in a factual sense and how the officer reasoned in a legal sense. But the court can equally disregard all this. This means that the value of the current ruling will only be as good as its innate strength or weakness. If the reasoning and factual findings are compelling, this may well influence a court. Otherwise, it will have no effect whatever or at most a negligible one.

4. What all this means is that this ruling has basically symbolic importance only, representing what state regulators might want as an idealized outcome. Its potential to shape or influence what might ultimately happen in court is, in my view, basically negligible.

5. This doesn't mean that Uber doesn't have a huge battle on its hands, both here and elsewhere. It just means that this ruling sheds little or no light on how it will fare in that battle. You can't predict the outcome of a criminal trial by asking the prosecutor what he thinks. In the same way, you can't predict the outcome here by asking what the Labor Commissioner thinks. In effect, you are getting one side of the case only.

6. The contractor/employee distinction is highly nebulous but turns in the end on whether the purported contractor is actually bearing true entrepreneurial risk in being, supposedly, "in business." There are a number of factors here that do seem to support the idea of true entrepreneurial risk but that just means there are two sides to the argument, not that Uber has the better case.

7. In the end, this will be decided in superior court and then, likely, on appeal to the California courts of appeal beyond that. It will take years to determine. In the meantime, the Uber juggernaut will continue to roll on. So the real question will be: should we as a society welcome disruptive changes that upset our old models or should we use the old regulations to stymie them? Courts are not immune from such considerations and, as I see it, they will apply the legal standards in a way that takes the public policy strongly into account. It will be fascinating to see which way it goes.

tomasien 16 hours ago 11 replies      
I'm curious to read this argument. For all the hand wringing over Uber drivers as 1099 workers, they seem to be the very definition of contractors. They provide their own equipment, keep their own hours, NEVER have to work if they don't want to, and face zero consequences for working or not working specific hours, etc. What is it about them that makes them employee-like? Anyone know?

Edit: it appears that the critical factor they considered was whether or not the driver could have operated their business independently of Uber. They said they could not. They also cited the fact that Uber controls the way payments are collected and other aspects of operations as critical to showing employment. http://www.scribd.com/doc/268946016/Uber-v-Berwick

Dwolb 15 hours ago 6 replies      
This is a good ruling for workers, but maintains society's status quo. That is, Uber has realized significant margin gains by pushing all risk of carrying passengers and car maintenance onto its drivers. Therefore, this risk is transferred to either drivers (who are on average not equipped to handle this risk) or insurance companies (who pass the costs on to their entire insurance pool) and not borne directly by Uber or its customer base. By classifying drivers as employees, risk becomes better aligned.

Now, what society is really missing out on is an opportunity or reason to transition from employer-based benefits to government or society-based benefits. This ruling will postpone a public discussion on the role of employer-based insurance and benefits.

nugget 16 hours ago 5 replies      
I wonder -- if Uber converted drivers in California to employees and dealt with the increased costs (passing them on to riders) but also prevented the now-employed drivers from driving with competing services (Lyft) -- whether the company wouldn't actually become even more valuable than they already are. If you are driving for both services but Uber comprises 80% of your volume and Lyft 20%, it's an easy decision to make. Given that the real asset for all these sharing economy companies is their elastic work forces (drivers and cars for Uber, residents and homes for Airbnb), the CLC may have just created an entrenched monopoly without realizing it.

Beyond that there is a really interesting debate as to whether sharing economy jobs are an end-run around minimum wage laws, rendering such laws meaningless for certain industries going forward. If the majority of workers are turned into 1099 consultants, but are doing effectively the same jobs (drivers, delivery people, etc) that employees did in the past, what does that mean for society?

dotBen 13 hours ago 0 replies      
Just to point out -- the California Labor Commissions ruling is non-binding and applies to a single driver, it's not a class-action or applies to anyone else. Reports of the demise of Uber due to 'all partner drivers now being employees' is grossly exaggerated. Uber is also appealing. [disclosure: I work for Uber]

(see http://newsroom.uber.com/2015/06/clcstatement/)

gmisra 12 hours ago 1 reply      
The right answer is that the "on demand economy" does not fit into existing labor structures, and trying to shoehorn these new jobs into current legal frameworks is probably doomed to confusion. This is especially complicated because, in the United States, too much of the social safety net is explicitly tied to employer-employee relationships (workers comp, unemployment, healthcare, etc).

What I want is confidence that somebody providing a service to me is provided these benefits - if you work 40 hours/week in "on demand" jobs, you should receive commensurate coverage from the safety net, and you should receive at least the mandated minimum wage. If you work 10 hours in a week, you should receive the pro-rated equivalents of those services. This is, of course, complicated - how do you account for people working two services at the same time, or the "uber on the couch" issue, or who pays for vehicles and other capital goods. But pretending that existing labor laws will cover the changing workforce is silly.

We hear all the time about how the nature of work, especially service work is changing. It seems like a logical consequence that the nature of how society classifies, supports, and regulates work should also change. Uber, et al, and their VC comrades have a huge opportunity to shape the future of how people work, and how the social safety net works - to effect real disruption.

Based on their actions, however, it is hard to conclude that Uber, et al are actually interested in this discussion, beyond the marketing rhetoric it enables. As far as I can tell, they view the friction between existing laws and their business model as a profit opportunity and not a leadership opportunity. And so the inefficient behemoth of government regulation will inevitably step in.

codecamper 10 hours ago 0 replies      
I'm surprised that Uber's practice of discouraging their drivers from driving for other providers was not called out.

From what I understand, if you are an Uber driver and you do not accept a call too many times, Uber will simply stop giving you ride requests. This effectively squashes a driver's desire to drive for other networks, because if he/she is busy with another network's ride when an Uber request comes in, he cannot accept it. Do that some unknown number of times, and you don't get more work from Uber.

jussij 14 hours ago 3 replies      
The only problem I have with Uber is they get away with not having to compete on a level playing field.

I live in Sydney Australia and catch a fair few taxis.

The taxi driver I use has to pay many hundreds of thousands of dollars to buy a taxi plate just to work (or work for someone who has bought such a plate), but the same Uber driver has no such overhead.

Also, that taxi driver has to carry insurance in case I'm injured while I'm in the cab, another cost the Uber driver does not have to cover with an insurance policy.

So the government has to decide: does it want to eliminate those costs and level the playing field, making it an effective free-for-all?

But politicians will never do that, because the first crash with a resulting insurance claim would bring the industry to its knees, and from that point on all hell would break loose.

At present the politicians just don't want to make a decision because it is just a little too hard.

steven2012 15 hours ago 1 reply      
The ruling is not unexpected at all. After the ruling against Microsoft back in the 90s on contracting, it's pretty clear that a business needs to be very careful how they hire contractors, so that they don't become implicit employees. Google has to jump through hoops so that their contractors aren't considered employees (only work for 1 yr max, etc).

I'm curious how much this will affect Uber and what it will do to their business model. If I had to speculate, it would be that it becomes unprofitable almost instantly, but they do have a gigantic warchest, so maybe they can fight the ruling or figure out another way to classify their drivers.

Maybe they can advertise fares and jobs ("This person wants to be driven from SFO to Mountain View") and drivers bid on it like an auction. I wonder if that might change the equation? But then it means that drivers will have a lot more friction in the process.

a-dub 14 hours ago 0 replies      
To be clear, this is not new regulation. This is a hearing that weighed the facts against the current set of laws as they are written. Under those laws, they're pretty clearly employees.

Changing the existing laws is a different issue entirely. There are serious pros and cons on both sides and the right answer is not obvious.

mikeryan 16 hours ago 3 replies      
I think Uber can maybe weather this storm but I wonder how this will trickle down to the smaller personal service players like TaskRabbit/Caviar/Luxe etc who employ independent contractors.
nemo44x 15 hours ago 2 replies      
Just throwing a hypothetical out there. What if this sticks and the drivers decide to organize with a union and collectively bargain a living wage, benefits, etc?

What would be the value of Uber (and related businesses)? Would it stay in business even? How many VC's would lose fortunes over Uber going nearly to 0? Would this be the popping of what some suspect is a private equity bubble as the effects of this ripple throughout?

Regardless, it would be a very different business with a very different valuation.

bdcravens 15 hours ago 1 reply      
Won't this mean that every single "employee" covered by this will have to file amended tax returns?
shawnee_ 13 hours ago 0 replies      
Beekeeping analogy: both Uber and Lyft are hives. The California Labor Commission's ruling does more to preserve hives in general (and thus the well-being of bees (drivers) as a whole), rather than any specific hive. Yeah, it's making things a little harder on one specific hive right now, but maybe this just means more hives will be popping up. It's the right call.

The phenomenon in nature is for bees to switch hives if theirs is in demise. "Any worker bee that is bringing in food is welcomed." [source: http://www.beemaster.com/forum/index.php?topic=8374.0]

kposehn 12 hours ago 0 replies      
From Uber:

> Reuters' original headline was not accurate. The California Labor Commission's ruling is non-binding and applies to a single driver. Indeed it is contrary to a previous ruling by the same commission, which concluded in 2012 that the driver performed services as an independent contractor, and not as a bona fide employee. Five other states have also come to the same conclusion. It's important to remember that the number one reason drivers choose to use Uber is because they have complete flexibility and control. The majority of them can and do choose to earn their living from multiple sources, including other ride sharing companies.

joshjkim 6 hours ago 1 reply      
You can estimate how much this will cost Uber in CA as follows: $0.56 multiplied by total miles driven by UberX drivers over all time.

It's almost as simple as that, since damages were given out almost entirely on those grounds.

I'll leave it to HN to figure out a guess on mileage =)

Some other interesting notes:

Plaintiff was engaged with Uber from July 23 to Sept 18, less than 2 months (p 2)

She worked for 470 hours in that time, so quite a bit (p. 6)

Damages broken down as follows: $0.56/mile reimbursement, for a total of $3,622, tolls for $256, interest of $274, for a total of $4,152 (p10)

Claims for wages, liquidated damages and penalties for violations were all dismissed (p11)
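A quick sanity check of those figures (the implied mileage is my own back-of-envelope estimate, not something stated in the ruling):

```python
# Figures as reported in the order above.
RATE = 0.56            # per-mile reimbursement rate applied by the Commission
mileage_reimb = 3622   # dollars awarded for mileage
tolls = 256
interest = 274

total = mileage_reimb + tolls + interest
print(total)                        # 4152, matching the total award
print(round(mileage_reimb / RATE))  # roughly 6468 miles implied in ~8 weeks
```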

encoderer 15 hours ago 1 reply      
Drivers are just temporary anyway. Uber is going to be the company to beat when autonomous cars make Autos as a Service a huge business. I see that, and not some low-margin package delivery service, as the driver of their future growth.
chx 13 hours ago 0 replies      
You all downvoted my comment three weeks ago: Uber is constantly trying to run from the law but eventually the law will catch up with them and finish this farce. Good.

Well, there you have it.

codegeek 16 hours ago 11 replies      
Isn't Uber's concept similar to AirBnB? Does this mean AirBnB users are also at risk of being classified as employees of AirBnB? With Uber, you drive your own car; with AirBnB, you rent out your own apartment.
randomname2 13 hours ago 0 replies      
Reuters and Techcrunch may have jumped the gun here. This ruling only applies to a single driver. Reuters has updated their headline accordingly now: http://www.reuters.com/article/2015/06/17/us-uber-california...

Uber's response:

"Reuters' original headline was not accurate. The California Labor Commission's ruling is non-binding and applies to a single driver. Indeed it is contrary to a previous ruling by the same commission, which concluded in 2012 that the driver performed services as an independent contractor, and not as a bona fide employee. Five other states have also come to the same conclusion. It's important to remember that the number one reason drivers choose to use Uber is because they have complete flexibility and control. The majority of them can and do choose to earn their living from multiple sources, including other ride sharing companies."

-- Uber spokeswoman

paulsutter 7 hours ago 0 replies      
If this sticks, it just means that Uber drivers will get paid less in cash, more in benefits, and lose the ability to take business tax deductions.

Is that really better for the drivers? Sounds worse to me.

I ask because many people have been claiming Uber is a bad actor for making drivers contractors, but it's not clear to me that it's a big win for the drivers to be classified as employees. Actually it seems worse in many ways.

mayneack 16 hours ago 2 replies      
This makes all the R&D into driverless cars worth it. Driverless cars can't be employees.
sudioStudio64 10 hours ago 0 replies      
The regulations on taxi drivers exist for a reason. Uber found a way to skirt some of those regulations for a time. Avoiding that regulation created a revenue stream that they used to operate and grow. It was always in danger of regulators catching up to them.

If you read some of the drivers' reports, it becomes hard to really buy Uber's "big taxi" schtick. That being said, they obviously provided something that people want. Taxi companies will have to adjust to this. (In some places, like SF, they already are.) In the end, I think that Uber will go the way of Napster and the taxi companies will end up adopting its techniques the way the big record companies did.

jacquesm 12 hours ago 0 replies      
This is going to put a knife into a lot of 'modern' start-ups. The model where the company sets the prices, controls the payments, and so on, while the people doing the work are contractors without employee protections or benefits, underlies many of the new middlemen's business models.
pbreit 12 hours ago 0 replies      
I tend to agree with Uber here: http://newsroom.uber.com/2015/06/clcstatement/

I don't think this ruling will have much of an impact on anything.

Spearchucker 14 hours ago 10 replies      
Uber and Lyft are operating a model that clearly works for customers. And yet they rightly face legal issues.

What perplexes me is that existing taxi companies, which are licensed and otherwise compliant with the law, don't adopt the best parts of Uber and Lyft.

Why can't I call a black cab in London the way I call a ride from Uber?

DannyBee 15 hours ago 2 replies      
This should be 100% not shocking to anyone (including Uber, I expect). Given their recent executive hires, I'm sure they saw this coming, and already have an appeal strategy.

From a legal standpoint, riding the edge rarely works. Look at what happened to Aereo.

todd3834 15 hours ago 0 replies      
I was asking about this a little while back and no one chimed in: https://news.ycombinator.com/item?id=9551467
anigbrowl 9 hours ago 0 replies      
One of the downsides of so many tech companies being privately held due to the unfashionability of IPOs (per the Andreessen Horowitz slide deck the other day) is that we lose out on the price signals that a stock market listing would normally provide in response to something like this.
smiksarky 14 hours ago 0 replies      
Control is the key factor when determining 1099 vs. W-2 status. FedEx has been doing this for many years... which is why they just got fined a shit-ton. I think Uber drivers will just unionize in the near future, causing all of their future driving 'employees' to get paid more, thus cutting profits, resulting in either a new business model which again bends the rules, or a return to the way things were in the good ol' yellow cab.
jleyank 8 hours ago 0 replies      
Stupid question, perhaps, but IANAL... How does the Uber situation differ from the contractor situation Microsoft dealt with 5-10 years ago? If there's no significant difference, isn't this all settled case law?
beatpanda 6 hours ago 0 replies      
I hope this spells the end of labor exploitation as an "innovation" strategy by "technology" companies. It's getting sickening.
yueq 14 hours ago 2 replies      
What are the impacts to Uber if drivers are employees?
pbreit 13 hours ago 0 replies      
I don't think this holds. Either Uber makes slight changes that comply, some other legal body re-evaluates, or laws are changed slightly to accommodate.

This is too powerful a concept to dismantle so easily. Being able to pick and choose when you work and still make decent earnings is very useful to society.

marcusgarvey 13 hours ago 0 replies      
How long will Uber's appeal take?

What happens if they lose?

Can other jurisdictions use this finding to change the way Uber operates?

bhouston 16 hours ago 5 replies      
What is the expected cost to Uber of this change?
gregoryw 10 hours ago 0 replies      
Anyone who has hired contractors knows where the line is. The damning part is that the drivers have Uber-provided iPhones in their cars. You have to provide your own equipment to be a contractor.
randomname2 11 hours ago 0 replies      
HN mods:

TechCrunch has retracted their original headline, as this ruling only applied to a single driver. Could we get the HN headline updated accordingly to "Uber Driver Deemed Employee By California Labor Commission"?

reviseddamage 14 hours ago 1 reply      
Despite this, Uber will still take over the taxi industry, albeit perhaps a bit more slowly. The quality control Uber exercises over all its components will continue to attract the market to its services and keep winning market share.
c-slice 15 hours ago 0 replies      
Who is Rasier LLC? This seems to be some sort of shell corp for Uber's insurance.
jellicle 15 hours ago 2 replies      
This was obvious from the beginning. There's really not the slightest doubt that all government authorities are going to classify Uber drivers as employees, except perhaps a few that might be bribed/pressured into not doing so.

Uber controls every aspect of the business, from the fares charged (and how much profit Uber will take from each) to the route taken to the conditions of the vehicle to preventing subcontracting. It isn't even close or arguable. As the ruling points out, these people aren't independent drivers with their own businesses that just happen to have engaged in a contract with Uber, nor could Uber's business exist without them.

The short version:


brentm 15 hours ago 0 replies      
This feels like an inevitability at their scale. Outside of the tech world it was always going to be hard to sell their labor pool as contract when put under the microscope.
bickfordb 14 hours ago 1 reply      
If all California Uber drivers become employees, wouldn't Uber be on the hook for reimbursing past Uber drivers for all past vehicle expenses?
jkot 15 hours ago 1 reply      
Does it not complicate things for drivers as well? As employees they may have to tax entire income from Uber, including car expenses, maintenance etc..
6d6b73 14 hours ago 0 replies      
If the ruling holds, Uber is finished. It will have to play by the same rules any taxi company has to follow.
hiou 15 hours ago 0 replies      
Curious to see what the public markets will do in response. Nasdaq is taking a bit of a slide as we speak.
dylanjermiah 15 hours ago 0 replies      
Terrible for drivers, riders, and Uber.
big_data 8 hours ago 0 replies      
Better hurry up with that IPO ...
aikah 15 hours ago 1 reply      
Oops ... here goes Uber's competitive advantage ...
louithethrid 15 hours ago 1 reply      
Uber always was a desert flower. The field it disrupted was scheduled to vanish with automated cars anyway.
phpdeveloperfan 15 hours ago 0 replies      
I'm surprised this happened, though I feel like it'll be appealed heavily.
zekevermillion 15 hours ago 1 reply      
surely Uber is prepared for this eventuality and has a strategy ready to go
rebootthesystem 14 hours ago 3 replies      
A point is being lost here in a major way: We need to, as much as possible, get government out of our homes and businesses, or they will continue to bury us under so much muck that we'll asphyxiate.

Seriously, what the hell does government have to do with the relationship between me and my source of income? I should be able to do whatever I want, whenever I want, and at whatever rate I choose to work for, so long as it isn't illegal in some fundamental way (fraud, theft, murder, burglary, etc.). Beyond that they should stick to painting white and yellow lines on the roads and changing light bulbs on road signs, thank you very much.

It is just incredible to see how our own government looks for every possible angle they can find to destroy progress. I am not defending Uber and their practices. I've never used the service (tried, but it's not available where I am). I am simply using them as an example of a fantastic, innovative company trying to find a better way to do something. Instead of our government helping facilitate the exploration of solutions that could advance society and make life better, simpler, healthier, whatever, they become our own worst enemies.

Who the hell do they think they are? They work for US. We don't work for them. We are not their slaves.

Folks, wake the fuck up. Next election you need to send a solid message to everyone in government that they had better truly start working for us or they are gone. The way you do that is to support moderate Libertarian candidates. Moderate is the key here; the extremists in any party are friggin' insane.


Look at what is happening here in California. We are going to BURN a hundred billion dollars (likely more) building a joke of a high speed train to nowhere and NOBODY is stopping it. Why? Because you are watching government greasing unions to gain votes and favors. The whole thing is sick beyond recognition.

astaroth360 12 hours ago 0 replies      
Please, please let Uber get pwned in the face for their general combative business practices :D

Seriously though, they give the "ride sharing" economy a bad name.

ThomPete 16 hours ago 2 replies      
Considering that Uber is eventually going to replace them all with self-driving cars, I think it's only fair that the people who helped make Uber so valuable get part of the spoils of being in a company that grows this fast. I am assuming this also means healthcare.
dylanjermiah 16 hours ago 3 replies      
"Uber is said to have more than a million drivers using the platform across the globe."

If this ruling sticks, many of those drivers will no longer have a position.

Karunamon 16 hours ago 5 replies      
So let me get this straight:

* People sign up with Uber

* They drive literally whenever they want

* Uber has no standards for their drivers other than "get good ratings" and "pass a background check"

..and they're considered employees? WTF?

Inside an Official GameBoy Dev Cartridge hentenaar.com
72 points by danso  17 hours ago   8 comments top 3
PebblesHD 2 hours ago 0 replies      
Those are some of the most visually appealing circuit diagrams I've seen in a long time.
tsomctl 7 hours ago 2 replies      
So how did Nintendo intend you to program this? I believe their official SDK ran on DOS, so was there an ISA card that let you program this?
coldpie 10 hours ago 3 replies      
Man, I wish I understood ICs. I should've taken a few EE courses in college.
Microsoft Shakes Up Its Leadership and Internal Structure techcrunch.com
148 points by rbanffy  16 hours ago   100 comments top 12
Animats 12 hours ago 6 replies      
Bing isn't even mentioned.

There used to be a CEO of Bing (Qi Lu and Nadella had been in that slot), but last year Bing was split up under 5 VPs; there's no longer a real Bing organization.

It's strange. Bing has 20% search market share under its own name, plus 13% under the Yahoo brand. That's 33%. Google has 64%, so Bing is now more than half the size of Google in terms of searches. That's OK market share; it's like being Chrysler vs. General Motors.

Bing's profits, though, are awful. Microsoft apparently loses money on Bing.[1] It's hard to tell from the way Microsoft reports online services. Google's advertising revenue is 18 times Microsoft's. That's Bing's problem: nobody buys Bing ads. It's surprising, given that market share, that Bing can't fix this.

[1] https://www.ventureharbour.com/visualising-size-google-bing-...

Roritharr 15 hours ago 9 replies      
As per my comment on the Stephen Elop Thread:

An ex-Microsoft manager went to Nokia, destroyed its market value, sold the debris back to Microsoft, and is now leaving Microsoft again. Time to short whatever company he goes to next. ;)

smithkl42 14 hours ago 2 replies      
Interesting to see that Scott Guthrie is now getting talked about in the same breath as the MS senior management. His career is on a tear. Hopefully he can continue to help with the turnaround.
VieElm 14 hours ago 1 reply      
Microsoft is separating applications and Windows. Is that a new thing? Does this mean Outlook and Office are no longer part of the same internal organizational structure they used to be? If this is true and a new development, I think that's going to be very good for Microsoft. I feel like Windows, while still a great workhorse OS, has been like an anchor weighing Microsoft applications down on other platforms.
sdar 13 hours ago 0 replies      
Takeaways and what I'll watch for:

Kirill failed to cloudify. Qi isn't interested in the Dynamics business. Benioff couldn't get on-boarded. Guthrie is happy to step in.

* Azure can improve with Dynamics. Can the Dynamics business improve with Guthrie?

* Will cloud revenue reporting get more "obfuscated" in quarters to come?

Terry's and Elop's orgs were a) building cohesive/unified experiences and b) fighting conspiring threats to their long-term business solvency. Consolidation chose the most prominent leader.

* Does the bench of Terry's replacements change?

* Could it be the first step towards scaling down hardware and devices?

fredkbloggs 14 hours ago 1 reply      
It's standard practice at any large company to make org changes at least once a year, and is considered mandatory at any that's "struggling"; i.e., not growing, losing money, and/or flopping with high-profile new products or acquisitions. It doesn't necessarily reflect any real change in product mix, day-to-day life for the rank and file, or any high-minded view of how the company ought to be run. It's just what CEOs do to demonstrate that they're "doing something" so they can keep their highly lucrative jobs a while longer. The departure of Elop was expected and is also standard for CEOs of acquired companies at some point from a few months to a few years after the deal closes. So all in all, nothing to see here.
njloof 14 hours ago 0 replies      
> I dont think that Microsoft is shedding its most popular executives.

And perhaps some "ding, dong, the witch is dead" from Redmond over those that are departing?

q2 11 hours ago 1 reply      
It seems the new Microsoft is sharply focused on a consumer part (aimed at Apple and Google) and an enterprise part (aimed at Amazon's cloud and others). The article conveys the same.

I won't be surprised if Bing is handed over to Yahoo completely.

edwinnathaniel 14 hours ago 2 replies      
Dynamics CRM approaching $2B and the Head is cut... :(
pjmlp 13 hours ago 1 reply      
Can we please have .NET, C++ and Windows on the same unit to avoid the usual issues among them (e.g. Longhorn)?
deciplex 5 hours ago 0 replies      
Who was responsible for the Windows 10 nag that appeared in my system tray a few weeks back? Have they been sacked?
notNow 9 hours ago 1 reply      
The "Great Purge" by Nadella to get rid of all of Balmer era old guards so Nadella & Co can reign supreme in Microsoft.
Add bytecode cache to Ruby github.com
73 points by ksec  13 hours ago   38 comments top 8
haberman 13 minutes ago 0 replies      
I wrote a benchmark that measures the speed of various VM parsers and the speedup that precompiling brings. I found that precompiling was a huge speed benefit: http://blog.reverberate.org/2014/10/the-speed-of-python-ruby...
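For Ruby specifically, the compile-once/load-binary idea behind such a cache can be sketched with the public RubyVM::InstructionSequence API (available since Ruby 2.3). This is only an illustrative sketch of the mechanism, not the implementation in the linked pull request:

```ruby
# Sketch of what a bytecode cache buys you: compile Ruby source to an
# instruction sequence once, serialize it, and later skip lexing and
# parsing entirely by loading the serialized form.

source = "def add(a, b); a + b; end; add(19, 23)"

# Compile the source and serialize the resulting instruction sequence;
# a real cache would write this String to disk, keyed by file and mtime.
iseq   = RubyVM::InstructionSequence.compile(source)
binary = iseq.to_binary

# On a later boot: deserialize and run without touching the parser.
cached = RubyVM::InstructionSequence.load_from_binary(binary)
puts cached.eval  # => 42
```

Whether this wins in practice depends on how the I/O cost of reading the binary compares to the CPU cost of parsing, which is exactly the trade-off debated in this thread.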
3JPLW 12 hours ago 1 reply      
What's the relationship between github/ruby and ruby/ruby? It looks like they've diverged quite a far ways away from each other, but that might just be an artifact of which branches GitHub uses when comparing the two.
aaronbrethorst 1 hour ago 0 replies      
Does anyone know, offhand (or have a good educated guess), on what the largest monolithic Ruby codebase in existence is? Is it GitHub?
daurnimator 7 hours ago 2 replies      

The Lua community has found that bytecode is actually slower to load than it is to generate from source: the extra latency of loading the (larger) bytecode from disk/SSD/flash exceeds the CPU time to lex/parse.
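Whether the same holds for Ruby is easy to probe; a rough micro-benchmark sketch (the workload is synthetic and numbers will vary by machine, storage, and file size) using the stdlib Benchmark module:

```ruby
require "benchmark"

# Compare re-parsing source each time against loading serialized
# bytecode. This only measures in-memory work; a real cache also reads
# the (larger) binary form from disk, which is the latency the Lua
# community's observation is about.
source = "x = 0\n" + ("x += 1\n" * 2_000)  # synthetic "large" file
binary = RubyVM::InstructionSequence.compile(source).to_binary

n = 50
Benchmark.bm(10) do |bm|
  bm.report("parse")    { n.times { RubyVM::InstructionSequence.compile(source) } }
  bm.report("load bin") { n.times { RubyVM::InstructionSequence.load_from_binary(binary) } }
end
```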

JulianWasTaken 6 hours ago 2 replies      
Hah. .pyc files are one of the worst parts of Python for developers.

export PYTHONDONTWRITEBYTECODE=true is the first thing anyone should be doing.

I guess it figures that we copy each other's mistakes.

Arnor 10 hours ago 2 replies      
How significant is the performance impact on a mid-large Ruby on Rails application?
claudiug 12 hours ago 3 replies      
What are the advantages of adding a bytecode cache?
Mojah 8 hours ago 2 replies      
This is the equivalent of APC or OpCache in PHP's world?
DuckDuckGo on CNBC: We've grown 600% since NSA surveillance news broke technical.ly
607 points by wnm  22 hours ago   229 comments top 36
vixsomnis 19 hours ago 8 replies      
Yes, the search results aren't that good, but they're good enough. A single search almost always gets what I'm looking for on the front page or the entries immediately visible, which is impressive considering how little DDG knows about me.

Add in the !bang feature for searching most websites (classics like !w - Wikipedia, !g - Google, and stuff like !gh - GitHub, !aur - Arch User Repository) and my favorite "define X" keyword that links straight to Wordnik, and my search experience is better than Google.

The !bangs also function as bookmarks, so if I ever want to go to GitHub, I can just search !gh and it'll take me there. It's like having a set of search engines stored universally, accessible from any device with web access.

And of course if I need Google, say for word etymologies, it's just a !g away.

click170 16 hours ago 1 reply      
I switched my default search to DDG and haven't thought twice about it.

It's maybe once every couple of days that I have to use "!g" to get Google results; for everything else DDG works excellently. Even the times when I have to use "!g", it's often a hint that I'm searching for an unpopular phrase, and I find that if I rephrase my search I get much better results out of both search engines.

I remember there was a story on HN a few months back where a kind soul from DDG posted an email address that one could submit notes to, highlighting poor search results so that they could address them. I don't recall the email and haven't been able to find it. If this is still available with DDG, could someone please re-post that email here? I would very much like to help improve the quality of DDG to make it better for everyone, but I can't find anywhere to suggest improvements on their website.

Edit: I was able to find their Feedback page, but I much prefer email personally: https://duckduckgo.com/feedback

simias 18 hours ago 4 replies      
I've been using ddg as my main search engine for close to a year I think. It's definitely not as good as google but it's good enough most of the time.

My main concern is that it's still a free service and I really don't see how it'll be sustainable in the long term without compromising privacy in their current model. If you're not the customer you're the product etc...

I'd gladly pay $10 a month for a "premium" search engine with strong privacy guarantees. I'm definitely not going to enable ads in DDG and I can't imagine that the average DuckDuckGo user thinks differently.

VMG 20 hours ago 6 replies      
As a governmental spy organization, why wouldn't you just put surveillance on the search engine that is used by people that have "something to hide" (in their mind) and also put a gag order on the operators of that service?
BuckRogers 46 minutes ago 0 replies      
I like having my default search being anonymous DDG. 99% of my searches I never even bother with getting another engine's results.

For Google results, in the very rare case that I need to see what they have, I use "searchthis !sp" to get anonymous Google results from Startpage. I used SP fulltime previously but they've had reliability issues and an odd issue with the back button.

Most !bangs that I use are for !w, !a and !gm. Firefox's dedicated search bar helps a lot with editing a long search string with another bang. Doubtful I'll be moving off DDG due to that feature.

finnjohnsen2 17 hours ago 2 replies      
I should use DuckDuckGo, but when I need to search I'm too deep in some mindset and context to let it be broken by the poor search results DDG gives me. So my life has come to the point where I knowingly give all my search data to someone I know spies on me, 24/7/365.
mrweasel 19 hours ago 0 replies      
>"If you're not collection user information, how are you going to make money? How are you going to become a big brand that people can trust long term?

I know that the interviewer has to ask at least the first part to get the interview going, but it also highlights everything that's wrong in the thinking around online ads/marketing.

If you need to collect user information to make money, then perhaps your product isn't that great to begin with (unless you're an ad company like Google, but then we get into the argument of who the user is).

Also, if you're collecting information so you can grow to become a big brand, I would argue you've thrown trust out the window a long time ago.

aidos 20 hours ago 1 reply      
As always, you can see the actual ddg traffic numbers on their website


antris 18 hours ago 4 replies      
DDG is based in the US, therefore it is entirely possible that they have been ordered to keep logs and track everything the users do on that site with a gag order.

Being based in the US is a dealbreaker for privacy.

joelrunyon 6 hours ago 1 reply      
This might be a random point, but I'm curious if DDG will have to change their branding to be accepted beyond the tech space into mainstream searching.

I feel like "duck duck go" is too long for the avg american to grasp or use in an ongoing convo when compared to "google", "bing" or "yahoo."

I can't see people saying "just duck duck go it." Maybe something like DDG or "duckduck" instead?

Maybe that's just me...

zawaideh 16 hours ago 1 reply      
The only thing stopping me from using DDG is the inability to limit search results by date. I want search results from a week ago, a month ago, a year ago.

I know they have sort by date, but this just sorts by date without taking into account how relevant the result is.

factorialboy 20 hours ago 2 replies      
IMHO 600% is a meaningless number.

What's the estimated market share of DuckDuckGo today? That's the real question.

Do they dominate a niche? I think they have significant market share among HackerNews users.

brianzelip 11 hours ago 1 reply      
Where is DDG located?

I can't watch the video, about which the text reads "The news anchor just can't resist a little jab about DuckDuckGo's location choice".

castell 17 hours ago 1 reply      
Does DuckDuckGo still use Yahoo BOSS search API? (based on Bing)

$1.80 / 1000 queries:


josefresco 17 hours ago 1 reply      
One feature that would allow me to eventually move to DDG would be a "toggle" of sorts within my browser that would allow me to switch to DDG results (from Google).

Deciding before my query to use DDG is a hard habit/practice to employ. However, when shown results from Google, if DDG results were just a click away (maybe already rendered in another "tab"), it would make A/B testing easy and seamless, which would be essential to eventually moving away and changing my "default" search engine.

Just my $0.02

akhatri_aus 20 hours ago 21 replies      
What is HN's opinion on the quality of the search results?
nvk 18 hours ago 0 replies      
DDG's search results have substantially improved since a few months ago. I now use it as my primary search engine.
newscracker 13 hours ago 0 replies      
DDG is my default on some browsers and machines, but it still lacks a lot in relevant results. I find myself using startpage.com or even Google (the latter for better image searches) very often. The lack of a date based search is a huge disadvantage since I use that very often in other engines.

Lately, DDG has also been quite slow for me and doesn't load at all for several seconds. Overall, I love the privacy part, but it's not as useful as a search engine ought to be for my usage. So I'm unable to quit the other alternatives, even though I badly want to.

rurban 19 hours ago 2 replies      
They do log the queries, as Google does. You can only trust a search engine when they stop logging.

Any NSL can order them to hand over the logs in certain regimes (bulk or per IP? We know what happened), but it cannot force them to write logs in the first place. Without logging it will also be ~10% faster.

FrankenPC 12 hours ago 0 replies      
I love what DDG represents and I try it first just to show my support. But, if I can't find what I'm looking for I activate VPN, open an incognito window and search with Google. Civilian OPSEC is painful. But I refuse to give up my freedom. I wish there was an application/URL level VPN option. That would solve a lot of problems.
luckydude 15 hours ago 1 reply      
This is just a me too comment, but one of my guys told me two days ago that duckduckgo is good enough. So I switched to it and so far I'm liking the results.

And is it just me, or is it actually faster when you click on one of the results? Whenever I do that with Google, it seems like there is a delay while Google does some analytics or something.

blerud 10 hours ago 0 replies      
Is it possible to append all regular ddg searches with !g? I'd like to use Google for my searches but still be able to use the ddg bangs.
chjohasbrouck 11 hours ago 1 reply      
I choose Google because ctrl+t g o <enter> flows better than ctrl+t d u <enter>.

Every time I've tried to switch to DuckDuckGo, this has been the primary stumbling point.

shmerl 15 hours ago 0 replies      
I use DDG, and it works very well most of the time, but for some obscure and very targeted searches Google still beats it by a big margin (especially since Google has time filtering and etc.). So in such cases I just add a !g :)
k2enemy 17 hours ago 0 replies      
Does anyone happen to know how to make it so that the preview of a DDG search result is not a link to the result? I often want to copy and paste something from the preview, but having it as a link makes that difficult.
dude_abides 10 hours ago 0 replies      
I wish the headline was s/600%/from 1.5M to 7M in 2 years/

No less impressive, and so much more informative!

pwenzel 12 hours ago 0 replies      
As a Minnesotan, I'm not going to use this service until it's named the more suitable Duck Duck Grey Duck.
SalesHelp 14 hours ago 0 replies      
Way to go Gabriel! A unicorn who is a really nice, genuine guy who wants to help start-ups.
bane 18 hours ago 0 replies      
The privacy aspects are not that important to me, I just want to get out of Google's increasingly irrelevant search results. I tried switching to DDG a couple years ago and it was a pretty meh experience, so I went back to Google.

But, I tried again just a couple months ago (I went whole hog and changed the search engine in chrome to ddg) and have been very impressed with it. It's been continuously worked on enough that it now serves about 90-95% of my daily search needs without any fuss and I actually prefer the way it presents images, semantic search and videos in search results over google's. It does a much better job at returning results for what I'm actually searching for and that's awesome.

For example, in Google, if I search for "Mad Max" I get showtimes for "Mad Max: Fury Road" at the top and an IMDB-like bit of information for "Mad Max: Fury Road" on the right (neither of which I searched for), and then a list of search results which these days are increasingly just links to Wikipedia's take on whatever I'm searching for (this time "Mad Max" and "Mad Max (franchise)"), followed by news on "Fury Road", the "Fury Road" video game, IMDB links to "Mad Max (1979)" and "Mad Max: Fury Road (2015)", etc., then trailers on YouTube for both movies and links to the movie sites, and so on.

It's okay, I suppose, but Google first assumes I'm looking for "Mad Max: Fury Road" and fills the results with that. Then I get links to WP and IMDB on the same topic (I could have just gone to those), except the WP link is not for Fury Road. And why no love for Thunderdome?

Guess what happens when I search DDG? I get a list of possible meanings, the first of which is "Mad Max", not Fury Road (that's #2), then a list of other possible meanings (which include Fury Road, the video game, the franchise, etc.). This is awesome: it's not assuming which meaning I want and thereby getting it wrong like Google, and the list of possible meanings is better ordered. Then the search results are better too: of course the prerequisite IMDB and WP links are there, but the top 4 results are for "Mad Max" (or the franchise) and not for "Fury Road"... I'm actually getting results for what I searched for, not for what it thinks I searched for. The mix of results after that is also "better" to my eyes; it includes a large fan site (which Google doesn't ever seem to get around to), Amazon, eBay, games, non-WP fan wikis, reviews, and so on.

Google seems intent on shoving the latest thing the film studio marketing departments are currently pushing, while DDG provides links to information on what I actually searched for.

I've found this to be true for most of my searches. DDG is actually finding what I want instead of what Google wants.

About the only times I'm finding I'm using Google any more is in two cases:

* I've exhausted DDG's results and want to see if Google's bigger index has something else.

* Google's more sophisticated time constraints on searches. DDG just lets me order results, but Google lets me slice out results between time ranges, which I often find more useful for research purposes.

Bonus: Privacy. Again, not my main interest, but it's nice that it's there. !Bang syntax: I don't use many of them, but I find them useful (it's also how I execute Google searches; just put a !g before my search in DDG).

Wishes: time-slicing for search, and some way to make it my default in mobile Chrome on my Android devices.

nfoz 14 hours ago 0 replies      
People should not have to change their behaviours like this in order to avoid the intrusion of their own government.
whoisthemachine 18 hours ago 1 reply      
Once I learned about the !bangs, I switched immediately. Usually the results are "good enough", and when not, I try a !bang.
ljk 15 hours ago 0 replies      
What's HN's opinion on this screenshot from 4chan that suggests not using DDG? http://a.1339.cf/xaikik.png

I've been using DDG since almost the beginning, so I'm kind of conflicted...

tiatia 18 hours ago 0 replies      
It has gotten faster. I somehow like DDG but finally got stuck with www.startpage.com
adwordsjedi 16 hours ago 0 replies      
So are they up to like 600 or 1200 users now?
callum85 17 hours ago 1 reply      
The NSA news broke a long time ago, and 600% over that period doesn't sound like outlandish growth for a startup. Or maybe it is; I don't know. It would be good to see this figure over time on a chart; then we could see if there is any change in growth rate.
jister 20 hours ago 6 replies      
The rest of the world doesn't care about NSA surveillance, so 600% doesn't really mean anything. Hell, a lot of people don't even know what Bing is!
Internetarchive.bak archiveteam.org
52 points by edward  15 hours ago   3 comments top 3
sp332 3 hours ago 0 replies      
Current status: http://iabak.archiveteam.org/

Helping out is super easy: On Linux or Mac, git clone https://github.com/ArchiveTeam/IA.BAK/ and run the iabak script. It will walk you through setup.


userbinator 5 hours ago 0 replies      
This is what 21 (decimal) PB looks like in terms of Backblaze storage pods:


Boffins reveal password-killer 0days for iOS and OS X theregister.co.uk
482 points by moe  20 hours ago   132 comments top 19
tptacek 14 hours ago 4 replies      
It's not bad work but it looks like The Register has hyped it much too far. Breakdown:

* OSX (but not iOS) apps can delete (but not read) arbitrary Keychain entries and create new ones for arbitrary applications. The creator controls the ACL. A malicious app could delete another app's Keychain entry, recreate it with itself added to the ACL, and wait for the victim app to repopulate it.

* A malicious OSX (but not iOS) application can contain helpers registered to the bundle IDs of other applications. The app installer will add those helpers to the ACLs of those other applications (but not to the ACLs of any Apple application).

* A malicious OSX (but not iOS) application can subvert Safari extensions by installing itself and camping out on a Websockets port relied on by the extension.

* A malicious iOS application can register itself as the URL handler for a URL scheme used by another application and intercept its messages.

The headline news would have to be about iOS, because even though OSX does have a sandbox now, it's still not the expectation of anyone serious about security that the platform is airtight against malware. Compared to other things malware can likely do on OSX, these seem pretty benign. The Keychain and BID things are certainly bugs, but I can see why they aren't hair-on-fire priorities.

Unfortunately, the iOS URL thing is I think extraordinarily well-known, because for many years URL schemes were practically the only interesting thing security consultants could assess about iOS apps, so limited were the IPC capabilities on the platform. There are surely plenty of apps that use URLs insecurely in the manner described by this paper, but it's a little unfair to suggest that this is a new platform weakness.

andor 19 hours ago 5 replies      
Quick summary of the keychain "crack":

Keychain items have access control lists, where they can whitelist applications, usually only themselves. If my banking app creates a keychain item, malware will not have access. But malware can delete and recreate keychain items, and add both itself and the banking app to the ACL. Next time the banking app needs credentials, it will ask me to reenter them, and then store them in the keychain item created by the malware.
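[Editorial note: the delete-and-recreate flow in this summary can be modeled as a toy simulation. All names below are made up for illustration; the real attack goes through the OS X Security framework (SecItemDelete/SecItemAdd), not this API.]

```python
# Toy model of the keychain ACL attack described above.
# Illustrative only -- this is not the Security framework API.

class Keychain:
    def __init__(self):
        self.items = {}  # name -> {"acl": set, "secret": value}

    def delete(self, name):
        # Any app may delete an item, even one it cannot read.
        self.items.pop(name, None)

    def add(self, name, acl):
        # The creating app chooses the ACL for the new item.
        self.items[name] = {"acl": set(acl), "secret": None}

    def set_secret(self, name, app, secret):
        if app in self.items[name]["acl"]:
            self.items[name]["secret"] = secret

    def read(self, name, app):
        item = self.items[name]
        return item["secret"] if app in item["acl"] else None

kc = Keychain()
kc.add("bank-password", acl={"bank-app"})  # legitimate item
kc.set_secret("bank-password", "bank-app", "hunter2")

# The attack: delete the item, recreate it with malware on the ACL,
# then wait for the victim app to repopulate it.
kc.delete("bank-password")
kc.add("bank-password", acl={"bank-app", "malware"})
kc.set_secret("bank-password", "bank-app", "hunter2")  # user re-enters

print(kc.read("bank-password", "malware"))  # -> hunter2
```

The key point the simulation shows: the victim never notices anything beyond a re-login prompt, because the item it writes to looks like its own.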

MagerValp 18 hours ago 1 reply      
That paper is rife with confusing or just plain wrong terminology, and the discussion jumps between Android, iOS, and OS X, making it really hard to digest. I think these are the bugs they have discovered, but if anyone could clarify that would be great:

The keychain can be compromised by a malicious app that plants poisoned entries for other apps, which, when they store entries that should be private, end up readable by the malicious app.

A malicious app can contain helper apps, and Apple fails to ensure that the helper app has a unique bundle ID, giving it access to another app's sandbox.

WebSockets are unauthenticated. This seems to be by design rather than a bug though, and applications would presumably authenticate clients themselves, or am I missing something?

URL schemes are unauthenticated, again as far as I can tell by design, and not a channel where you'd normally send sensitive data.

therealmarv 19 hours ago 8 replies      
So Apple was aware of this for 6 months and is doing NOTHING, not even communicating?! How seriously do they take security and fixing it (at least within 6 months)?
SlashmanX 18 hours ago 1 reply      
The paper in question: http://arxiv.org/abs/1505.06836
StavrosK 19 hours ago 4 replies      
"Boffins"? Isn't that rather dismissive, as in "oh, look at what those crazy boffins cooked up now!"?
drtse4 19 hours ago 1 reply      
From the paper:

> Since the issues may not be easily fixed, we built a simple program that detects exploit attempts on OS X, helping protect vulnerable apps before the problems can be fully addressed.

I'm wondering if the tool is publicly accessible, couldn't find any reference to it.

dodongogo 14 hours ago 2 replies      
It sounds like a temporary fix for the keychain hack on iOS would be to never use the SecItemUpdate keychain API, and always use SecItemDelete followed by SecItemAdd with the updated data, which, according to http://opensource.apple.com/source/Security/Security-55471/s...:

> @constant kSecAttrAccessGroup ...Unless a specific access group is provided as the value of kSecAttrAccessGroup when SecItemAdd is called, new items are created in the application's default access group.

If I understand this correctly that would always make sure that when an existing entry is updated in an app, the 'hack' app would again be restricted in being able to access the entry's data. It could still clear the data, but wouldn't be able to access the contents.

The paper seems to note this as well:

> It turns out that all of [the apps] can be easily attacked except Todo Cloud and Contacts Sync For Google Gmail, which delete their current keychain items and create new ones before updating their data. Note that this practice (deleting an existing item) is actually discouraged by Apple, which suggests to modify the item instead [9].

coldcode 16 hours ago 0 replies      
The first defense they can perform is to change the automatic checks in the App Store review process to identify the attack in a malicious app and stop it from being approved. This could be fairly easy; of course, Apple doesn't tell anyone what they do in this process, so we have no way to verify it. Still, you have to identify how the attack could be hidden, but since it uses known API calls in an uncommon way, I think this is quite doable.

The second defense is more complex: changing the way the Keychain API works without breaking every app out there. Not knowing how this is implemented, it might take a lot of testing to verify a fix without breaking apps.

The last thing they can do is to build a verified system tool that checks the existing keychain for incorrect ACL usage. You can't hide the hack from the system. This way Apple could fix the ACLs to not allow incorrect usage and not give access where it doesn't belong. I think this is fairly easy to do since it will break very little.

This is why building security is hard no matter who you are and everyone gets it wrong sometimes. At least Apple has the ability to readily (except for point 2) repair the problem, unlike Samsung having to have 800 million phones somehow patched by third parties to fix the keyboard hack.

0x0 19 hours ago 2 replies      
Anyone have any more information about (or even a source for) "Google's Chromium security team was more responsive and removed Keychain integration for Chrome noting that it could likely not be solved at the application level"?

Is this going to happen in an upcoming stable release? What is it being replaced with?

w8rbt 18 hours ago 3 replies      
The fundamental design flaw of all of these compromised password managers, keychains, etc. is that they keep state in a file. That causes all sorts of problems (syncing among devices, file corruption, unauthorized access, tampering, backups, etc.).

Edit - I seldom downvote others and the few times I do, I comment as to why I think the post was inappropriate. What is inappropriate about my post?

Few people stop and think about the burden of keeping state and the problems that introduces with password storage. Many even compound the problems by keeping state in the Cloud (to solve device syncing issues). It's worth discussing. There are other ways.

jpmoral 17 hours ago 0 replies      
Okay, so if a user only retrieves Keychain items manually (unlock keychain, view password, type/paste into app/website) and never allows apps to access it, is s/he safe?
jusio 18 hours ago 1 reply      
Oh God. After reading the paper I wouldn't expect a fix from Apple anytime soon :(
gchp 19 hours ago 1 reply      
Well, shit. Finally I feel justified for never (read: rarely) using the "Save password" feature in my web browser.

Does anyone know if Apple have done anything towards resolving this in the 6 month window they requested? Slightly worrying now that this has been published without a fix from Apple. I don't download apps very often on my Mac, but I definitely won't now until I know this has been resolved. Annoying.

ikeboy 18 hours ago 1 reply      
I wonder if the new "Rootless" feature prevents this, and if it was developed because of this.
wahsd 18 hours ago 0 replies      
Bravo, Apple. Humongous security hole and you don't address it in six months?

I hope it's being readied for inclusion in 8.4. We all know how it bruises Apple's ego to have to patch stuff without acting like it's a feature enhancement.

glasz 14 hours ago 0 replies      
they've known for half a year and still, just 2 weeks back, cook is cooking things up about their stance on encryption and privacy [0]. you've gotta love the hypocrisy on every side of the discussion. it's so hilarious that it makes me wanna do harm to certain people.

[0] http://9to5mac.com/2015/06/02/tim-cook-privacy-encryption/

vbezhenar 16 hours ago 0 replies      
Don't run untrusted apps outside of virtual machines. Too bad the web taught us to trust code we shouldn't trust. NoScript should be integrated into every browser and enabled by default. Sandboxing was and will be broken.
josteink 17 hours ago 3 replies      
Once again goes to show that Apple is mostly interested in the security of its iStore, platform lock down and DRM.

I'm not exactly shocked.

Just for kicks... Does anyone remember the I'm a PC ads, where Macs were magically "secure", couldn't get viruses or hacked or anything? Turns out, with market share they can! Just like Windows. Strange thing, eh?

Introduction to Neural Machine Translation with GPUs (Part 2) nvda.ly
26 points by bsprings  9 hours ago   2 comments top 2
cyorir 4 hours ago 0 replies      
A correction, I think, for part one. It is mentioned that neural network based translation is "recently proposed," but attempts to use neural networks in machine translation date back to at least the 90's.

In high school I chose CS as the category for my IB paper, and chose to write about machine translation. Neural network methods from the 90's were one of the things I looked at, and I'm a bit surprised to see neural network translators making a small comeback in the past couple of years.

bsprings 9 hours ago 0 replies      
This is part two in an in-depth series on Neural Machine Translation by Kyunghyun Cho, a leading expert on machine translation (Postdoc at U. Montreal, joining NYU faculty in fall).
YubiKey Making secure login easy yubico.com
57 points by hernantz  8 hours ago   39 comments top 13
2bluesc 6 hours ago 1 reply      
I use the Yubikey Neo as a smartcard + gpg for ssh private key logins[1], U2F with Google[2] accounts, and their OTP for things like LastPass[3].

I wrote some patches for KeepassX to use the Yubikey to derive the encryption key (completely offline)[4] but unfortunately the maintainer has zero interest in merging them.

[1] https://www.yubico.com/2012/12/yubikey-neo-openpgp/

[2] http://googleonlinesecurity.blogspot.com/2014/10/strengtheni...

[3] https://www.yubico.com/products/services-software/personaliz...

[4] https://github.com/keepassx/keepassx/pull/52 and https://news.ycombinator.com/item?id=7801131

kriro 39 minutes ago 0 replies      
I own a YubiKey Neo. My plan was to use it with KeePass/OATH HOTP (I used it with master password only on my three main devices). Turns out the OtpKeyProv plugin won't work on the OSX version I used before (MyPass Companion, switched to MacPass since because well it's on github). So for now I'm using the non-native Windows version with Mono.

Alas, syncing between different machines isn't easy (the counter gets out of sync) and I'm not all that comfortable with keeping the database in my ownCloud.

If anyone has a good suggestion for a cross-platform (Xubuntu, OSX, Android), syncable and FLOSS OATH HOTP password storage solution that doesn't rely on 3rd party cloud storage, I'm all ears. Not exactly a security expert, but I feel that's the setup I want :) I could fall back to challenge/response and that would fix some issues but be less secure.

[The Yubikey itself is pretty cool though]

artursapek 3 hours ago 0 replies      
I use a Yubikey for my Google accounts. They did a great job integrating it as a multi-factor auth option. It's a lot easier than punching in numbers from an SMS/Google Authenticator.

My Yubikey feels like a natural member of my key ring! I love it.

dguido 2 hours ago 1 reply      
FYI anyone can integrate yubikey u2f logins on their website. It's easy, try it out:


codewritinfool 4 hours ago 3 replies      
I bought a YubiKey so I could use it on my laptop with LastPass. Works fine. One day I grabbed my iPad and opened the LastPass app and it hit me... how am I going to authenticate with a YubiKey on an iPad? It took my password and then just worked.

I guess I misunderstood. I thought that once I enabled two-factor auth for LastPass, it'd require that no matter what. Nope, just open the iPad app and no two-factor required.

goblin89 5 hours ago 1 reply      
Compared to Google Authenticator app, YubiKey (a) makes hardware-based OTPs as opposed to time-based OTPs (does that offer stronger security?) and (b) can be used as smart card in GnuPG solutions.

It being a separate piece of plastic might arguably be another advantage, if we assume that most people are more likely to lose their phone than their keyring.

It's interesting: apparently[0], YubiKey is Google's initiative and the company itself uses YubiKeys internally.

[0] http://www.forbes.com/sites/amadoudiallo/2013/11/30/google-w...
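[Editorial note: the OTP flavors compared in this thread build on the same HMAC construction. HOTP (RFC 4226) feeds it an event counter, which is what causes the out-of-sync issues mentioned elsewhere in the thread; TOTP (RFC 6238, what Google Authenticator implements) derives the counter from the clock. A minimal sketch:]

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, at=None, step: int = 30) -> str:
    """RFC 6238 TOTP: HOTP with the counter derived from Unix time."""
    t = time.time() if at is None else at
    return hotp(secret, int(t // step))

# RFC 4226 test vector: counter 0 over this ASCII secret gives 755224.
print(hotp(b"12345678901234567890", 0))  # -> 755224
```

With HOTP, both sides must keep their counters aligned (hence resync windows); with TOTP they only need roughly synchronized clocks.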

lwf 6 hours ago 0 replies      
See also: https://developers.yubico.com/PGP/ -- OpenPGP/GnuPG support https://developers.yubico.com/PIV/ -- PKCS certificates
falsedan 1 hour ago 0 replies      
My biggest issue with YubiKey is I have to be mindful when picking up my laptop or else I eidlioustrioutnasdillkaoei all over the place.
homakov 5 hours ago 4 replies      
Thanks but no, Google Authenticator can be installed on any mobile device. Why bother with some "keys"?
salibhai 3 hours ago 3 replies      
What happens if you lose or break this thing and you have it configured on lastpass or google login?
newman314 4 hours ago 2 replies      
I don't see a good way to use this with an iOS device and 1Password...
cmbaus 7 hours ago 3 replies      
This looks interesting, but I don't totally understand how it works. How is the key changed every time on the server? It looks like it requires server side support.
api 2 hours ago 0 replies      
Just got some of these to secure ssh login to our infrastructure. Work great but be prepared for a bit of a hassle especially if you've never used anything like a smart card before. Finding simple answers to how to use as an rsa smart card device for ssh took a few hours and getting it into the right mode took some obscure commands.
Algebraic knot theory for kids: equations bham.ac.uk
35 points by danghica  11 hours ago   1 comment top
Hackpad released as opensource, then nothing github.com
103 points by skimmas  7 hours ago   19 comments top 5
tylercubell 3 hours ago 1 reply      
What warrants the outrage here? An announcement doesn't create a binding obligation with a rigid timetable. The sense of entitlement is unreal.
rattray 3 hours ago 2 replies      
I think Dropbox is the entity we should be disappointed in here. They clearly want to be good citizens of the community, but this is a sign of "evil megacorpdom" that rather contrasts with "employs Guido Van Rossum to work on Python and sponsors Pyston".

I hope we're able to demonstrate to their M&A team that moves like this hurt their reputation and make it harder for them to hire & acquire.

shiven 4 hours ago 3 replies      
Moral of the story? Use GPL.

If only Etherpad had done so in the first place.

bhaumik 2 hours ago 0 replies      
They're probably just waiting until Dropbox Notes (which looks like a clone of HackHands now) is live.


Sharma 4 hours ago 0 replies      
It is called "Hackpad". You might need to hack it to see it.
Fully Homomorphic Encryption Without Bootstrapping [pdf] iacr.org
53 points by kushti  19 hours ago   7 comments top 4
sweis 12 hours ago 1 reply      
This Yagisawa paper is of poor quality and lacks adequate proofs. It unfortunately uses the exact same title as a 2011 paper by Brakerski, Gentry, and Vaikuntanathan: https://eprint.iacr.org/2011/277.pdf

The BGV paper is the real deal.

pbsd 11 hours ago 0 replies      
Broken a few weeks later: https://eprint.iacr.org/2015/519
themeek 12 hours ago 1 reply      
The paper suggests that it would take on the order of 2^27 bit operations for encryption/decryption for securely parameterized instances of this scheme, with small ciphertexts and keys and a simple multiplication and addition formula. If true, this would be an incredible leap - a breakthrough in efficiency. And efficient FHE would be a game changer for the Internet and the world.

Evaluating the claim thus requires skepticism and care. The quality of the paper is suspect as are its proofs. But, as is the case with science, heuristics like this count for little.

They do seem to have selected a hard worst-case problem to base the system on. Obviously this means very little if secure keys and settings in this complexity space cannot be found in practice - or if the details of the cryptosystem, for one of many many reasons, lead to its easy compromise.

Looking forward to a proper peer review of the scheme.

Edit: Looks like it's already broken!
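[Editorial note: for readers wondering what "homomorphic" means in practice, even textbook RSA is multiplicatively homomorphic: multiplying two ciphertexts yields a ciphertext of the product of the plaintexts. Fully homomorphic schemes like BGV support both addition and multiplication at arbitrary depth. A toy illustration with tiny, completely insecure parameters:]

```python
# Toy demonstration of a homomorphic property, NOT secure encryption:
# textbook RSA with tiny parameters is multiplicatively homomorphic.
p, q = 61, 53
n = p * q                           # 3233
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (2753)

enc = lambda m: pow(m, e, n)
dec = lambda c: pow(c, d, n)

m1, m2 = 7, 6
c1, c2 = enc(m1), enc(m2)

# Multiply the ciphertexts without ever decrypting them...
c_prod = (c1 * c2) % n
# ...and the result decrypts to the product of the plaintexts.
print(dec(c_prod))  # -> 42
```

The hard part that FHE papers wrestle with is supporting addition as well, and keeping ciphertext noise under control across many operations (which is what "without bootstrapping" refers to).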


noahtkoch 11 hours ago 0 replies      
Totally clicked on it because I wanted to know what "Homophobic Encryption" was.
Unicorns stratechery.com
50 points by Rifu  15 hours ago   3 comments top 3
robbfitzsimmons 7 hours ago 0 replies      
Everybody who reads HN should pay Ben Thompson $10/mo for his Daily Update (this is the once-a-week public version). The man's writing has unequivocally had a major impact on my ability to think about technology, and is the only email I read before coffee in the morning.
madsravn 11 hours ago 0 replies      
This was a tough read. I find it hard to read through pieces that just throw quotes, graphs and everything else into a big mess. Seems like the consistency just takes a dive.
ableal 10 hours ago 0 replies      
> It turns out winner-take-all doesn't apply just to [...]

I'd like to see someone knowledgeable look at Vilfredo Pareto's 80-20 ratio and do a modern update.

Did anyone do it sort of recently?

Manhattan Beach, targeting AirBnB, officially bans short term rentals easyreadernews.com
3 points by remarkEon  2 hours ago   discuss
The construction of the Statue of Liberty google.com
39 points by ramisama  16 hours ago   5 comments top 4
Einstalbert 8 hours ago 0 replies      
I was born a hundred years after this was gifted to America. When I was young, about the same time my elementary school class could have been covering the importance of France as it related to American Independence, pop culture had decided that making fun of French people was cool. I grew up instilled with the idea that they were lazy, smelly, 'weird' people who never helped in any major wars. My parents said nothing about them, so it was entirely what I consumed as a watcher of television.

That kind of disturbing falsehood wouldn't be changed in me until I was much older, post-9/11 freedom-fries even.

So when did that all change? Did the Greatest Generation come back from the war and stop thinking about its former allies? I always looked up to the Statue of Liberty, and I was surprised to hear it was a gift from France (certainly not something I heard the first time I saw it).

Animats 1 hour ago 0 replies      
"Google Culture"? Nice, but will it last? Remember the book and newspaper scanning projects?
wodenokoto 3 hours ago 0 replies      
How did US-French relations go from "We're donating this huge statue" to we-can't-even-call-it-french-fries?
bhartzer 10 hours ago 1 reply      
I was just on Liberty Island and Ellis Island on Saturday. Everyone should have a chance to go there in person.
       cached 18 June 2015 07:02:04 GMT