One from Vincent Vanhoucke: "This is the most fun we've had in the office in a while. We've even made some of those 'Inceptionistic' art pieces into giant posters. Beyond the eye candy, there is actually something deeply interesting in this line of work: neural networks have a bad reputation for being strange black boxes that are opaque to inspection. I have never understood those charges: any other model (GMM, SVM, Random Forests) of any sufficient complexity for a real task is completely opaque for very fundamental reasons: their non-linear structure makes it hard to project the function they represent back into their input space and make sense of it. Not so with backprop, as this blog post shows eloquently: you can query the model and ask what it believes it is seeing or 'wants' to see simply by following gradients. This 'guided hallucination' technique is very powerful and the gorgeous visualizations it generates are very evocative of what's really going on in the network."
In the same way that a normal fractal is a recursive application of some drawing function, this is a recursive application of a "recognition -> generation" drawing function built on top of the CNN.
So I believe that, given a random noise image, these networks don't generate the crazy trippy fractal patterns directly. Instead, that happens by feeding the generated image back to the network over and over again (with e.g. zooming in between).
Think of it a bit like a Rorschach test. But instead of ink blots, we'd use random noise and an artificial neural network. And instead of switching to the next Rorschach card after someone thinks they see a pattern, you continuously move the ink blot around until it looks more and more like the image the person thinks they see.
But because we're dealing with ink, and we're just randomly scattering it around, you'd start to see more and more of your original guess, or other recognized patterns, throughout the different parts of the scattered ink. Repeat this over and over again and you have these amazing fractals!
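In code, that feedback loop might be sketched like the following (a guess at the mechanism, not the actual Google code; `model.gradient` is a hypothetical API standing in for the network's backprop step):

    import numpy as np
    from scipy.ndimage import zoom

    def dream(model, layer, img, steps=10, lr=1.5):
        # One "recognition -> generation" pass: gradient ascent on the
        # chosen layer's activations, so the image drifts toward whatever
        # the network already thinks it sees. `model.gradient` is assumed
        # to return d(activation)/d(pixels).
        for _ in range(steps):
            g = model.gradient(img, layer)
            img = img + lr * g / (np.abs(g).mean() + 1e-8)
        return img

    def deepdream_fractal(model, layer, img, octaves=100, scale=1.05):
        # The fractal look comes from this outer loop: amplify, zoom in
        # slightly, feed the result back in, repeat.
        for _ in range(octaves):
            img = dream(model, layer, img)
            h, w = img.shape[:2]
            img = zoom(img, (scale, scale, 1))   # zoom in a little
            dh = (img.shape[0] - h) // 2
            dw = (img.shape[1] - w) // 2
            img = img[dh:dh + h, dw:dw + w]      # crop back to size
        return img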
Ibis: http://3.bp.blogspot.com/-4Uj3hPFupok/VYIT6s_c9OI/AAAAAAAAAl...
Seurat: http://4.bp.blogspot.com/-PK_bEYY91cw/VYIVBYw63uI/AAAAAAAAAl...
Clouds: http://4.bp.blogspot.com/-FPDgxlc-WPU/VYIV1bK50HI/AAAAAAAAAl...
Buildings: http://1.bp.blogspot.com/-XZ0i0zXOhQk/VYIXdyIL9kI/AAAAAAAAAm...
I'd love to experiment with this and video. I predict a nerdy music video soon, and a pop video appropriation soon after.
Reminds me very heavily of The Starry Night https://www.google.com/culturalinstitute/asset-viewer/the-st...
I never had much luck with generative networks. I did some work putting RBMs on a GPU partly because I'd seen Hinton talk showing starting with a low level description and feeding it forwards, but always ended up with highly unstable networks myself.
I can't help but think of people who report seeing faces in their toast. Humans are biased towards seeing faces in randomness. A neural network trained on millions of puppy pictures will see dogs in clouds.
So, tell the machine to think about bananas, and it will conjure up a mental image of bananas. Tell it to imagine a fish-dog and it'll do its best. What happens if/when we have enough storage to supply it a 24/7 video feed (aka eyes), give a robot some navigational logic (or strap it to someone's head), and give it the ability to ask questions, say, below some confidence threshold (and us the ability to supply it answers)? What would this represent? What would come out on the other side? A fraction of a human being? Or perhaps just an artificial representation of "the human experience".
...what if we fed it books?
Which makes me wonder, are these sophisticated neural nets mentally ill, and what would a course of therapy for them be like?
In this paper, we will focus on an efficient deep neural network architecture for computer vision, codenamed Inception, which derives its name from the "Network in Network" paper by Lin et al. in conjunction with the famous "we need to go deeper" internet meme.
(Or if you want to put them full-screen on infinite loop in a darkened room:
http://www.infinitelooper.com/?v=XNZIN7Jh3Sg&p=n
http://www.infinitelooper.com/?v=ogBPFG6qGLM&p=n )
The code for the 1st is available in a Gist linked from its comments; the creator of the 2nd has a few other videos animating grid 'fantasies' of digit-recognition neural-nets.
It may have taken a while, but with all these individuals and organizations cooperating in an open space, we may finally advance yet again into another new era of innovation for the web.
I am really excited about this, much like others in these comments.
To the creators (Brendan Eich et al.) & supporters, well done and best of luck in this endeavor. It's already started on the right foot (asm.js was what led the way to this, I think) - let's hope they can keep it as cooperative and open as possible for the benefit of everyone!
I'm all for making the web faster/safer/better and all that. But I am worried about losing the web's "open by design" nature.
Much of what I've learned and am learning comes from me going to websites, opening the inspector and stepping through their code. It's educational. You learn things you may never read about in tutorials or books. And it's great because the author may have never intended for their code to be studied. But whether they like it or not, other people will learn from their code, and perhaps come up with [occasionally] better versions of it on their own.
This has helped web development evolve faster, and it's obvious how democratizing this "open-by-design" property is, and I think we should be concerned that it's being traded away for another (also essential) property.
Human beings cannot read asm.js code. And a bytecode format will be more or less the same. So, no matter how much faster and more flexible this format/standard is, it will still turn web apps into black boxes that no one can look into and learn from.
https://github.com/berkus/Juice/blob/master/intro.htm
ftp://ftp.cis.upenn.edu/pub/cis700/public_html/papers/Franz97b.pdf
Of course, there would still be UI differences required between the 3 platforms, but you would no longer need 3 separate development teams.
While this is welcome news, I am also torn. The possibilities are pretty amazing. Think seamless isomorphic apps in any language that can target WebAssembly and has a virtual dom implementation.
However, it finally seems like JS is getting some of its core problems solved and is getting usable. I wonder if it might have short term productivity loss as the churn ramps up to new levels with a million different choices of language/platform.
Either way, it will be an interesting time... and a time to keep up or risk being left behind.
For quite a while, I've been thinking about how, instead of hacks like asm.js, we should be pushing an actual "Web IR" designed from the ground up as an IR language. Something similar to PNaCl (a subset of LLVM IR), except divorced from the Chrome sandbox, really.
For instance, Flash could target this format in place of the Flash player, making swf files future-proof, since they'd be backed by standard web tech.
The only constraint, obviously, is that the bytecode only has access to web APIs and can't talk directly to the OS as with classic browser plugin architectures.
e.g. if I allocate a callback function, and hand it to setTimeout, I have no way to know when to collect it.
Sure, you can encode rules about some of the common functions; but as soon as you get to e.g. attaching an 'onreadystatechange' to an XHR: you can't follow all the different code paths.
Every time a proposal comes up to fix this:
- GC callbacks
- Weak-valued maps
- Proxy with a collection trap
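For contrast, here is roughly what a "GC callback" buys you in a language that exposes one. This is Python's weakref.finalize, shown purely as an analogy for the kind of hook those JS proposals were asking for:

    import weakref

    class Callback:
        def __init__(self, fn):
            self.fn = fn

    cb = Callback(lambda: print("tick"))

    # Run cleanup when `cb` is collected -- exactly the hook JS lacks,
    # which is why a compiler targeting JS can't know when a callback
    # handed to setTimeout (or stashed on an XHR) is safe to free.
    weakref.finalize(cb, lambda: print("cb was collected"))

    del cb  # in CPython, refcounting triggers the finalizer right here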
They are doing great work. The client's operating system matters little now, but it will not matter at all soon.
Politically it appears to be a fantastic collaboration.
Technically it looks like they have really thought through this hard -- if you look through the post-MVP plans (https://github.com/WebAssembly/design/blob/master/FutureFeat...) there are a lot of exciting ideas there. But it's not just pie-in-the-sky speculation, the amount of detail makes it clear that they have some really top compiler people who are really rigorously exploring the boundaries of what can be accomplished inside the web platform (SIMD, threading, GC integration, tail calls, multiprocess support, etc).
Both well tested, well executed, great tooling, supported on many platforms, compilation targets of many existing languages.
Serious question.. is it licensing ?
One question though - I found a proposal somewhere on a Mozilla-run wiki about a web API for registering script transpiling/interpreter engines. I've lost the web address, but if anyone knows any more about this I'd love to see it rekindled.
This sounds absurd. I can't even get through getting Clang, LLVM, and Emscripten built from source as it is, it's such a house-of-cards with configuration and dependency settings. Have any of you tried building Chromium from scratch? I have, on three separate occasions, as I'd like to try to contribute to WebVR. End result: fewer gigs of free space on my hard drive and no Chromium from source.
Part of that is my impatience: I'm used to C# and Java, where dependencies are always dynamically linked, the namespacing keeps everything from colliding, and the semantics are very easy to follow. But even Node's braindead NPM dependency manager would be better than the hoops they make you jump through to build open-source C++ projects. I mean, I just don't get how someone could have at any point said "yes, this is a good path, we should continue with this" for all these custom build systems in the wild on these projects.
I could be way off. I'm only just reading the FAQ now and I'm not entirely sure I understand what has actually been made versus what has been planned. There seems to be a lot of talk about supporting other languages than C++, but that's what they said about ASM.js, and where did that go? Is anyone using ASM.js in production who is not Archive.org and their arcade emulator?
I don't know... I really, really want to like the web browser as a platform. It has its flaws, but it's the least painful solution of all of the completely-cross-platform options. But it's hard. Getting harder. Hard enough I'm starting to wonder if it'd be smarter to develop better deployment strategies for an existing, better programming language than to try to develop better programming languages for the browser's existing, better deployment strategy.
This telephone game being played by translator tools and configuration management tools and polyfills and frameworks and... the list goes on! This thing we consider "modern" web development is getting way out of hand. JS's strength used to be that all you needed was a text editor. Everyone--both users and developers--can already use it and run it.
If it's just one tool, I'll get over it. But stringing these rickety, half-implemented tools together into a chain of codependent systems is unacceptable. It just feels like they're foisting their inability to finish and properly deploy their work on us. Vagrant recipes are nice, but they should be a convenience, not a necessity.
Sorry. Good for them. Just finish something already.
For our use case, what I like about this is that we can continue to use emscripten and the technology will come to us, rather than requiring app developers to invest in yet another technology (our switchover from NaCl to emscripten was very time consuming!)
We will probably need a package manager after that (like apt or npm).
A use case could be with ImageMagick, OpenCV, OpenSceneGraph or qemu inside the browser. All of them are huge and useful projects with many common dependencies.
Does anyone know when all this started? I ask because only 83 days ago Brendan was on here telling us pretty emphatically that this was a bad idea and would never happen.
And no, webworkers don't cut it, because they don't support structural sharing of immutable data structures in an efficient and flexible way.
Ironic that Eich is the one to pull the trigger on JS.
Even worse, it's like Flash, but where the Flash 'plugin' has been written from scratch by each web browser, giving us endless possibilities for incompatibilities which are a nightmare to fix.
Here's an example of what it looks like: http://pastebin.com/raw.php?i=yEB4mrty
As someone who usually works with C, Scala, and Java -- I'm currently working on a small app built on ES6/7 (via Babel), npm, jspm, SystemJS, Aurelia, gulp, etc. It's been a great experience so far.
It handles all the transpilation work for you (at runtime for development, or during a manual build/bundling for production) using either Babel, Traceur or TypeScript, and allows you to seamlessly use ES6 everywhere in your code and even load third-party code from GitHub and npm as ES6 modules.
EDIT: Some more info copied from another post:
SystemJS (jspm's module loader) has the following main advantages compared to competing module loaders:
- Able to load any type of module as any other type of module (global, CommonJS, AMD, ES6)
- Can handle transpilation and module loading at runtime without requiring a manual build step
However, jspm itself is primarily a package manager. Its main advantages over existing package management solutions include:
- Tight integration with the SystemJS module loader for ES6 usage
- Maintains a flat dependency hierarchy with deduplication
- Ability to override package.json configuration for any dependency
- Allows loading of packages from just about any source (local git repos, Github, NPM) as any module format
Overall happy with many of the improvements (e.g. standard syntax for modules and classes).
Now I have to wait for browsers to get off their little snowflake asses and update. Oh wait, then there are all those paranoids who use WinXP with IE8. Damn it, I'll be dead by the time this stuff is available universally.
And I don't think it's necessarily correlated to the rigor/prestige of the program. I had a discussion with a Stanford professor who is building a course that involves hands-on work with real-world data problems... he undertook this initiative after finding that some PhD students, while brilliant in their research and coursework, did not know where to begin with relatively easy data cleaning work. I don't know exactly what the disconnect was, but I'm guessing it wasn't because data cleaning is particularly difficult as a CS problem. But it does require the ability to "see the big picture"... not just how different code modules and components can be designed to talk to each other, but the context and general who-gives-a-shit with regard to a given data/computational problem.
So yeah, thinking about small projects to code for is a great way to make things "click". Can't wait to see what examples Zed comes up with.
One of my favorites was having people build a URL shortener using Flask and SQLite. The general requirements were something like:
1. Serve a web page in Flask with a form.
2. Accept the URL as a POST from the form in the Flask app.
3. Hash the URL.
4. Store the hash and URL in the database.
5. Return the new hash URL (i.e., myshortener.com/?hash=12345) as a new page or with ajax.
6. Accept GET requests with hashes.
7. Look up hashes in the database and 301 to the URL, or 404.
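A minimal sketch of those requirements (the truncated-hash scheme and all names here are my own choices, not part of the assignment):

    import sqlite3
    from hashlib import sha256
    from flask import Flask, request, redirect, abort

    app = Flask(__name__)
    db = sqlite3.connect("urls.db", check_same_thread=False)
    db.execute("CREATE TABLE IF NOT EXISTS urls (hash TEXT PRIMARY KEY, url TEXT)")

    FORM = '<form method="post"><input name="url"><button>Shorten</button></form>'

    @app.route("/", methods=["GET", "POST"])
    def index():
        if request.method == "POST":                  # 2. accept URL as POST
            url = request.form["url"]
            h = sha256(url.encode()).hexdigest()[:8]  # 3. hash the URL
            db.execute("INSERT OR IGNORE INTO urls VALUES (?, ?)", (h, url))
            db.commit()
            return f"Short link: /{h}"                # 5. return the hash URL
        return FORM                                   # 1. serve a form

    @app.route("/<h>")                                # 6. accept GET with hash
    def follow(h):
        row = db.execute("SELECT url FROM urls WHERE hash = ?", (h,)).fetchone()
        if row is None:
            abort(404)                                # 7. ...or 404
        return redirect(row[0], code=301)             # 7. 301 to the URL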
Personally, though, I find no excitement in building a log-searching tool. Something a bit more magical (I think something involving web scraping or Markov chains would be interesting) would probably entice people to move forward much more.
Not that this tool wouldn't be useful, just that when I look at the end result, it doesn't give me the urge to build it.
At work I write a lot of PowerShell scripts, and I think the reason I took to it so quickly is because the need was there. It was never ambiguous what I was going to build: I knew what I wanted to make easier, what tool I wanted to use to accomplish that goal, and it set me to learning quite quickly.
It uses Sinatra & DataMapper to take the developer through some simpler projects (URL Shortener, Microblog, etc...)
Building up from "hello world" to something with interesting learning-experience challenges involves a lot of boilerplate work. It's almost guaranteed that you won't learn the core skills surrounding managing and limiting complexity in medium and large projects, or even why these things are valuable.
It's not much work to get a build environment up for a real-world program the student might use, then have them do projects to modify it (games are great for this). This can be really rewarding, it exposes the learner to realistic, large-scale code, and it serves to get them a gut feeling for "none of this is magic, it's just a bunch of code".
Based on the MINIX 3 microkernel, we have constructed a system that to the user looks a great deal like NetBSD. It uses pkgsrc, NetBSD headers and libraries, and passes over 80% of the KYUA tests. However, inside, the system is completely different. At the bottom is a small (about 13,000 lines of code) microkernel that handles interrupts, message passing, low-level scheduling, and hardware-related details. Nearly all of the actual operating system, including memory management, the file system(s), paging, and all the device drivers, runs as user-mode processes protected by the MMU. As a consequence, failures or security issues in one component cannot spread to other ones. In some cases a failed component can be replaced automatically and on the fly, while the system is running, and without user processes noticing it. The talk will discuss the history, goals, technology, and status of the project.
Research at the Vrije Universiteit has resulted in a reimplementation of NetBSD using a microkernel instead of the traditional monolithic kernel. To the user, the system looks a great deal like NetBSD (it passes over 80% of the KYUA tests). However, inside, the system is completely different. At the bottom is a small (about 13,000 lines of code) microkernel that handles interrupts, message passing, low-level scheduling, and hardware related details. Nearly all of the actual operating system, including memory management, the file system(s), paging, and all the device drivers run as user-mode processes protected by the MMU. As a consequence, failures or security issues in one component cannot spread to other ones. In some cases a failed component can be replaced automatically and on the fly, while the system is running.
The latest work has been adding live update, making it possible to upgrade to a new version of the operating system WITHOUT a reboot and without running processes even noticing. No other operating system can do this.
The system is built on MINIX 3, a derivative of the original MINIX system, which was intended for education. However, after the original author, Andrew Tanenbaum, received a 2 million euro grant from the Royal Netherlands Academy of Arts and Sciences and a 2.5 million euro grant from the European Research Council, the focus changed to building a highly reliable, secure, fault-tolerant operating system, with an emphasis on embedded systems. The code is open source and can be downloaded from www.minix3.org. It runs on the x86 and ARM Cortex-A8 (e.g., BeagleBones). Since 2007, the website has been visited over 3 million times and the bootable image file has been downloaded over 600,000 times. The talk will discuss the history, goals, technology, and status of the project.
All you really need in a practical microkernel is process management, memory management, timer management, and message passing. (It's possible to have even less in the kernel; L4 moved the copying of messages out of the kernel. Then you have to have shared memory between processes to pass messages, which means the kernel is safe but processes aren't.)
The amusing thing is that Linux, after several decades, now has support for all that. But it also has all the legacy stuff which doesn't use those features. That's why the Linux kernel is insanely huge. The big advantage of a microkernel is that, if you do it right, you don't change it much, if at all. It can even be in ROM. That's quite common with QNX embedded systems.
(If QNX, the company, weren't such a pain... They went from closed source to partially open source (not free, but you could look at some code) to closed source to open source (you could look at the kernel) to closed source. Most of the developers got fed up and quit using it. It's still used; Boston Dynamics' robots use it. If you need hard real time and the problem is too big for something like VxWorks, QNX is still the way to go.)
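The "all you really need is message passing" point is easy to see in a toy sketch. This is only an analogy in Python (a real microkernel does this with IPC primitives and MMU protection, not queues), but it shows the shape: each OS service is an ordinary process, and the kernel's job reduces to ferrying messages:

    from multiprocessing import Process, Queue

    def fs_server(inbox, outbox):
        # A "file system server": just a process that answers messages.
        while True:
            op, arg = inbox.get()
            if op == "read":
                outbox.put(("ok", "contents of " + arg))

    if __name__ == "__main__":
        to_fs, from_fs = Queue(), Queue()
        Process(target=fs_server, args=(to_fs, from_fs), daemon=True).start()

        # A "user process" asks the server for a file. If the server
        # crashes, only it dies; the requester and "kernel" survive,
        # and a supervisor could restart it on the fly.
        to_fs.put(("read", "/etc/motd"))
        print(from_fs.get())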
I will concede that in some instances a microkernel may outperform a monolithic kernel in stability or performance or both. But I am not the least bit excited about any progress made in microkernels; I feel it can only result in much more closed systems that are easier to implement in ways that make them harder to modify. This is why I wish for Hurd to continue to fail.
Neat idea but seems nowhere near done.
And as others have said, this was handled nicely by QNX way more than 15 years ago; I was running multiple users on an 80286 around 1986 or so. Really neat system.
virtio seems to be working.
Sorry, this really has nothing to do with the video; it's just that the tangential thought of Linux on Mach made me wonder, and I was pleasantly surprised.
God what a quote. I'm stealing this and using it everywhere I can. It basically sums up my entire attitude toward humanity.
One facet which always fascinated me was the dispersion of trading ideas, including the code behind algorithms and any sort of research. Successful ideas are constantly being updated, adapted, and oftentimes stolen. Traders are generally hired for the trading strategies they have been exposed to and the potential value within. There are very few individuals who create new and successful ideas. The rest are just copying what they have been exposed to and hoping it sticks when they throw it at the wall, which eventually runs each successful idea into the ground as the value being captured quickly disappears.
Either way, it was a great school for learning how to program and use statistics effectively.
As my interviewer at Ronin told me after I failed the interview (we both knew it): "This is all a game, you just need to learn the rules"
Basically, is there a slice of the pie in trading much faster than humans, but much slower than HFT?
It's an academic exercise, but one I've been toying with.
I don't pursue it because I came to regard the practice as unethical.
In the spirit of the article's talk about financialization, I wonder if there's yet a way to buy the author virtual beer options?
"Although neurosurgery is no longer what it once was, the neurosurgeon's loss has been the patient's gain."
so nice to read this
many of the entrenched remain so through superstition, at the cost of progress,
but to read someone speak highly of a practice that rendered a section of their skill set obsolete is heartening
i have family who work in trauma and they condescendingly balk at my excitement over new medical technology:
personally, i look forward to bones:
They used superglue to plug the fistula at Barrow's in Phoenix, Arizona; they went up through the leg. http://www.thebarrow.org/index.htm Amazingly, he was walking again within 4 days and home within 7 days. I have a friend who had the same surgery 5 years before; she was there for months recovering from the clip method.
My son was ok for 4 1/2 years, but for unknown reasons he had another brain issue in December that caused temporary paralysis for a few hours. They went spelunking again and found nothing. They did decide that the 5-year-old repair had healed perfectly. Dr. Cameron McDougall, the surgeon, was just beaming with happiness when he told us that.
My kid is doing well, he is pretty smart. He attends the hardest charter school in Arizona, BASIS. He is 11 years old, and I am his angel investor (total investment about $500) in his website http://legimon.com which is his version of Pokemon. Total revenue so far is $282.00 and about $40 in profit for some t-shirts. He was quite amazed when I told him how long he would have to scrub toilets at McDonalds to make that much money. We then had an hour-long discussion about trademark law; he knew more than I did. He has been doing all the WordPress and Photoshop work lately, and he works about 3 hours a day on it, every day for the last three years. He has developed 400 characters and tells me that he is not stopping until he passes Pokemon's 700+. He has planned out the app, the Xbox game, the VR headset upgrade for gokart racing, and the theme park. I couldn't be more proud of him.
I keep a photo of him in the hospital taken right after they used a cordless drill to put a hole in his skull. He looks like a corpse: his head is half shaved, his spine is twisted sideways, and the drain from his skull is a bright orange. I use it when I am having a bad day at work, as a reminder of what a bad day really is.
Ask me anything.
Two weeks later they sent her home! It was amazing. Doctors told her that "Fifty percent of people who have one never make it to the hospital, fifty percent of those never make it home."
She's had some personality changes (to be expected with such a traumatic brain injury), but nothing horrible, and I'm amazed at modern medical science.
One of the hardest parts (emotionally, for her) was me helping her shave her head after she got home; they'd had to do part of it anyway for the surgery, and two weeks in the hospital hadn't done what was left any good as it was matted and beyond fixing. Shave a bit, let her cry, shave a bit, let her cry... until it was all done.
Wow. That's a most un-expected way of fixing things in the brain!
This subject never ceases to interest me because both my parents have had aneurysms, my dad died of them (he had three over the course of several years), my mom recovered (she had one which got clipped exactly as described in the article), both had paralysis effects, and both were heavy smokers.
Incredible how reading that story affected me, worse than any movie I've ever watched.
How people do these jobs I cannot know. Yet somebody has to. Amazing.
My father was taken into hospital 4 hours ago after collapsing and turns out he's got a brain bleed. They stuck him in a CT straight away and are now doing a lumbar puncture to see if there is any blood in the spinal fluid. They don't know what has precisely happened but they suspect an aneurysm that went undetected. The mortality rate of this event is 50% in 30 days. He's had a long history of hypertension and a couple of surgeries to clean out arteries in his neck.
Not sure why I'm sitting here on HN but it was taking my mind off things. Fail :(
Anyway, moral of the story: don't smoke and don't eat piles of shit; it'll get you one day.
Edit: Ordered the book as well now like an idiot. Scary tale when you're close to it but I find comfort in knowing things rather than ignorance.
Thanks for posting this.
Inside Google, their technology is awesome but I think they need to get very solid in customer support to compete. You can quickly get tech help on the phone from Microsoft and Amazon, and Google needs to match that. That said, I have never signed up for their premium support so I might not be totally fair in this criticism.
EDIT: Maybe a better question is, at what point do you think about eschewing the traditional Cisco/Juniper gear and look towards these techniques?
Another issue here is that by looking at the past reports you see how quickly one company is the favorite and soon becomes the ugly stepchild. The columns with stars also change to what sound like very vague and lax requirements compared to the year before.
I didn't see any explanation there as to why. For instance, they took out the "requires warrant" column. I wonder if companies are contributing to the EFF and so the EFF feels pressure to make these companies look good in the face of this new Snowden era. For instance, isn't it great that Apple now has 5 stars as it's starting its big "we're private" push while Google is now very low compared to previous years? And how about Twitter? They used to be a poster child for good behavior as far as companies go.
- Condi Rice is on the Board of Directors - an avowed supporter of NSA warrantless wiretaps
- Users cannot control their keys such that it becomes impossible for them to hand over data to the Govt. even if they complied with the NSL or whatever other BS demand
And they get 5 stars for "Having our Backs" (!)
It gets really interesting when the query involves multiple tables and indices, and the query optimizer has a choice of strategies. You don't really get the benefits of a full SQL query engine until you ask it to do something hard.
It'd also be interesting to learn about how MySQL and Postgres differ in terms of how they process queries internally. I'm sure there'd be interesting tradeoffs all over the place.
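One easy way to watch a planner make those choices is SQLite's EXPLAIN QUERY PLAN (Postgres has EXPLAIN ANALYZE); a small sketch:

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
        CREATE INDEX idx_orders_user ON orders(user_id);
    """)

    # A join gives the optimizer a real choice: which table to scan,
    # which index to use. EXPLAIN QUERY PLAN prints the strategy chosen.
    for row in con.execute("""
        EXPLAIN QUERY PLAN
        SELECT u.name, SUM(o.total)
        FROM users u JOIN orders o ON o.user_id = u.id
        GROUP BY u.id
    """):
        print(row)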
Note that there are two paths for instructions: one, from the L1 icache through the traditional decoders into the instruction queue, and another a post-decode cache directly into the instruction queue. There are numerous advantages to the cache, such as power saved by idling the decode logic, as well as bypassing the 16-byte fetch restriction (which has been a feature of the architecture since the Pentium Pro days).
The gist of the surprising behavior is that the processor cannot execute out of the uop cache if a given 32-byte (naturally aligned) section of code decodes to more than 3 lines of 6 uops each (with the catch being that a branch ends any given line). In that case it falls back to the traditional instruction fetch/decode. Depending on the alignment of branches, you may or may not run into this limitation on an otherwise identical sequence of instructions.
$100MM is 0.0759878% of AT&T's 2014 gross revenue, so less than 1/10th of 1%.
That's like earning a $100,000/yr salary, and then paying a $75.99 fine. It's basically less than your average speeding ticket.
I'm glad I still have my Verizon unlimited data plan. I renewed my contract (the unlimited line is out of contract in August 2016) by using the transfer upgrade loophole last year. But they are the only carrier that does not throttle their LTE network at all, and they also allow you to officially pay for unlimited tethering, something no other carrier has ever offered. On top of that, the open access rules attached to the C block of the 700MHz spectrum they use let me pop my SIM card into a dedicated LTE router, tablet, hotspot, etc., even devices that Verizon stores refuse to activate for you, like a T-Mobile-bought iPhone or any device not sold as "for Verizon". It's unlocked and works on the network: you can pop your SIM card into it and it will just work.
I try to explain that data is more like a pipe, and at certain times they can't get all the data through at once. If this were about throttling for their network, they would just do it during "peak" times and not 24 hours a day. I still feel this is a move to charge for the amount of data and not speed of access.
Does anyone know how likely this fine is to stick? It sounds like a significant fine to me, but I wonder if these kind of fines are often appealed down.
I mean, what, are they going to arrest executives? Give me a break. There's no recourse either way.
On the other hand, in my country it's OK for actors to lie about being doctors in commercials :/ ("I'm a doctor and X is best for you")
A reasonable person would understand that there are bandwidth limits, both technological and environmental. A reasonable person would expect that the level of service they signed up for would continue or get better over time.
I see two issues.
One is that after a certain amount of data is used they limit bandwidth. If you limit something it is hard to call it unlimited.
The other issue is that early on throttling was not in place. They specifically added throttling to entice users to switch to more lucrative data plans.
This strikes me as a reasonable fine. Well done FCC.
Also, the people in charge of approving this should be held accountable.
Wind Mobile also advertises unlimited plans yet throttles starting at a mere 3GB...
Granted, their true rates are still better than their big telecom counterparts, but I still find this distasteful as a marketing tactic.
It's pretty sad when the TV viewing experience is better via torrents than Netflix. Comcast is doing some serious throttling.... For me, the Netflix stream is all pixelated, yet we can pull the entire hour-long HD content via torrent in ~5 minutes. Something is amiss.
I find this a lot more fair than selling unlimited that isn't. Or killing grandfathered accounts by capping them.
Well why the hell not?? If we were the wronged party, should we not benefit from the settlement directly?
It's like advertising unlimited miles on a rental car, then slowing it down to 5mph after 200 miles. Sure, the car still moves, but you can't practically use it for anything.
The fine is paid to somebody other than the victim.
It's like when Intel got fined for screwing AMD, and the money went to some EU institution: why not pay it to the victim?
That is why this is stupid, and no justice at all.
Which is too bad, because mine is a really great deal even treated as a 5GB/device plan.
"No artificial limiters added"
So sick and tired of these hidden middle-class taxes.
It seems like a very efficient organization: a group of smart people putting up some money and making investment decisions together.
Also super interesting to hire a psychologist! Founder breakups and founder dynamics are among the hardest things about early-stage startups (as YC has said before), so this seems an important step in trying to prevent those and lead to overall more successful companies and investments! I wonder how long before every investment firm will require cofounders to go to couples counseling like the Genius folks do.
I ran into a similar project and found these helpful for working with the unstructured data:
https://textblob.readthedocs.org/en/dev/
https://radimrehurek.com/gensim/
You shouldn't be generating the text in advance and then processing it. You should be dynamically generating the text in memory, so you basically only have to worry about the memory for one text file at a time.
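Concretely, that usually means a generator that yields one document at a time, so peak memory is one file rather than the whole corpus (the file layout here is assumed):

    import os

    def iter_documents(root):
        # Yield each document's text lazily instead of materializing
        # the whole corpus in memory.
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                if name.endswith(".txt"):
                    path = os.path.join(dirpath, name)
                    with open(path, encoding="utf-8") as f:
                        yield f.read()

    for doc in iter_documents("corpus/"):
        print(len(doc))  # stand-in for whatever NLP step you're running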
As for visualizations, R and ggplot2 may work (R can handle text and data munging, as well as sentiment analysis, etc.). It may be worth learning as a social scientist.
ggplot2 has a python port.
That said, you are probably using nltk, right? There are some tools in nltk.draw. There is probably also a users' mailing list for whatever package or tool you are using; consider asking this there.
With regards to your issue of scale, this might help: http://stackoverflow.com/questions/14262433/large-data-work-...
I had similar issues when doing research in computer science, and I feel a lot of researchers working with data have this headache of organizing, visualizing and scaling their infrastructure along with versioning data and coupling their data with code. Also adding more collaborators to this workflow would be very time consuming...
That said, if you are only processing 2 GB of text, you can often do that in memory on your laptop. This is especially true if you are doing NLP on individual sentences, or paragraphs.
There's also a nice evaluation paper:
Are you familiar with big-O / computational complexity? (I ask since you say your background is in the social sciences.)
A few GB of input data is generally easy to work with on a single machine, using Python and bash. If you need big intermediate data, you can brute-force it with distributed systems, hardware, C++, etc., but that can be time consuming, depending on the application.
Storage is super cheap, and you can get rid of the clutter on your laptop. I wouldn't recommend moving to a database yet, especially if you don't have any experience working with them before. S3 has great connector libraries and good integrations with things like Spark and Hadoop and other 'big data' analysis tools. I would start to go down that path and see which tools might be best for analyzing text files from S3!
1) Drivers providing their own cars is not a strong factor - pizza delivery employees also drive their own cars.
2) Uber "controls the tools that drivers use" by regulating the newness of the car.
3) Uber exercises extensive control over vetting and hiring drivers and requires extensive personal information from drivers.
4) Uber alone sets prices, and tipping is discouraged, so there is no mechanism for driver (as "contractor") to set prices.
5) Plaintiff driver only provided her time and car. "Plaintiff's work did not entail any 'managerial' skills that could affect profit or loss."
6) Drivers cannot subcontract (presumably negating Uber's position as a "lead generation" tool for contractors).
Sorry that these are out of order. Look on page 9 of the court documents for the full text.
1. This is an appeal from a decision by a hearing officer of the California Labor Commissioner. Most of the time such officers spend their days hearing things such as minimum wage claims. Hearings do not follow the strict rules of evidence and are literally recorded on the modern equivalent of what used to be a tape cassette instead of by a court reporter. Such hearings might run a few hours or, in a more complex case, possibly a full day as the normative max. The quality of the hearing officers themselves is highly variable: some are very good, others are much, much less than good in terms of legal and analytical strengths. In a worst case, you get nothing more than a pro-employee hack. The very purpose of the forum is to help protect the rights of employees and the bias is heavily tilted in that direction. That does not mean it is not an honest forum. It is. But anything that comes from the Labor Commissioner's office has to be taken with a large grain of salt when considering its potential value as precedent. Hearing officers tend to see themselves as those who have a duty to be diligent in protecting rights of employees. Whether what they decide will ever hold up in court is another question altogether.
2. Normally the rules are tilted against employers procedurally as well. When an employer appeals a Labor Commissioner ruling and loses, the employer gets stuck paying the attorneys' fees of the prevailing claimant on the appeal. This discourages many employers from going to superior court with an appeal because the risk of paying attorneys' fees often is too much when all that is at stake is some minimum wage claim. With a company like Uber, though, the attorney fee risk is trivial and all that counts is the precedential value of any final decision. It will therefore be motivated to push it to the limit.
3. And that is where the forum matters a lot. The binding effect of the current Labor Commissioner ruling in the court is nil. The same is true of any evidentiary findings. The case is simply heard de novo - that is, as if the prior proceedings did not even occur. Of course, a court may consider what the hearing officer concluded in a factual sense and how the officer reasoned in a legal sense. But the court can equally disregard all this. This means that the value of the current ruling will only be as good as its innate strength or weakness. If the reasoning and factual findings are compelling, this may well influence a court. Otherwise, it will have no effect whatever or at most a negligible one.
4. What all this means is that this ruling has basically symbolic importance only, representing what state regulators might want as an idealized outcome. Its potential to shape or influence what might ultimately happen in court is, in my view, basically negligible.
5. This doesn't mean that Uber doesn't have a huge battle on its hands, both here and elsewhere. It just means that this ruling sheds little or no light on how it will fare in that battle. You can't predict the outcome of a criminal trial by asking the prosecutor what he thinks. In the same way, you can't predict the outcome here by asking what the Labor Commissioner thinks. In effect, you are getting one side of the case only.
6. The contractor/employee distinction is highly nebulous but turns in the end on whether the purported contractor is actually bearing true entrepreneurial risk in being, supposedly, "in business." There are a number of factors here that do seem to support the idea of true entrepreneurial risk but that just means there are two sides to the argument, not that Uber has the better case.
7. In the end, this will be decided in superior court and then, likely, on appeal to the California courts of appeal beyond that. It will take years to determine. In the meantime, the Uber juggernaut will continue to roll on. So the real question will be: should we as a society welcome disruptive changes that upset our old models or should we use the old regulations to stymie them? Courts are not immune from such considerations and, as I see it, they will apply the legal standards in a way that takes the public policy strongly into account. It will be fascinating to see which way it goes.
Edit: it appears that the critical factor they considered was whether or not the driver could have operated their business independently of Uber. They said they could not. They also cited the fact that Uber controls the way payments are collected and other aspects of operations as critical to showing employment. http://www.scribd.com/doc/268946016/Uber-v-Berwick
Now, what society is really missing out on is an opportunity or reason to transition from employer-based benefits to government or society-based benefits. This ruling will postpone a public discussion on the role of employer-based insurance and benefits.
Beyond that there is a really interesting debate as to whether sharing economy jobs are an end-run around minimum wage laws, rendering such laws meaningless for certain industries going forward. If the majority of workers are turned into 1099 consultants, but are doing effectively the same jobs (drivers, delivery people, etc) that employees did in the past, what does that mean for society?
What I want is confidence that somebody providing a service to me is provided these benefits - if you work 40 hours/week in "on demand" jobs, you should receive commensurate coverage from the safety net, and you should receive at least the mandated minimum wage. If you work 10 hours in a week, you should receive the pro-rated equivalents of those services. This is, of course, complicated - how do you account for people working two services at the same time, or the "uber on the couch" issue, or who pays for vehicles and other capital goods. But pretending that existing labor laws will cover the changing workforce is silly.
We hear all the time about how the nature of work, especially service work is changing. It seems like a logical consequence that the nature of how society classifies, supports, and regulates work should also change. Uber, et al, and their VC comrades have a huge opportunity to shape the future of how people work, and how the social safety net works - to effect real disruption.
Based on their actions, however, it is hard to conclude that Uber, et al are actually interested in this discussion, beyond the marketing rhetoric it enables. As far as I can tell, they view the friction between existing laws and their business model as a profit opportunity and not a leadership opportunity. And so the inefficient behemoth of government regulation will inevitably step in.
From what I understand, if you are an Uber driver and you do not accept a call too many times, Uber will simply stop giving you ride requests. This effectively squashes a driver's desire to drive for other networks, because if he/she is busy with another network's ride when an Uber request comes in, he cannot accept it. Do that some unknown number of times, and you don't get more work from Uber.
I live in Sydney Australia and catch a fair few taxis.
The taxi driver I use has to pay many hundreds of thousands of dollars to buy a taxi plate just to work (or work for someone who has bought such a plate), but the same Uber driver does not have such an overhead.
Also, that taxi driver has to pay insurance in case I'm injured while I'm in their cab, another cost the Uber driver does not have to cover.
So the government has to decide: does it want to eliminate those costs and level the playing field, making it an effective free-for-all?
The reason politicians will never do that is that the first crash, with its resulting insurance claim, will bring the industry to its knees, and from that point on all hell will break loose.
At present the politicians just don't want to make a decision because it is just a little too hard.
I'm curious how much this will affect Uber and what it will do to their business model. If I had to speculate, it would be that it becomes unprofitable almost instantly, but they do have a gigantic warchest, so maybe they can fight the ruling or figure out another way to classify their drivers.
Maybe they can advertise fares and jobs ("This person wants to be driven from SFO to Mountain View") and drivers bid on it like an auction. I wonder if that might change the equation? But then it means that drivers will have a lot more friction in the process.
Changing the existing laws is a different issue entirely. There are serious pros and cons on both sides and the right answer is not obvious.
What would be the value of Uber (and related businesses)? Would it stay in business even? How many VC's would lose fortunes over Uber going nearly to 0? Would this be the popping of what some suspect is a private equity bubble as the effects of this ripple throughout?
Regardless, it would be a very different business with a very different valuation.
The phenomenon in nature is for bees to switch hives if theirs is in decline. "Any worker bee that is bringing in food is welcomed." [source: http://www.beemaster.com/forum/index.php?topic=8374.0]
> Reuters' original headline was not accurate. The California Labor Commission's ruling is non-binding and applies to a single driver. Indeed it is contrary to a previous ruling by the same commission, which concluded in 2012 that the driver performed services as an independent contractor, and not as a bona fide employee. Five other states have also come to the same conclusion. It's important to remember that the number one reason drivers choose to use Uber is because they have complete flexibility and control. The majority of them can and do choose to earn their living from multiple sources, including other ride sharing companies.
It's almost as simple as that, since damages were awarded almost entirely on those grounds.
I'll leave it to HN to figure out a guess on mileage =)
Some other interesting notes:
Plaintiff was engaged with Uber from July 23 to Sept 18, less than 2 months (p. 2)
She worked for 470 hours in that time, so quite a bit (p. 6)
Damages broke down as follows: $0.56/mile reimbursement for a total of $3,622, tolls of $256, and interest of $274, for a grand total of $4,152 (p. 10)
Claims for wages, liquidated damages, and penalties for violations were all dismissed (p. 11)
Well, there you have it.
"Reuters original headline was not accurate. The California Labor Commissions ruling is non-binding and applies to a single driver. Indeed it is contrary to a previous ruling by the same commission, which concluded in 2012 that the driver performed services as an independent contractor, and not as a bona fide employee. Five other states have also come to the same conclusion. Its important to remember that the number one reason drivers choose to use Uber is because they have complete flexibility and control. The majority of them can and do choose to earn their living from multiple sources, including other ride sharing companies.'
Is that really better for the drivers? Sounds worse to me.
I ask because many people have been claiming Uber is a bad actor for making drivers contractors, but it's not clear to me that it's a big win for the drivers to be classified as employees. Actually it seems worse in many ways.
If you read some of the driver's reports then it becomes hard to really buy their "big taxi" schtick. That being said, they obviously provided something that people want. Taxi companies will have to adjust to this. (In some places like SF they already are.) In the end, I think that Uber will go the way of Napster and the taxi companies will end up adopting their techniques the way that the big record companies did.
I don't think this ruling will have much of an impact on anything.
The thing that perplexes me is why existing taxi companies, who are licensed and otherwise compliant with the law, don't adopt the best parts of Uber and Lyft.
Why can't I call a black cab in London the way I call a ride from Uber?
From a legal standpoint, riding the edge rarely works. Look at what happened to Aereo.
This is too powerful of a concept to dismantle so easily. Being able to pick and choose when you work and still be able to make decent earnings is very useful to society.
What happens if they lose?
Can other jurisdictions use this finding to change the way Uber operates?
TechCrunch has retracted their original headline, as this ruling only applied to a single driver. Could we get the HN headline updated accordingly to "Uber Driver Deemed Employee by California Labor Commission"?
Uber controls every aspect of the business, from the fares charged (and how much profit Uber will take from each) to the route taken to the conditions of the vehicle to preventing subcontracting. It isn't even close or arguable. As the ruling points out, these people aren't independent drivers with their own businesses that just happen to have engaged in a contract with Uber, nor could Uber's business exist without them.
The short version:
Seriously, what the hell does government have to do with the relationship between me and my source of income. I should be able to do whatever I want, whenever I want to and at whatever rate I choose to work for so long as it isn't illegal in some fundamental way (fraud, theft, murder, burglary, etc.). Beyond that they should stick to painting white and yellow lines on the roads and changing light bulbs on road signs, thank you very much.
It is just incredible to see how our own government looks for every possible angle they can find to destroy progress. I am not defending Uber and their practices. I've never used the service (tried, but it's not available where I am). I am simply using them as an example of a fantastic, innovative company trying to find a better way to do something. Instead of our government helping facilitate the exploration of solutions that could advance society and make life better, simpler, healthier, whatever, they become our own worst enemies.
Who the hell do they think they are? They work for US. We don't work for them. We are not their slaves.
Folks, wake the fuck up. Next election you need to send a solid message to everyone in government that they better truly start working for us or they are gone. The way you do that is to support moderate Libertarian candidates. Moderate is the key here, the extremists on any party are friggin insane.
WE NEED TO REDUCE GOVERNMENT TO A MINIMUM OR THEY ARE GOING TO KILL US OFF.
Look at what is happening here in California. We are going to BURN a hundred billion dollars (likely more) building a joke of a high speed train to nowhere and NOBODY is stopping it. Why? Because you are watching government greasing unions to gain votes and favors. The whole thing is sick beyond recognition.
Seriously though, they give the "ride sharing" economy a bad name.
If this ruling sticks, many of those drivers will no longer have a position.
* People sign up with Uber
* They drive literally whenever they want
* Uber has no standards for their drivers other than "get good ratings" and "pass a background check"
..and they're considered employees? WTF?
There used to be a CEO of Bing (Qi Lu and Nadella had been in that slot), but last year Bing was split up under 5 VPs; there's no longer a real Bing organization.
It's strange. Bing has 20% market search share under its own name, plus 13% under the Yahoo brand. That's 33%. Google has 64%, so Bing is now more than half the size of Google in terms of searches. That's OK market share; it's like being Chrysler vs General Motors.
Bing's profits, though, are awful. Microsoft apparently loses money on Bing. It's hard to tell from the way Microsoft reports online services. Google's advertising revenue is 18 times Microsoft's. That's Bing's problem. Nobody buys Bing ads. It's surprising, with that market share, that Bing can't fix this.
An ex-Microsoft manager went to Nokia, destroyed its market value, sold the debris back to Microsoft, and is now leaving Microsoft again. Time to short whatever company he goes to next. ;)
Kirill failed to cloudify. Qi isn't interested in the Dynamics business. Benioff couldn't get on-boarded. Guthrie is happy to step in.
* Azure can improve with Dynamics. Can the Dynamics business improve with Guthrie?
* Will cloud revenue reporting get more "obfuscated" in quarters to come?
Terry's and Elop's orgs were a) building cohesive/unified experiences and b) fighting conspiring threats to their long-term business solvency. Consolidation chose the most prominent leader.
* Does the bench of Terry's replacements change?
* Could it be the first step towards scaling down hardware and devices?
And perhaps some "ding, dong, the witch is dead" from Redmond over those that are departing?
I won't be surprised if Bing is handed over to Yahoo completely.
The Lua community has found that bytecode is actually slower to load than it is to generate from source: the extra latency of loading the (larger) bytecode from disk/SSD/flash exceeds the CPU time to lex/parse.
export PYTHONDONTWRITEBYTECODE=true is the first thing anyone should be doing.
I guess it figures that we copy each others' mistakes.
Add in the !bang feature for searching most websites (classics like !w - Wikipedia, !g - Google, and stuff like !gh - GitHub, !aur - Arch User Repository) and my favorite "define X" keyword that links straight to Wordnik, and my search experience is better than Google.
The !bangs also function as bookmarks, so if I ever want to go to GitHub, I can just search !gh and it'll take me there. It's like having a set of search engines stored universally, accessible from any device with web access.
And of course if I need Google, say for word etymologies, it's just a !g away.
It's maybe once every couple of days that I have to use "!g" to get Google results; for everything else DDG works excellently. Even the times when I do have to use "!g", it's often a hint that I'm searching for an unpopular phrase, and I find that if I rephrase my query I get much better results out of both search engines.
I remember there was a story on HN a few months back where a kind soul from DDG posted an email address that one could submit notes to, highlighting poor search results so that they could address them. I don't recall the email and haven't been able to find it. If this is still available with DDG, could someone please re-post that email here? I would very much like to help improve the quality of DDG to make it better for everyone, but I can't find anywhere to suggest improvements on their website.
Edit: I was able to find their Feedback page, but I much prefer email personally: https://duckduckgo.com/feedback
My main concern is that it's still a free service and I really don't see how it'll be sustainable in the long term without compromising privacy in their current model. If you're not the customer you're the product etc...
I'd gladly pay $10 a month for a "premium" search engine with strong privacy guarantees. I'm definitely not going to enable ads in DDG, and I can't imagine that the average DuckDuckGo user thinks differently.
For Google results, in the very rare case that I need to see what they have, I use "searchthis !sp" to get anonymous Google results from Startpage. I used SP fulltime previously but they've had reliability issues and an odd issue with the back button.
Most !bangs that I use are for !w, !a and !gm. Firefox's dedicated search bar helps a lot with editing a long search string with another bang. Doubtful I'll be moving off DDG due to that feature.
I know that the interviewer has to ask at least the first part to get the interview going, but it also highlights everything that's wrong with the thinking around online ads/marketing.
If you need to collect user information to make money, then perhaps your product isn't that great to begin with (unless you're an ad company like Google, but then we get into the argument about who the user really is).
And as for collecting information so you can grow into a big brand: I would argue you've thrown trust out the window a long time ago.
Being based in the US is a dealbreaker for privacy.
I feel like "duck duck go" is too long for the avg american to grasp or use in an ongoing convo when compared to "google", "bing" or "yahoo."
I can't see people saying "just duck duck go it." Maybe something like DDG or "duckduck" instead?
Maybe that's just me...
I know they have sort by date, but this just sorts by date without taking into account how relevant the result is.
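What I'd want is something that decays relevance by age instead of sorting on either field alone. A hypothetical sketch of that kind of blended ranking (the Result type, scores, and half-life are all made up for illustration):

    from dataclasses import dataclass

    @dataclass
    class Result:
        title: str
        relevance: float  # engine's relevance score, higher is better
        age_days: float   # how old the document is

    def blended_score(r: Result, half_life_days: float = 30.0) -> float:
        # Halve the relevance for every half_life_days of age, so a
        # fresh-and-relevant result outranks a stale-but-relevant one.
        return r.relevance * 0.5 ** (r.age_days / half_life_days)

    results = [
        Result("old classic", relevance=0.9, age_days=365),
        Result("fresh take", relevance=0.7, age_days=2),
    ]
    results.sort(key=blended_score, reverse=True)  # "fresh take" wins here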
What's the estimated market share of DuckDuckGo today? That's the real question.
Do they dominate a niche? I think they have significant market share among HackerNews users.
I can't watch the video, about which the text reads "The news anchor just can't resist a little jab about DuckDuckGo's location choice".
$1.80 / 1000 queries:
Deciding before each query to use DDG is a hard habit to build. However, if, when shown results from Google, the DDG results were just a click away (maybe already rendered in another "tab"), A/B testing would be easy and seamless, which would be essential to eventually moving away and changing my "default" search engine.
Just my $0.02
Lately, DDG has also been quite slow for me and doesn't load at all for several seconds. Overall, I love the privacy part, but it's not as useful as a search engine ought to be for my usage. So I'm unable to quit the other alternatives, even though I badly want to.
An NSL can order them to hand over the logs in certain regimes (in bulk or per IP? We know what happened), but it cannot force them to write logs in the first place. Without logging it will also be ~10% faster.
And is it just me or is it actually faster when you click on one of the results? Whenever I do that with google it seems like there is a delay while google does some analytics or something.
Every time I've tried to switch to DuckDuckGo, this has been the primary stumbling point.
No less impressive, and so much more informative!
But, I tried again just a couple months ago (I went whole hog and changed the search engine in chrome to ddg) and have been very impressed with it. It's been continuously worked on enough that it now serves about 90-95% of my daily search needs without any fuss and I actually prefer the way it presents images, semantic search and videos in search results over google's. It does a much better job at returning results for what I'm actually searching for and that's awesome.
For example in Google, if I search for "Mad Max" I get showtimes for "Mad Max: Fury Road" at the top and an IMDb-like bit of information for "Mad Max: Fury Road" on the right (neither of which I searched for), and then a list of search results which these days are increasingly just links to Wikipedia's take on whatever I'm searching for (this time "Mad Max" and "Mad Max (franchise)"), followed by news on "Fury Road", the "Fury Road" video game, IMDb links to "Mad Max (1979)" and "Mad Max: Fury Road (2015)", etc., then trailers on YouTube for both movies and links to the movie sites, etc.
It's okay, I suppose, but Google first assumes I'm looking for "Mad Max: Fury Road" and fills the results with that, then I get links to WP and IMDb on the same topic (I could have just gone to those), except the WP link isn't even for Fury Road. And why no love for Thunderdome?
Guess what happens when I search on DDG? I get a list of possible meanings, the first of which is "Mad Max", not Fury Road (that's #2), then a list of other possible meanings (which include Fury Road, the video game, the franchise, etc.). This is awesome: it's not assuming which meaning I want, and thereby getting it wrong like Google, and the list of possible meanings is better ordered. Then the search results are better too. Of course the obligatory IMDb and WP links are there, but the top 4 results are for "Mad Max" (or the franchise) and not for "Fury Road"... I'm actually getting results for what I searched for, not for what it thinks I searched for. The mix of results after that is also "better" to my eyes: it includes a large fan site (which Google doesn't ever seem to get around to), Amazon, eBay, games, non-WP fan wikis, reviews, and so on.
Google seems intent on shoving at me the latest thing the film studios' marketing departments are pushing, while DDG provides links to information on what I actually searched for.
I've found this to be true for most of my searches. DDG is actually finding what I want instead of what Google wants.
About the only times I find myself using Google any more are these two cases:
* I've exhausted DDG's results and want to see if Google's bigger index has something else.
* Google's more sophisticated time constraints on searches. DDG just lets me order results by date, but Google lets me slice out results between time ranges, which I often find more useful for research purposes.
Bonus: Privacy, again not my main interest, but it's nice that it's there. !Bang syntax. I don't use many of them, but I find them useful (it's also how I execute google searches, just put a !g before my search in ddg).
Wishes: time-slicing for search, and some way to make it my default in mobile Chrome on my Android devices.
Been using DDG since almost the beginning, so I'm kind of conflicted...
Helping out is super easy: On Linux or Mac, git clone https://github.com/ArchiveTeam/IA.BAK/ and run the iabak script. It will walk you through setup.
and (more recently) https://news.ycombinator.com/item?id=9602868
* OSX (but not iOS) apps can delete (but not read) arbitrary Keychain entries and create new ones for arbitrary applications. The creator controls the ACL. A malicious app could delete another app's Keychain entry, recreate it with itself added to the ACL, and wait for the victim app to repopulate it.
* A malicious OSX (but not iOS) application can contain helpers registered to the bundle IDs of other applications. The app installer will add those helpers to the ACLs of those other applications (but not to the ACLs of any Apple application).
* A malicious OSX (but not iOS) application can subvert Safari extensions by installing itself and camping out on a Websockets port relied on by the extension.
* A malicious iOS application can register itself as the URL handler for a URL scheme used by another application and intercept its messages.
The headline news would have to be about iOS, because even though OSX does have a sandbox now, it's still not the expectation of anyone serious about security that the platform is airtight against malware. Compared to other things malware can likely do on OSX, these seem pretty benign. The Keychain and BID things are certainly bugs, but I can see why they aren't hair-on-fire priorities.
Unfortunately, the iOS URL thing is I think extraordinarily well-known, because for many years URL schemes were practically the only interesting thing security consultants could assess about iOS apps, so limited were the IPC capabilities on the platform. There are surely plenty of apps that use URLs insecurely in the manner described by this paper, but it's a little unfair to suggest that this is a new platform weakness.
Keychain items have access control lists, where they can whitelist applications, usually only themselves. If my banking app creates a keychain item, malware will not have access. But malware can delete and recreate keychain items, and add both itself and the banking app to the ACL. Next time the banking app needs credentials, it will ask me to reenter them, and then store them in the keychain item created by the malware.
The keychain can be compromised by a malicious app that plants poisoned entries for other apps; when those apps later store data that should be private, it ends up readable by the malicious app.
A malicious app can contain helper apps, and Apple fails to ensure that the helper app has a unique bundle ID, giving it access to another app's sandbox.
WebSockets are unauthenticated. This seems to be by design rather than a bug though, and applications would presumably authenticate clients themselves, or am I missing something?
URL schemes are unauthenticated, again as far as I can tell by design, and not a channel where you'd normally send sensitive data.
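Right, and the usual fix for the WebSockets point is for the native app to authenticate its clients itself, e.g. with a secret shared out-of-band. A hypothetical sketch of the idea (plain TCP instead of real WebSockets for brevity; the LOCAL_HELPER_SECRET variable is invented for illustration):

    import hmac
    import os
    import socketserver

    # Shared out-of-band with the legitimate client (e.g. the extension).
    SECRET = os.environ["LOCAL_HELPER_SECRET"]

    class Handler(socketserver.StreamRequestHandler):
        def handle(self):
            # The first line from a client must be the shared token;
            # anything merely camping on the port gets refused.
            token = self.rfile.readline().strip().decode()
            if not hmac.compare_digest(token, SECRET):
                self.wfile.write(b"DENIED\n")
                return
            self.wfile.write(b"OK\n")
            # ...handle authenticated requests here...

    if __name__ == "__main__":
        socketserver.TCPServer(("127.0.0.1", 8787), Handler).serve_forever()

The catch is that on a machine where malware already runs as the same user, distributing that secret safely is its own problem, which is presumably why the paper still treats the channel as attackable.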
> Since the issues may not be easily fixed, we built a simple program that detects exploit attempts on OS X, helping protect vulnerable apps before the problems can be fully addressed.
I'm wondering if the tool is publicly accessible; I couldn't find any reference to it.
> @constant kSecAttrAccessGroup ... Unless a specific access group is provided as the value of kSecAttrAccessGroup when SecItemAdd is called, new items are created in the application's default access group.
If I understand this correctly, that would ensure that when an existing entry is updated by an app, the 'hack' app would again be restricted from accessing the entry's data. It could still clear the data, but wouldn't be able to read the contents.
The paper seems to note this as well:
> It turns out that all of [the apps] can be easily attacked except todo Cloud and Contacts Sync For Google Gmail, which delete their current keychain items and create new ones before updating their data. Note that this practice (deleting an existing item) is actually discouraged by Apple, which suggests to modify the item instead.
The second defense is harder: changing the way the Keychain API works without breaking every app out there is much more complex. Not knowing how this is implemented, I'd guess it would take a lot of testing to verify a fix without breaking apps.
The last thing they can do is build a verified system tool that checks the existing keychain for incorrect ACL usage. You can't hide the hack from the system, so Apple could fix the ACLs to not give access where it doesn't belong. I think this is fairly easy to do, since it would break very little.
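The delete-and-recreate practice the quoted paper credits those two apps with is simple to express. A minimal sketch of the pattern, using the third-party keyring library as a stand-in (this shows the hardening idea, not the actual Security framework calls those apps make):

    import keyring
    import keyring.errors

    def store_secret(service: str, account: str, secret: str) -> None:
        # Delete any existing item first: if malware pre-created it with
        # a permissive ACL, updating it in place would hand the secret over.
        try:
            keyring.delete_password(service, account)
        except keyring.errors.PasswordDeleteError:
            pass  # no existing item to remove
        # Recreate the item fresh, so it is owned by this app alone.
        keyring.set_password(service, account, secret)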
This is why building security is hard no matter who you are, and everyone gets it wrong sometimes. At least Apple has the ability to readily repair the problem (except for point 2), unlike Samsung, which has to get 800 million phones somehow patched by third parties to fix the keyboard hack.
Is this going to happen in an upcoming stable release? What is it being replaced with?
Edit - I seldom downvote others and the few times I do, I comment as to why I think the post was inappropriate. What is inappropriate about my post?
Few people stop and think about the burden of keeping state and the problems that introduces for password storage. Many even compound the problems by keeping state in the cloud (to solve device-syncing issues). It's worth discussing; there are other ways.
Does anyone know if Apple has done anything towards resolving this in the 6-month window they requested? It's slightly worrying that this has been published without a fix from Apple. I don't download apps on my Mac very often, but I certainly won't now until I know this has been resolved. Annoying.
I hope it's being readied for inclusion in 8.4. We all know how it bruises Apple's ego to have to patch stuff without acting like it's a feature enhancement.
I'm not exactly shocked.
Just for kicks... does anyone remember the "I'm a PC" ads, where Macs were magically "secure" and couldn't get viruses or hacked or anything? Turns out, with market share, they can! Just like Windows. Strange thing, eh?
In high school I chose CS as the category for my IB paper, and chose to write about machine translation. Neural network methods from the '90s were one thing I looked at, and I'm a bit surprised to see neural network translators making a small comeback in the past couple of years.
I wrote some patches for KeepassX to use the Yubikey to derive the encryption key (completely offline) but unfortunately the maintainer has zero interest in merging them.
 https://github.com/keepassx/keepassx/pull/52 and https://news.ycombinator.com/item?id=7801131
Alas, syncing between different machines isn't easy (the counter gets out of sync), and I'm not all that comfortable with keeping the database in my ownCloud.
If anyone has a good suggestion for a cross-platform (Xubuntu, OSX, Android), syncable and FLOSS OATH HOTP password storage solution that doesn't rely on 3rd-party cloud storage, I'm all ears. Not exactly a security expert, but I feel that's the setup I want :) I could fall back to challenge/response, which would fix some issues but be less secure.
[The Yubikey itself is pretty cool though]
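For anyone wondering why the counter gets out of sync: OATH HOTP (RFC 4226) derives each code from a shared secret plus a counter that both sides must keep in step. A minimal Python sketch of the algorithm (illustrative only, not the Yubikey's implementation):

    import hashlib
    import hmac
    import struct

    def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
        # HMAC-SHA1 over the 8-byte big-endian counter (RFC 4226).
        digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        # Dynamic truncation: the low nibble of the last byte picks the offset.
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

Two machines that each advance their own copy of the counter produce different codes (or derived keys) until they're re-synced, which is exactly the pain point above.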
My Yubikey feels like a natural member of my key ring! I love it.
I guess I misunderstood. I thought that once I enabled two-factor auth for LastPass, it'd require that no matter what. Nope, just open the iPad app and no two-factor required.
It being a separate piece of plastic might arguably be another advantage, if we assume that most people are more likely to lose their phone than their keyring.
It's interesting: apparently, YubiKey is Google's initiative and the company itself uses YubiKeys internally.
I hope we're able to demonstrate to their M&A team that moves like this hurt their reputation and make it harder for them to hire & acquire.
If only Etherpad had done so in the first place.
The BGV paper is the real deal.
Evaluating the claim thus requires skepticism and care. The quality of the paper is suspect, as are its proofs. But, as is the case with science, heuristics like this count for little.
They do seem to have selected a hard worst-case problem to base the system on. Obviously this means very little if secure keys and settings in this complexity space cannot be found in practice, or if the details of the cryptosystem, for any of many reasons, lead to its easy compromise.
Looking forward to a proper peer review of the scheme.
Edit: Looks like it's already broken!
I'd like to see someone knowledgeable look at Vilfredo Pareto's 80-20 ratio and do a modern update.
Did anyone do it sort of recently?
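For reference, the classic 80/20 split corresponds to a Pareto (power-law) distribution with tail index alpha of about 1.16: the richest fraction q holds a q^(1 - 1/alpha) share of the total. A quick sketch to check the arithmetic (the top_share helper is just for illustration):

    import math

    def top_share(q: float, alpha: float) -> float:
        # Share of the total held by the richest fraction q,
        # under a Pareto distribution with tail index alpha > 1.
        return q ** (1 - 1 / alpha)

    # Solve top_share(0.2, alpha) == 0.8 for alpha:
    alpha = 1 / (1 - math.log(0.8) / math.log(0.2))
    print(round(alpha, 3))                   # ~1.161
    print(round(top_share(0.2, alpha), 3))   # 0.8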
That kind of disturbing falsehood wasn't corrected in me until I was much older, post-9/11 "freedom fries" even.
So when did that all change? Did the Greatest Generation come back from the war and stop thinking about its former allies? I always looked up to the Statue of Liberty, and I was surprised to hear it was a gift from France (certainly not something I heard the first time I saw it).