hacker news with inline top comments    18 Jun 2015 Best
From Asm.js to WebAssembly brendaneich.com
652 points by fabrice_d  10 hours ago   263 comments top 36
1
sixdimensional 9 hours ago 1 reply      
I think this quote speaks volumes - "WebAssembly has so far been a joint effort among Google, Microsoft, Mozilla, and a few other folks." Sometimes I think maybe, just maybe the W3C and other web standards groups finally have some wind in their sails.

It may have taken a while, but with all these individuals and organizations cooperating in an open space, we may finally advance yet again into another new era of innovation for the web.

I am really excited about this, much like others in these comments.

We have been beating around the bush to have a true assembly/development layer in the browser for a long time: Java applets, Flash, Silverlight, you name it - but no true standard that was open like Javascript is open. This component has the possibility of being the neutral ground that everyone can build on top of.

To the creators (Brendan Eich et al.) & supporters, well done and best of luck in this endeavor. It's already started on the right foot (asm.js was what led the way to this, I think) - let's hope they can keep it cooperative and open as much as possible for the benefit of everyone!

2
AriaMinaei 5 hours ago 12 replies      
Does everyone think this is good news?

I'm all for making the web faster/safer/better and all that. But I am worried about losing the web's "open by design" nature.

Much of what I've learned and am learning comes from me going to websites, opening the inspector and stepping through their code. It's educational. You learn things you may never read about in tutorials or books. And it's great because the author may have never intended for their code to be studied. But whether they like it or not, other people will learn from their code, and perhaps come up with [occasionally] better versions of it on their own.

This has helped web development evolve faster, and it's obvious how democratizing this "open-by-design" property is, and I think we should be concerned that it's being traded away for another (also essential) property.

Human beings cannot read asm.js code. And a bytecode format will be more or less the same. So, no matter how much faster and more flexible this format/standard is, it will still turn web apps into black boxes that no one can look into and learn from.
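For reference, this is roughly what the asm.js style looks like; the tiny module below is hand-written for illustration (real compiler output is the same idiom at enormous scale, which is the unreadability being described):

```javascript
// A miniature module in the asm.js style. Types are encoded as coercion
// annotations (| 0 for int32), not declarations, so every expression
// carries machine-oriented noise a human has to read past.
function AsmModule(stdlib, foreign, heap) {
  "use asm";
  function add(a, b) {
    a = a | 0;              // parameter "types" are coercions
    b = b | 0;
    return (a + b) | 0;     // result annotated the same way
  }
  return { add: add };
}

var mod = AsmModule({}, {}, new ArrayBuffer(0x10000));
console.log(mod.add(2, 3)); // 5
```

It still runs as plain JavaScript, which is the point of asm.js, but a megabyte of compiled output in this form is effectively opaque.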

3
pcwalton 9 hours ago 3 replies      
Having been on one side of the perpetual (and tiresome) PNaCl-versus-asm.js debate, I'm thrilled to see a resolution. I really think this is a strategy that combines the best of both worlds. The crucial aspect is that this is polyfillable via JIT compilation to asm.js, so it's still just JavaScript, but it has plenty of room for extensibility to support threads, SIMD, and so forth.
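The polyfill strategy can be sketched as below; note that in mid-2015 neither the final `WebAssembly` JS API nor the prototype polyfill's interface was settled, so both the detection branch and the `decodeToAsmJS` decoder are assumptions:

```javascript
// Hypothetical sketch of the "polyfillable" strategy: use native support
// where present, otherwise decode the binary back into asm.js source and
// compile it as ordinary JavaScript.
function decodeToAsmJS(bytes) {
  // Stand-in for the prototype polyfill's decoder (not a real API).
  throw new Error("decoder not implemented in this sketch");
}

function loadModule(bytes) {
  if (typeof WebAssembly !== "undefined") {
    // Native path: compile and instantiate the binary directly.
    return new WebAssembly.Instance(new WebAssembly.Module(bytes));
  }
  // Fallback path: JIT-translate to asm.js, then compile as plain JS.
  return new Function("stdlib", "foreign", "heap", decodeToAsmJS(bytes))();
}

// The 8-byte header below ("\0asm" + version 1) is the smallest valid
// WebAssembly module; it exports nothing.
var inst = loadModule(new Uint8Array([0x00, 0x61, 0x73, 0x6d, 1, 0, 0, 0]));
console.log(typeof inst.exports); // "object"
```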
4
wora 8 hours ago 1 reply      
The Oberon language had a similar system called Juice back in 1997. It does exactly the same thing, e.g. using a binary format to store a compressed abstract syntax tree as an intermediate format which can be compiled efficiently and quickly. I think it even had a browser plugin, much like a Java applet. Life has interesting cycles. I don't have the best link to Juice.

[1] https://github.com/berkus/Juice/blob/master/intro.htm
[2] ftp://ftp.cis.upenn.edu/pub/cis700/public_html/papers/Franz97b.pdf

5
dankohn1 8 hours ago 3 replies      
This is enormous news. I could see a scenario where, in ~5 years, WebAssembly could provide an alternative to having to develop apps with HTML for the web, Swift for iOS, and Java for Android. Instead, you could build browser-based apps that actually delivered native performance, even for CPU- and GPU-intensive tasks.

Of course, there would still be UI differences required between the 3 platforms, but you would no longer need 3 separate development teams.

6
daurnimator 2 hours ago 1 reply      
This still doesn't fix the biggest issue with running non-javascript code in the browser: browsers still offer no way to know when a value is collected.

e.g. if I allocate a callback function, and hand it to setTimeout, I have no way to know when to collect it.

Sure, you can encode rules about some of the common functions; but as soon as you get to e.g. attaching an 'onreadystatechange' to an XHR: you can't follow all the different code paths.

Every time a proposal comes up to fix this:

 - GC callbacks
 - Weak valued maps
 - Proxy with collection trap

The proposal gets squashed.

Unless this is attended to, JavaScript remains the required language on the web.
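The complaint can be made concrete with a small sketch (the module-side "heap" below is my stand-in for a compiled language's own memory management, not any real API):

```javascript
// A language compiled to the web keeps its closures on its own heap and
// hands JS wrapper functions to browser APIs. Nothing ever tells it when
// a wrapper becomes unreachable, so entries leak.
var heap = new Map();   // the guest language's own allocation table
var nextId = 0;

function allocCallback(fn) {
  var id = nextId++;
  heap.set(id, fn);
  return function () { return heap.get(id)(); };  // wrapper handed to JS
}

var cb = allocCallback(function () { console.log("fired"); });
setTimeout(cb, 0);
// After the timer fires, `cb` is garbage to the JS engine, but `heap`
// still holds entry 0 forever: with no GC callback, weak-valued map, or
// collection trap, the guest language never learns it can free it.
console.log(heap.size); // 1
```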

7
amyjess 8 hours ago 2 replies      
This is probably the best thing that can happen to web development.

For quite a while, I've been thinking about how instead of hacks like asm.js, we should be pushing an actual "Web IR" which would actually be designed from the ground up as an IR language. Something similar to PNaCl (a subset of LLVM IR), except divorced from the Chrome sandbox, really.

8
addisonj 8 hours ago 4 replies      
Prepare for the onslaught of new (and old) languages targeting the web.

While this is welcome news, I am also torn. The possibilities are pretty amazing. Think seamless isomorphic apps in any language that can target WebAssembly and has a virtual dom implementation.

However, it finally seems like JS is getting some of its core problems solved and is getting usable. I wonder if it might have short term productivity loss as the churn ramps up to new levels with a million different choices of language/platform.

Either way, it will be an interesting time... and a time to keep up or risk being left behind.

9
spullara 4 hours ago 1 reply      
It is really too bad that at some point in the last 18 years of Java VMs being in browsers they didn't formalize the connection between the DOM and Java, so that you could write code that interacted directly with the DOM and vice versa in a mature VM that was already included. Would have been way better than applets, way faster than Javascript and relatively easy to implement. The browsers actually have (had?) APIs for this but they were never really stabilized.
10
aikah 8 hours ago 0 replies      
So it's basically bytecode for the web without compiling to javascript right ?

Any language can now target that specific bytecode without the need for javascript transpilation.

For instance, Flash can target this format in place of the Flash player, making swf files future-proof since they'd be backed by standard web tech.

So it's basically the return of Flash, Java applets and co on the web. And web developers won't have to use Javascript anymore.

The only constraint is obviously the fact that the bytecode only has access to web APIs and can't talk directly to the OS like with classic browser plugin architectures.

11
comex 5 hours ago 1 reply      
Has any consideration been given to using a subset or derivation of LLVM bitcode a la PNaCl? I know there are significant potential downsides (e.g. according to [1], it's not actually very space efficient despite/because of being bit-oriented and having fancy abbreviation features), but it already has a canonical text encoding, it has been 'battle-tested' and has semantics by definition well suited for compilers, and using it as a base would generally avoid reinventing the wheel.

[1] https://aaltodoc.aalto.fi/handle/123456789/13468

12
kodablah 9 hours ago 2 replies      
I think the biggest win is https://github.com/WebAssembly/design/blob/master/FutureFeat.... Now instead of asm.js being only for emscripten-compiled code (or other non-GC code) WebAssembly can be used for higher level, GC'd languages. And even better, https://github.com/WebAssembly/design/blob/master/NonWeb.md, means we may get a new, generic, standalone VM out of this which is always good (I hope I'm not reading into the details too much). As someone who likes to write compilers/transpilers, I look forward to targeting this.
13
M8 9 hours ago 4 replies      
If I could just use my favourite language and not feel like a second class citizen, then I am not sure there would be anything else to complain about as a developer, really. A mark-up bytecode so that we could forget about the nightmare of HTML and CSS as well?
14
haberman 5 hours ago 0 replies      
Very very happy to see this.

Politically it appears to be a fantastic collaboration.

Technically it looks like they have really thought this through -- if you look through the post-MVP plans (https://github.com/WebAssembly/design/blob/master/FutureFeat...) there are a lot of exciting ideas there. But it's not just pie-in-the-sky speculation; the amount of detail makes it clear that they have some really top compiler people who are rigorously exploring the boundaries of what can be accomplished inside the web platform (SIMD, threading, GC integration, tail calls, multiprocess support, etc).

15
McElroy 6 hours ago 0 replies      
https://github.com/WebAssembly/design/blob/master/FAQ.md#wil... makes me happy, as that was the first concern I had when reading this news :)
16
JoshTriplett 9 hours ago 1 reply      
I'm interested to see what the API side of WebAssembly looks like in browsers; hopefully this will make it easier to expose more full-featured sandboxed APIs to languages targeting the web, without having to tailor those APIs to JavaScript. For instance, API calls that actually accept integers and data structures rather than floating-point and objects.

For that matter, though in the short-term this will be polyfilled via JavaScript in browsers, it'll be fun to see the first JavaScript-to-WebAssembly compiler that allows you to use the latest ECMAScript features in every browser.

17
AndrewDucker 8 hours ago 1 reply      
Interestingly, about five years ago, he said he couldn't see this ever happening: https://news.ycombinator.com/item?id=1905291
18
Murkin 8 hours ago 5 replies      
Can someone explain why not just go the JVM or .NET CLR path ?

Both well tested, well executed, great tooling, supported on many platforms, compilation targets of many existing languages.

Serious question.. is it licensing ?

19
bhouston 9 hours ago 2 replies      
I guess this is in the spirit of NaCl and its bytecode, and the Java VM/Java bytecode, and the .NET runtime/.NET IR. It makes a lot of sense, and I get that it then sort of gets competitive with those efforts as well.
20
rhaps0dy 9 hours ago 0 replies      
Finally. IMO this is what the web has been calling for since AJAX went mainstream.

They are doing great work. The client's operating system matters little now, but it will not matter at all soon.

21
mhd 9 hours ago 0 replies      
Combined with something like e.g. Flipboard's react-canvas, this means we could bypass and re-implement most of the browser stack...
22
ncw33 8 hours ago 1 reply      
Nice, but I'm still waiting for 64-bit integer arithmetic!

For our use case, what I like about this is that we can continue to use emscripten and the technology will come to us, rather than requiring app developers to invest in yet another technology (our switchover from NaCl to emscripten was very time consuming!)
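The 64-bit integer gap is easy to demonstrate (the example below is mine, not the commenter's): JavaScript numbers are IEEE-754 doubles, so exact integer arithmetic stops at 2^53.

```javascript
// Doubles have a 53-bit mantissa, so integer arithmetic silently loses
// precision past 2^53 -- the reason asm.js topped out at 32-bit ints
// and int64 needed first-class support in WebAssembly.
var limit = Math.pow(2, 53);        // 9007199254740992
console.log(limit + 1 === limit);   // true: 2^53 + 1 is unrepresentable
console.log(limit + 2 === limit);   // false: 2^53 + 2 is an exact double
```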

23
thomasfoster96 9 hours ago 2 replies      
This is pretty awesome, and is a pretty good use of all the effort that's been going into asm.js

One question though - I found a proposal somewhere on a Mozilla-run wiki about a web API for registering script transpiling/interpreter engines. I've lost the web address, but if anyone knows any more about this I'd love to see it rekindled.

24
moron4hire 8 hours ago 3 replies      
So for now, the idea is to write C++, compile it to ASM.js, translate it into WebAssembly, GZIP it, transmit it, unGZIP it, then run a polyfill to translate the WebAssembly into ASM.js?

This sounds absurd. I can't even get through getting Clang, LLVM, and Emscripten built from source as it is, it's such a house-of-cards with configuration and dependency settings. Have any of you tried building Chromium from scratch? I have, on three separate occasions, as I'd like to try to contribute to WebVR. End result: fewer gigs of free space on my hard drive and no Chromium from source.

Part of that is my impatience: I'm used to C# and Java, where dependencies are always dynamically linked, the namespacing keeps everything from colliding, and the semantics are very easy to follow. But even Node's braindead NPM dependency manager would be better than the hoops they make you jump through to build open-source C++ projects. I mean, I just don't get how someone could have at any point said "yes, this is a good path, we should continue with this" for all these custom build systems in the wild on these projects.

I could be way off. I'm only just reading the FAQ now and I'm not entirely sure I understand what has actually been made versus what has been planned. There seems to be a lot of talk about supporting other languages than C++, but that's what they said about ASM.js, and where did that go? Is anyone using ASM.js in production who is not Archive.org and their arcade emulator?

I don't know... I really, really want to like the web browser as a platform. It has its flaws, but it's the least painful solution of all of the completely-cross-platform options. But it's hard. Getting harder. Hard enough I'm starting to wonder if it'd be smarter to develop better deployment strategies for an existing, better programming language than to try to develop better programming languages for the browser's existing, better deployment strategy.

This telephone game being played by translator tools and configuration management tools and polyfills and frameworks and... the list goes on! This thing we consider "modern" web development is getting way out of hand. JS's strength used to be that all you needed was a text editor. Everyone--both users and developers--can already use it and run it.

If it's just one tool, I'll get over it. But stringing these rickety, half-implemented tools together into a chain of codependent systems is unacceptable. It just feels like they're foisting their inability to finish and properly deploy their work on us. Vagrant recipes are nice, but they should be a convenience, not a necessity.

Sorry. Good for them. Just finish something already.

25
vmorgulis 4 hours ago 0 replies      
This is awesome.

We will probably need a package manager after that (like apt or npm).

A use case could be with ImageMagick, OpenCV, OpenSceneGraph or qemu inside the browser. All of them are huge and useful projects with many common dependencies.

26
amelius 5 hours ago 0 replies      
Without support for proper threads, web assembly programming feels the same as programming a Z80 or 6502 back in the 80s.

And no, webworkers don't cut it, because they don't support structural sharing of immutable data structures in an efficient and flexible way.
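The structural-sharing point can be illustrated with `structuredClone`, which runs the same algorithm `postMessage` uses at a worker boundary (the global `structuredClone` function itself is a later addition to the platform than this comment):

```javascript
// Sending data to a worker via postMessage runs the structured clone
// algorithm: the worker receives a deep copy, not a shared reference,
// so a large immutable structure is duplicated once per worker.
var data = { tree: { left: [1, 2], right: [3, 4] } };
var copy = structuredClone(data);   // what crossing a worker boundary does

console.log(copy.tree === data.tree);             // false: deep copy
console.log(copy.tree.left.join(",") === "1,2");  // true: same contents
```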

27
jacquesm 5 hours ago 0 replies      
So, is this the long way around to get us Java Applets all over again?
28
protomyth 5 hours ago 0 replies      
Didn't WMLScript (a subset of Javascript used for WML) have a required byte code representation?
29
lorddoig 4 hours ago 1 reply      
Praise the lord, that was sooner than I expected. Next up: the DOM. Then there will be peace on Earth.

Does anyone know when all this started? I ask because only 83 days ago Brendan was on here telling us pretty emphatically that this was a bad idea and would never happen.

30
McElroy 6 hours ago 0 replies      
This page makes Firefox on Android crash.
31
garfij 7 hours ago 1 reply      
I'm curious what the debugging story for this is going to be. Source maps?
32
leoc 3 hours ago 0 replies      
w00t w00t. This is pretty great overall.
33
jewel 9 hours ago 1 reply      
I hope that someone ports mruby to this. I've come to terms with javascript's syntax (via coffeescript), but I'd still rather not deal with javascript's semantics.
34
andybak 8 hours ago 0 replies      
Isomorphic Python here I come...
35
rockdoe 9 hours ago 2 replies      
So this is like PNaCl but targeting the web API and by making it collaborative, hopefully a real standard allowing independent reimplementation?

Ironic that Eich is the one to pull the trigger on JS.

36
joosters 2 hours ago 0 replies      
To an end user, how is this a different experience from flash? You browse to a website and must execute binary blobs in order to view the site.

Even worse, it's like Flash but where the flash 'plugin' has been written from scratch by each web browser, giving us endless possibilities of incompatibilities which are a nightmare to fix.

Uber Drivers Deemed Employees by California Labor Commission techcrunch.com
538 points by uptown  11 hours ago   556 comments top 58
1
beering 11 hours ago 14 replies      
Skimming through the doc, court findings are:

1) Drivers providing their own cars is not a strong factor - pizza delivery employees also drive their own cars.

2) Uber "control the tools that drivers use" by regulating the newness of the car.

3) Uber exercises extensive control over vetting and hiring drivers and requires extensive personal information from drivers.

4) Uber alone sets prices, and tipping is discouraged, so there is no mechanism for driver (as "contractor") to set prices.

5) Plaintiff driver only provided her time and car. "Plaintiff's work did not entail any 'managerial' skills that could affect profit or loss."

6) Drivers cannot subcontract (presumably negating Uber's position as a "lead generation" tool for contractors).

Sorry that these are out of order. Look on Page 9 of court documents for full text.

2
grellas 9 hours ago 8 replies      
A few thoughts:

1. This is an appeal from a decision by a hearing officer of the California Labor Commissioner. Most of the time such officers spend their days hearing things such as minimum wage claims. Hearings do not follow the strict rules of evidence and are literally recorded on the modern equivalent of what used to be a tape cassette instead of by a court reporter. Such hearings might run a few hours or, in a more complex case, possibly a full day as the normative max. The quality of the hearing officers themselves is highly variable: some are very good, others are much, much less than good in terms of legal and analytical strengths. In a worst case, you get nothing more than a pro-employee hack. The very purpose of the forum is to help protect the rights of employees and the bias is heavily tilted in that direction. That does not mean it is not an honest forum. It is. But anything that comes from the Labor Commissioner's office has to be taken with a large grain of salt when considering its potential value as precedent. Hearing officers tend to see themselves as those who have a duty to be diligent in protecting rights of employees. Whether what they decide will ever hold up in court is another question altogether.

2. Normally the rules are tilted against employers procedurally as well. When an employer appeals a Labor Commissioner ruling and loses, the employer gets stuck paying the attorneys' fees of the prevailing claimant on the appeal. This discourages many employers from going to superior court with an appeal because the risk of paying attorneys' fees often is too much when all that is at stake is some minimum wage claim. With a company like Uber, though, the attorney fee risk is trivial and all that counts is the precedential value of any final decision. It will therefore be motivated to push it to the limit.

3. And that is where the forum matters a lot. The binding effect of the current Labor Commissioner ruling in the court is nil. The same is true of any evidentiary findings. The case is simply heard de novo - that is, as if the prior proceedings did not even occur. Of course, a court may consider what the hearing officer concluded in a factual sense and how the officer reasoned in a legal sense. But the court can equally disregard all this. This means that the value of the current ruling will only be as good as its innate strength or weakness. If the reasoning and factual findings are compelling, this may well influence a court. Otherwise, it will have no effect whatever or at most a negligible one.

4. What all this means is that this ruling has basically symbolic importance only, representing what state regulators might want as an idealized outcome. Its potential to shape or influence what might ultimately happen in court is, in my view, basically negligible.

5. This doesn't mean that Uber doesn't have a huge battle on its hands, both here and elsewhere. It just means that this ruling sheds little or no light on how it will fare in that battle. You can't predict the outcome of a criminal trial by asking the prosecutor what he thinks. In the same way, you can't predict the outcome here by asking what the Labor Commissioner thinks. In effect, you are getting one side of the case only.

6. The contractor/employee distinction is highly nebulous but turns in the end on whether the purported contractor is actually bearing true entrepreneurial risk in being, supposedly, "in business." There are a number of factors here that do seem to support the idea of true entrepreneurial risk but that just means there are two sides to the argument, not that Uber has the better case.

7. In the end, this will be decided in superior court and then, likely, on appeal to the California courts of appeal beyond that. It will take years to determine. In the meantime, the Uber juggernaut will continue to roll on. So the real question will be: should we as a society welcome disruptive changes that upset our old models or should we use the old regulations to stymie them? Courts are not immune from such considerations and, as I see it, they will apply the legal standards in a way that takes the public policy strongly into account. It will be fascinating to see which way it goes.

3
tomasien 11 hours ago 11 replies      
I'm curious to read this argument. For all the hand wringing over Uber drivers as 1099 workers, they seem to be the very definition of contractors. They provide their own equipment, keep their own hours, NEVER have to work if they don't want to, with 0 consequences for working or not working specific hours, etc. What is it about them that makes them employee-like? Anyone know?

Edit: it appears that the critical factor they considered was whether or not the driver could have operated their business independently of Uber. They said they could not. They also cited the fact that Uber controls the way payments are collected and other aspects of operations as critical to showing employment. http://www.scribd.com/doc/268946016/Uber-v-Berwick

4
Dwolb 11 hours ago 6 replies      
This is a good ruling for workers, but maintains society's status quo. That is, Uber has realized significant margin gains by pushing all risk of carrying passengers and car maintenance onto its drivers. Therefore, this risk is transferred to either drivers (who are on average not equipped to handle this risk) or insurance companies (who pass the costs on to their entire insurance pool) and not borne directly by Uber nor its customer base. By classifying drivers as employees, risk becomes better aligned.

Now, what society is really missing out on is an opportunity or reason to transition from employer-based benefits to government or society-based benefits. This ruling will postpone a public discussion on the role of employer-based insurance and benefits.

5
nugget 11 hours ago 5 replies      
I wonder -- if Uber converted drivers in California to employees and dealt with the increased costs (passing them on to riders) but also prevented the now-employed drivers from driving with competing services (Lyft) -- whether the company wouldn't actually become even more valuable than they already are. If you are driving for both services but Uber comprises 80% of your volume and Lyft 20%, it's an easy decision to make. Given that the real asset for all these sharing economy companies is their elastic work forces (drivers and cars for Uber, residents and homes for Airbnb), the CLC may have just created an entrenched monopoly without realizing it.

Beyond that there is a really interesting debate as to whether sharing economy jobs are an end-run around minimum wage laws, rendering such laws meaningless for certain industries going forward. If the majority of workers are turned into 1099 consultants, but are doing effectively the same jobs (drivers, delivery people, etc) that employees did in the past, what does that mean for society?

6
dotBen 8 hours ago 0 replies      
Just to point out -- the California Labor Commission's ruling is non-binding and applies to a single driver; it's not a class action and doesn't apply to anyone else. Reports of the demise of Uber due to 'all partner drivers now being employees' are grossly exaggerated. Uber is also appealing. [disclosure: I work for Uber]

(see http://newsroom.uber.com/2015/06/clcstatement/)

7
gmisra 7 hours ago 1 reply      
The right answer is that the "on demand economy" does not fit into existing labor structures, and trying to shoehorn these new jobs into current legal frameworks is probably doomed to confusion. This is especially complicated because, in the United States, too much of the social safety net is explicitly tied to employer-employee relationships (workers comp, unemployment, healthcare, etc).

What I want is confidence that somebody providing a service to me is provided these benefits - if you work 40 hours/week in "on demand" jobs, you should receive commensurate coverage from the safety net, and you should receive at least the mandated minimum wage. If you work 10 hours in a week, you should receive the pro-rated equivalents of those services. This is, of course, complicated - how do you account for people working two services at the same time, or the "uber on the couch" issue, or who pays for vehicles and other capital goods. But pretending that existing labor laws will cover the changing workforce is silly.

We hear all the time about how the nature of work, especially service work is changing. It seems like a logical consequence that the nature of how society classifies, supports, and regulates work should also change. Uber, et al, and their VC comrades have a huge opportunity to shape the future of how people work, and how the social safety net works - to effect real disruption.

Based on their actions, however, it is hard to conclude that Uber, et al are actually interested in this discussion, beyond the marketing rhetoric it enables. As far as I can tell, they view the friction between existing laws and their business model as a profit opportunity and not a leadership opportunity. And so the inefficient behemoth of government regulation will inevitably step in.

8
codecamper 6 hours ago 0 replies      
I'm surprised that Uber discouraging their drivers from driving for other providers was not called out.

From what I understand, if you are an Uber driver and you do not accept a call too many times, Uber will simply stop giving you ride requests. This effectively squashes a driver's desire to drive for other networks because if he/she is busy with another network's ride when an Uber request comes in, he cannot accept it. Do that some unknown number of times, and you don't get more work from Uber.

9
jussij 9 hours ago 2 replies      
The only problem I have with Uber is they get away with not having to compete on a level playing field.

I live in Sydney Australia and catch a fair few taxis.

That taxi driver I use has to pay many hundreds of thousands of dollars to buy a taxi plate just to work (or work for someone who has bought such a plate), but the same Uber driver does not have such an overhead.

Also, that taxi has to pay insurance in case I'm injured while I'm in their cab, another cost the Uber driver does not have to cover with an insurance policy.

So the government has to decide whether it wants to eliminate those costs and make it a level playing field, effectively a free-for-all.

But the reason politicians will never do that is because the first crash with the resulting insurance claim will bring the industry to its knees, and from that point on all hell will break loose.

At present the politicians just don't want to make a decision because it is just a little too hard.

10
steven2012 10 hours ago 1 reply      
The ruling is not unexpected at all. After the ruling against Microsoft back in the 90s on contracting, it's pretty clear that a business needs to be very careful how they hire contractors, so that they don't become implicit employees. Google has to jump through hoops so that their contractors aren't considered employees (only work for 1 yr max, etc).

I'm curious how much this will affect Uber and what it will do to their business model. If I had to speculate, it would be that it becomes unprofitable almost instantly, but they do have a gigantic warchest, so maybe they can fight the ruling or figure out another way to classify their drivers.

Maybe they can advertise fares and jobs ("This person wants to be driven from SFO to Mountain View") and drivers bid on it like an auction. I wonder if that might change the equation? But then it means that drivers will have a lot more friction in the process.

11
joshjkim 2 hours ago 1 reply      
You can estimate how much this will cost Uber in CA as follows: $0.56 multiplied by total miles driven by UberX drivers over all time.

It's almost as simple as that, since damages were given out almost entirely on those grounds.

I'll leave it to HN to figure out a guess on mileage =)

Some other interesting notes:

Plaintiff was engaged with Uber from July 23 to Sept 18, less than 2 months (p 2)

She worked for 470 hours in that time, so quite a bit (p. 6)

Damages broken down as follows: $0.56/mile reimbursement, for a total of $3,622, tolls for $256, interest of $274, for a total of $4,152 (p10)

Claims for wages, liquidated damages and penalties for violations were all dismissed (p11)
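The figures quoted above can be quickly sanity-checked (the implied mileage is my own back-calculation, not stated in the ruling as quoted):

```javascript
// Check that reimbursement + tolls + interest equals the quoted $4,152
// total, and back out the mileage implied by the $0.56/mile rate.
var reimbursement = 3622, tolls = 256, interest = 274;
var total = reimbursement + tolls + interest;
var miles = Math.round(reimbursement / 0.56);

console.log(total);  // 4152, matching the ruling's total
console.log(miles);  // about 6468 miles in under two months
```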

12
a-dub 9 hours ago 0 replies      
To be clear, this is not new regulation. This is a hearing that weighed the facts against the current set of laws as they are written. Under those laws, they're pretty clearly employees.

Changing the existing laws is a different issue entirely. There are serious pros and cons on both sides and the right answer is not obvious.

13
paulsutter 2 hours ago 0 replies      
If this sticks, it just means that Uber drivers will get paid less in cash, more in benefits, and lose the ability to take business tax deductions.

Is that really better for the drivers? Sounds worse to me.

I ask because many people have been claiming Uber is a bad actor for making drivers contractors, but it's not clear to me that it's a big win for the drivers to be classified as employees. Actually it seems worse in many ways.

14
mikeryan 11 hours ago 3 replies      
I think Uber can maybe weather this storm but I wonder how this will trickle down to the smaller personal service players like TaskRabbit/Caviar/Luxe etc who employ independent contractors.
15
nemo44x 10 hours ago 2 replies      
Just throwing a hypothetical out there. What if this sticks and the drivers decide to organize with a union and collectively bargain a living wage, benefits, etc?

What would be the value of Uber (and related businesses)? Would it stay in business even? How many VC's would lose fortunes over Uber going nearly to 0? Would this be the popping of what some suspect is a private equity bubble as the effects of this ripple throughout?

Regardless, it would be a very different business with a very different valuation.

16
bdcravens 10 hours ago 1 reply      
Won't this mean that every single "employee" covered by this will have to file amended tax returns?
17
kposehn 7 hours ago 0 replies      
From Uber:

> Reuters' original headline was not accurate. The California Labor Commission's ruling is non-binding and applies to a single driver. Indeed it is contrary to a previous ruling by the same commission, which concluded in 2012 that the driver performed services as an independent contractor, and not as a bona fide employee. Five other states have also come to the same conclusion. It's important to remember that the number one reason drivers choose to use Uber is because they have complete flexibility and control. The majority of them can and do choose to earn their living from multiple sources, including other ride sharing companies.

18
shawnee_ 8 hours ago 0 replies      
Beekeeping analogy: both Uber and Lyft are hives. The California Labor Commission's ruling does more to preserve hives in general (and thus the well-being of bees (drivers) as a whole), rather than any specific hive. Yeah, it's making things a little harder on one specific hive right now, but maybe this just means more hives will be popping up. It's the right call.

The phenomenon in nature is for bees to switch hives if theirs is in demise. "Any worker bee that is bringing in food is welcomed." [source: http://www.beemaster.com/forum/index.php?topic=8374.0]

19
encoderer 10 hours ago 1 reply      
Drivers are just temporary anyway. Uber is going to be the company to beat when autonomous cars make Autos as a Service a huge business. I see that, and not some low-margin package delivery service, as the driver of their future growth.
20
chx 8 hours ago 0 replies      
You all downvoted my comment three weeks ago: Uber is constantly trying to run from the law but eventually the law will catch up with them and finish this farce. Good.

Well, there you have it.

21
sudioStudio64 5 hours ago 0 replies      
The regulations on taxi drivers exist for a reason. Uber found a way to skirt some of those regulations for a time. Avoiding that regulation created a revenue stream that they used to operate and grow. It was always in danger of regulators catching up to them.

If you read some of the drivers' reports, it becomes hard to really buy their "big taxi" schtick. That being said, they obviously provided something that people want. Taxi companies will have to adjust to this. (In some places like SF they already are.) In the end, I think that Uber will go the way of Napster and the taxi companies will end up adopting their techniques the way that the big record companies did.

22
beatpanda 2 hours ago 0 replies      
I hope this spells the end of labor exploitation as an "innovation" strategy by "technology" companies. It's getting sickening.
23
randomname2 8 hours ago 0 replies      
Reuters and Techcrunch may have jumped the gun here. This ruling only applies to a single driver. Reuters has updated their headline accordingly now: http://www.reuters.com/article/2015/06/17/us-uber-california...

Uber's response:

"Reuters original headline was not accurate. The California Labor Commissions ruling is non-binding and applies to a single driver. Indeed it is contrary to a previous ruling by the same commission, which concluded in 2012 that the driver performed services as an independent contractor, and not as a bona fide employee. Five other states have also come to the same conclusion. Its important to remember that the number one reason drivers choose to use Uber is because they have complete flexibility and control. The majority of them can and do choose to earn their living from multiple sources, including other ride sharing companies.'

- Uber spokeswoman

24
codegeek 11 hours ago 11 replies      
Isn't Uber's concept similar to Airbnb? Does this mean Airbnb users are also at risk of being classified as employees of Airbnb? With Uber, you drive your own car; with Airbnb, you rent out your own apartment.
25
jacquesm 7 hours ago 0 replies      
This is going to put a knife into a lot of 'modern' start-ups. The theme where the company sets the prices, controls the payments and so on and where the people doing the work are contractors without employee protection or benefits applies to many of the business models of the new middle men.
26
pbreit 7 hours ago 0 replies      
I tend to agree with Uber here: http://newsroom.uber.com/2015/06/clcstatement/

I don't think this ruling will have much of an impact on anything.

27
mayneack 11 hours ago 2 replies      
This makes all the R&D into driverless cars worth it. Driverless cars can't be employees.
28
Spearchucker 9 hours ago 10 replies      
Uber and Lyft are operating a model that clearly works for customers. And yet they rightly face legal issues.

The thing that perplexes me is why existing taxi companies, which are licensed and otherwise compliant with the law, don't adopt the best parts of Uber and Lyft.

Why can't I call a black cab in London the way I call a ride from Uber?

29
jleyank 3 hours ago 0 replies      
Stupid question, perhaps, but IANAL... How does the Uber situation differ from the contractor situation Microsoft dealt with 5-10 years ago? If there's no significant difference, isn't this all settled case law?
30
anigbrowl 4 hours ago 0 replies      
One of the downsides of so many tech companies being privately held due to the unfashionability of IPOs (per the Andreessen Horowitz slide deck the other day) is that we lose out on the price signals that a stock market listing would normally provide in response to something like this.
31
DannyBee 10 hours ago 2 replies      
This should be 100% not shocking to anyone (including Uber, i expect). Given their recent executive hires, i'm sure they saw this coming, and already have an appeal strategy.

From a legal standpoint, riding the edge rarely works. Look at what happened to Aereo.

32
smiksarky 9 hours ago 0 replies      
Control is the key factor when determining a 1099 vs W2 employee. FedEx has been doing this for many years...which is why they just got fined a shit-ton. I think Uber drivers will just unionize within the near future causing all of their future driving 'employees' to get paid more - thus cutting profits - resulting in either a new business model which is again bending the new rules, or back to the way things were in the good ol' yellow cab.
33
todd3834 11 hours ago 0 replies      
I was asking about this a little while back and no one chimed in: https://news.ycombinator.com/item?id=9551467
34
gregoryw 5 hours ago 0 replies      
Anyone who has hired contractors knows where the line is. The damning part is that the drivers have Uber-provided iPhones in their cars. You have to provide your own equipment to be a contractor.
35
yueq 9 hours ago 2 replies      
What are the impacts to Uber if drivers are employees?
36
pbreit 8 hours ago 0 replies      
I don't think this holds. Either Uber makes slight changes that comply, some other legal body re-evaluates or laws are changed slightly to accommodate.

This is too powerful of a concept to dismantle so easily. Being able to pick and choose when you work and still be able to make decent earnings is very useful to society.

37
randomname2 6 hours ago 0 replies      
HN mods:

Techcrunch have retracted their original headline as this ruling only applied to a single driver, could we get the HN headline updated accordingly to "Uber Driver Deemed Employee By California Labor Commission"?

38
marcusgarvey 8 hours ago 0 replies      
How long will Uber's appeal take?

What happens if they lose?

Can other jurisdictions use this finding to change the way Uber operates?

39
bhouston 11 hours ago 5 replies      
What is the expected cost to Uber of this change?
40
reviseddamage 9 hours ago 1 reply      
Despite this, Uber will still take over the taxi industry, albeit perhaps a bit more slowly. The quality control Uber exercises over every component of its service will keep attracting the market and winning it share.
41
c-slice 10 hours ago 0 replies      
Who is Rasier LLC? This seems to be some sort of shell corp for Uber's insurance.
42
jellicle 10 hours ago 2 replies      
This was obvious from the beginning. There's really not the slightest doubt that all government authorities are going to classify Uber drivers as employees, except perhaps a few that might be bribed/pressured into not doing so.

Uber controls every aspect of the business, from the fares charged (and how much profit Uber will take from each) to the route taken to the conditions of the vehicle to preventing subcontracting. It isn't even close or arguable. As the ruling points out, these people aren't independent drivers with their own businesses that just happen to have engaged in a contract with Uber, nor could Uber's business exist without them.

The short version:

http://www.irs.gov/uac/Employee-vs.-Independent-Contractor-%...

43
brentm 10 hours ago 0 replies      
This feels like an inevitability at their scale. Outside of the tech world it was always going to be hard to sell their labor pool as contract when put under the microscope.
44
bickfordb 9 hours ago 1 reply      
If all California Uber drivers become employees, wouldn't Uber be on the hook for reimbursing past Uber drivers for all past vehicle expenses?
45
6d6b73 9 hours ago 0 replies      
If the ruling holds, Uber is finished. It will have to play by the same rules any taxi company has to follow.
46
jkot 11 hours ago 1 reply      
Does it not complicate things for drivers as well? As employees they may have to pay tax on their entire income from Uber, including what covers car expenses, maintenance, etc.
47
big_data 4 hours ago 0 replies      
Better hurry up with that IPO ...
48
hiou 10 hours ago 0 replies      
Curious to see what the public markets will do in response. Nasdaq is taking a bit of a slide as we speak.
49
dylanjermiah 11 hours ago 0 replies      
Terrible for drivers, and riders and Uber.
50
aikah 10 hours ago 1 reply      
Oops ... here goes Uber's competitive advantage ...
51
louithethrid 10 hours ago 1 reply      
Uber always was a desert flower. The field it disrupted was scheduled to vanish with automated cars anyway.
52
phpdeveloperfan 11 hours ago 0 replies      
I'm surprised this happened, though I feel like it'll be appealed heavily.
53
zekevermillion 11 hours ago 1 reply      
surely Uber is prepared for this eventuality and has a strategy ready to go
54
rebootthesystem 9 hours ago 3 replies      
A point is being lost here in a major way: We need to, as much as possible, get government out of our homes and businesses, or they will continue to bury us under so much muck that we'll asphyxiate.

Seriously, what the hell does government have to do with the relationship between me and my source of income? I should be able to do whatever I want, whenever I want to, and at whatever rate I choose to work for, so long as it isn't illegal in some fundamental way (fraud, theft, murder, burglary, etc.). Beyond that they should stick to painting white and yellow lines on the roads and changing light bulbs on road signs, thank you very much.

It is just incredible to see how our own government looks for every possible angle they can find to destroy progress. I am not defending Uber and their practices. I've never used the service (tried, but it's not available where I am). I am simply using them as an example of a fantastically innovative company trying to find a better way to do something; instead of our government helping facilitate the exploration of solutions that could advance society and make life better, simpler, healthier, whatever, they become our own worst enemies.

Who the hell do they think they are? They work for US. We don't work for them. We are not their slaves.

Folks, wake the fuck up. Next election you need to send a solid message to everyone in government that they better truly start working for us or they are gone. The way you do that is to support moderate Libertarian candidates. Moderate is the key here, the extremists on any party are friggin insane.

WE NEED TO REDUCE GOVERNMENT TO A MINIMUM OR THEY ARE GOING TO KILL US OFF.

Look at what is happening here in California. We are going to BURN a hundred billion dollars (likely more) building a joke of a high speed train to nowhere and NOBODY is stopping it. Why? Because you are watching government greasing unions to gain votes and favors. The whole thing is sick beyond recognition.

55
astaroth360 7 hours ago 0 replies      
Please, please let Uber get pwned in the face for their general combative business practices :D

Seriously though, they give the "ride sharing" economy a bad name.

56
ThomPete 11 hours ago 2 replies      
Considering that Uber is eventually going to replace them all with self-driving cars, I think it's only fair that the people who helped make Uber so valuable get part of the spoils of being in a company that grows this fast. I am assuming this also means healthcare.
57
dylanjermiah 11 hours ago 3 replies      
"Uber is said to have more than a million drivers using the platform across the globe."

If this ruling sticks, many of those drivers will no longer have a position.

58
Karunamon 11 hours ago 5 replies      
So let me get this straight:

* People sign up with Uber

* They drive literally whenever they want

* Uber has no standards for their drivers other than "get good ratings" and "pass a background check"

..and they're considered employees? WTF?

LastPass Security Notice lastpass.com
542 points by jwcrux  2 days ago   305 comments top 47
1
jjarmoc 2 days ago 16 replies      
While LastPass seems to be responding well, I find their entire service exceeds my tolerance for risk.

If you don't use a password manager, you've got 99 problems, but a centralized store of your credentials for everything that's a huge target by virtue of having thousands of similarly centralized users ain't one.

Using a password manager (good idea) and then storing all your passwords on a 3rd party service of which you have no control seems inherently risky. Lastpass is a huge target, and while I believe they generally take reasonable security measures, for many the risk of compromise may be greater than an encrypted stand-alone password database. Use a password manager, please, but keep it offline and don't aggregate it with loads of other people's databases.

This is one area where I feel strongly that the conveniences of 'Cloud' are outweighed by the risks.

2
AdmiralAsshat 2 days ago 3 replies      
See quite a few nods to 1Password in here, which is good, although I tend to favor KeePass myself, given that it's FOSS.

It also has a way better Firefox add-on than any of the others I've seen (which is my main browser), and the Android apps, if unofficial, aren't bad either [0]. Importantly, they feature the ability to either pull from a local Keepass DB or to get it from a connected Google Drive account. I've taken to using the latter to make sure my database is synced across all my devices.

At this point it works fairly well across everything I use, with the one exception that trying to keep the database synced on my Windows box requires an extension that looked a tad shady to me [1], so I opted to simply manually upload a new version each time instead.

[0]: https://play.google.com/store/apps/details?id=keepass2androi...

[1]: http://keepass.info/plugins.html#kpgsync

3
LawnGnome 2 days ago 3 replies      
I don't use LastPass, but one thing that impresses me about their blog post: they didn't hide behind "your passwords are hashed" or something equally weaselly, but instead said exactly and clearly how passwords are hashed. Every online company should take note.
4
sroerick 2 days ago 2 replies      
No mention of pass?

http://www.passwordstore.org/

gpg password storage. Synchronization with rsync.

Beats the heck out of proprietary cloud hosted software.

5
robto 2 days ago 0 replies      
I've been using LastPass for a while now, but I was recently evaluating the landscape for something more open. I came across Mitro[0], and it looks like it fits the bill. Unfortunately it doesn't look like it has been much maintained since its open-sourcing last year.

Mitro checked a lot of boxes on my checklist, so it's a bit disappointing that it has a smaller community.

[0]: https://www.mitro.co/security-faq.html

6
tptacek 2 days ago 1 reply      
Do they know how they were compromised?
7
hawkes 1 day ago 1 reply      
I've learnt a lot reading this thread. Thank you all.

But I can't believe almost everyone here, talking about security, is talking about Dropbox even as a hypothetical cloud option for storing password related info.

- Dropbox (and most of the other cloud storage services) do not encrypt your data, or if they do now as they claim, with SHA256, I'd say they must be able to decrypt it whenever they want: they give you the "Did you forget your password?" option to change it, so they have to be able to decrypt your data and re-encrypt it with your new password (or whatever they use to encrypt). And they hired Condoleezza Rice for their board of executives (she puts "national security" over any privacy), so you can count on any worker at Dropbox being able to peep at everything you upload whenever they want to.

Of course you'll think: "I'm not a terrorist, I don't care." Well, if a worker you don't even know can take a look... the threat is quite clear to me.

MEGA, for example, does encrypt everything you upload, taking as a seed some derivation of your password, but they DO NOT store your password, so they can't ever decrypt it themselves. Probably no one could even know the names of the files you have uploaded unless they already had your password (of course, if you lose it, you lose all of the uploaded files! Beware!).

I'd rather trust MEGA than Condoleezza's (big-brother government) Dropbox, seriously.

There must be other cloud storage services that encrypt data without storing enough info to decrypt it without your input. I just stumbled upon MEGA and liked the sync app.

8
alexnewman 2 days ago 1 reply      
I'm now too paranoid for lastpass ever again.

Sandstorm made setting up a private gitlab about a 5 second thing. I'll just checkin gpg encrypted textfiles once more.

There's a bunch of shell scripts called pass (http://git.zx2c4.com/password-store/) which know about gpg, git, and this format of text files. There are browser and Android plugins as well. Amusingly, it has basic import/export from every other password manager. I exported from LastPass, and now all I have to do is switch to a new gpg key and buy all new hardware.

9
Someone1234 2 days ago 2 replies      
I just deleted, regenerated, and re-associated Google Authenticator and then altered the number of iterations from 10,000 to 10,001 (causing it to re-encrypt the database). None of this is really required but it has invalidated much of the information they could have stolen.

The thing that really bugs me about this, is the email address. I have a very low spam level on that account (sub-1 per day on average) and I want to keep it that way. Last thing I need is someone to dump this theft onto a Pirate Bay-like site and then to get spammed by everyone and the kitchen sink.

10
bcg1 2 days ago 0 replies      
11
redwards510 2 days ago 3 replies      
If you are using LastPass without 2FA (YubiKey, etc), people attacking LastPass itself is really the least of your problems. I'd be much more concerned about keyloggers grabbing your password. BeEF can pop up a LastPass phishing prompt if you just happen to load the wrong javascript file.

Using just one string of characters to protect ALL of your passwords is insane.

12
cheetos 2 days ago 6 replies      
Slightly off-topic: am I naive to believe that my personal system of password management is just about as good as something like 1Password or LastPass? Hear me out. My passwords are generated as follows:

[Low|Med|Hi] + [Key] + [Initials] + [Number]

Low|Med|High = One of three keys based on how sensitive the site is. High: banking / work / email, Low: I don't trust the site, Med: other.

Key = Random string that only I know, with the most important accounts having a unique string

Initials = Initials of site name based on domain name + TLD, with the initials moved up x letters (for example, capitalone.com -> COC -> DPD)

Number = One of three random sets of numbers I use. Sometimes I forget which number I use for each site, but I can figure it out after a few incorrect attempts.

This means a unique password for every site generated by a system that only I know with no central storage except my brain.

What is wrong with this? What would be the advantage to using 1Password / LastPass over this?
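A minimal sketch of the scheme above in Python (the word split, shift amount, and all example inputs here are illustrative assumptions, not the commenter's actual secrets):

```python
def caesar_initials(words, shift=1):
    """Take the first letter of each word and move it up the alphabet.

    e.g. ["capital", "one", "com"] -> initials "COC" -> "DPD" with shift=1.
    """
    return "".join(
        chr((ord(w[0].upper()) - ord("A") + shift) % 26 + ord("A"))
        for w in words
    )


def site_password(tier, key, site_words, number, shift=1):
    """Compose [Low|Med|Hi] + [Key] + [Initials] + [Number].

    tier:       one of three fixed strings chosen by site sensitivity
    key:        a private random string only the user knows
    site_words: the site name split into words, plus the TLD
    number:     one of a few memorized number strings
    """
    return tier + key + caesar_initials(site_words, shift) + number
```

One answer to the question, in line with other comments here: because the password is a deterministic function of public information plus a few reused secrets, a single leaked plaintext password can expose the whole pattern, whereas passwords from a manager are independent random strings.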

13
itaysk 2 days ago 1 reply      
Password reset page is down:

"Oops! Our servers are a bit overloaded right now.

Please try your password change again shortly, we will catch up soon."

14
Tomte 2 days ago 2 replies      
Oh great, just the day before yesterday I finally jumped to LastPass (because obviously WinKee is not compatible with my new Lumia phone), using my best password (long, no real syllables, memorized).

It sounds like the password is still safe enough, but it's a very unfortunate, inconvenient timing indeed.

15
spacko 1 day ago 0 replies      
Schneier's Password Safe is the real deal:

http://passwordsafe.sourceforge.net/

I use it on:

- Ubuntu

- Windows

- Android

Synchronisation of the password db files is accomplished by storing a master file on Google Drive (Multi-Fac Auth here). I only change passwords on Ubuntu - upload to Drive and download to Android and Company.

16
kenjackson 2 days ago 1 reply      
Why not just use KeePass? It seems to work great. A bit less convenient, but overall a nice option.
17
MarkMc2412 2 days ago 1 reply      
Hi, creator of StrongBox Password Safe (https://itunes.apple.com/us/app/strongbox-password-safe/id89...) here. I think LastPass have done a pretty good job of being upfront and honest about their techniques, and they have a handy little product. Comments above mention the centralised nature of storage, and indeed it is an issue, as it becomes a real bullseye for hackers. Ultimately it's a tradeoff between convenience and security. For what it's worth, my app uses the standard Password Safe format (http://passwordsafe.sourceforge.net/), designed by Bruce Schneier. It can store your encrypted password databases locally on device or on Dropbox or Google Drive. These can be easily exported or imported. An added bonus is you can store other tidbits of information in there, notes of any kind, not just passwords. Might be useful for those of you with more stringent security in mind, or more general encryption requirements. It's also free.
18
Asparagirl 2 days ago 0 replies      
Title should be edited to be more specific:

"[W]e have found no evidence that encrypted user vault data was taken, nor that LastPass user accounts were accessed. The investigation has shown, however, that LastPass account email addresses, password reminders, server per user salts, and authentication hashes were compromised."

So, a breach of LastPass itself but not a breach of its users' non-LastPass per-website passwords/data.

19
eyeareque 2 days ago 1 reply      
Now I don't feel so out of touch for not using last pass. It always seemed like a bad idea to put all of your trust in a single point.
20
sarciszewski 2 days ago 0 replies      
The LastPass blog won't let me post any comment that mentions KeePassX, so I'm mentioning it here.

Other security folks might recommend other password managers that they prefer (e.g. 'tptacek likes 1Password). Generally, you should listen to them over me.

KeePassX is open source and NOT cloud based, so if those are two points on your mental checklist, it's worth checking out.

21
pgrote 2 days ago 0 replies      
I found out from an article on Lifehacker. Still have yet to get an announcement in email, extension or app from LastPass themselves.

While the blog post was nice, it would have been better to directly let subscribers know.

I am a premium subscriber with 2fa enabled.

Just received the announcement at 6:54pm CT:

Dear LastPass User,

We wanted to alert you that, recently, our team discovered and immediately blocked suspicious activity on our network. No encrypted user vault data was taken, however other data, including email addresses and password reminders, was compromised.

We are confident that the encryption algorithms we use will sufficiently protect our users. To further ensure your security, we are requiring verification by email when logging in from a new device or IP address, and will be prompting users to update their master passwords.

We apologize for the inconvenience, but ultimately we believe this will better protect LastPass users. Thank you for your understanding, and for using LastPass.

Regards,

The LastPass Team

22
Zaheer 2 days ago 6 replies      
Thoughts on LastPass vs 1Password?
23
kriro 1 day ago 0 replies      
On a related note... I'm using KeePass + YubiKey but am a bit worried that the project is still hosted on SourceForge. The dev team seems to think it's no problem; at least that's the impression I get from reading the forum.
24
SpendBig 1 day ago 0 replies      
"LastPass strengthens the authentication hash with a random salt and 100,000 rounds of server-side PBKDF2-SHA256, in addition to the rounds performed client-side. This additional strengthening makes it difficult to attack the stolen hashes with any significant speed."

I wouldn't mention that if your data had just been compromised. Although it makes it hard to exploit that data, it is more info about how the data is encrypted.
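The layering described in the quote can be sketched with Python's standard library (the round counts, salt sizes, and the choice of email as the client-side salt are illustrative assumptions, not LastPass's exact parameters):

```python
import hashlib
import os


def client_auth_hash(master_password: str, email: str,
                     rounds: int = 5000) -> bytes:
    # Client-side PBKDF2-SHA256: the server never sees the master password,
    # only this derived authentication hash.
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(),
                               email.encode(), rounds)


def server_strengthen(auth_hash: bytes, per_user_salt: bytes,
                      rounds: int = 100_000) -> bytes:
    # Server-side strengthening: another PBKDF2-SHA256 pass with a random
    # per-user salt, so stolen hashes are slow to brute-force offline.
    return hashlib.pbkdf2_hmac("sha256", auth_hash, per_user_salt, rounds)


# What a server in this model would store per user: the salt and the result.
salt = os.urandom(16)
stored = server_strengthen(client_auth_hash("correct horse", "a@b.example"), salt)
```

The point of the design is that an attacker who steals `salt` and `stored` still has to pay the full PBKDF2 cost per password guess.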

25
maxtaco 2 days ago 0 replies      
Plug for https://oneshallpass.com. Open source. Your site-specific password is an HMAC; the key is your password and the payload is the site you're logging in to. Works perfectly offline. You can optionally store an encrypted list of the sites you use (and parameters like number of symbols) to the server.
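The idea, roughly, in Python (the digest choice, encoding, and truncation length here are assumptions for illustration; oneshallpass's actual parameters may differ):

```python
import base64
import hashlib
import hmac


def derive_site_password(master_password: str, site: str,
                         length: int = 16) -> str:
    # HMAC with the master password as key and the site as message;
    # deterministic, so nothing needs to be stored or synced to use it.
    digest = hmac.new(master_password.encode(), site.encode(),
                      hashlib.sha256).digest()
    # Encode to printable characters and truncate to the desired length.
    return base64.b64encode(digest).decode()[:length]
```

Because the derivation is pure, the same master password and site name always reproduce the same password on any device, fully offline.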
26
systematical 2 days ago 0 replies      
I switched to LastPass a year ago for all non-critical accounts, basically everything that's not email or my personal finances. It's still a bit of a risk, but this way I only need to remember about 5 passwords. I guess I'll slowly be updating all the passwords on my LastPass sites and coming up with a new master password today.

In short, more major sites need to implement a Google Authenticator-style service.

27
moepstar 2 days ago 1 reply      
I commend them for their honesty, so thanks for the heads up :)

One thing i noticed: They used quite a few german words ("dennoch", "jedoch", "dann") which i haven't seen used elsewhere up to now.

Is that common? I know that quite a few words are used commonly in English like "kindergarten" for instance, but this is the first time i've seen those in an english blog...

28
wstrange 1 day ago 0 replies      
Time to kill the password.

Federated login using OpenID Connect seems like a far better solution. I can't fathom why so many web sites want the awful responsibility of storing your password. Why not leave that to Google, Facebook or Microsoft? Or your bank, for that matter...

And yes - you should secure your IDP login with multi-factor authentication.

29
foobar81 2 days ago 0 replies      
30
guylepage3 2 days ago 0 replies      
Wow! More and more centralized services are being hacked. Time for something more decentralized.
31
JoshTriplett 2 days ago 0 replies      
Things like this are why I prefer Firefox Sync. Works across all my devices (home laptop, work laptop, Android phone), and uses client-side encryption, so a compromise of the Sync server provides the attacker with nothing of value.
32
Kelly2 2 days ago 0 replies      
I don't understand the use case for LastPass/Dropbox/FTP storage of passwords. 1Password (and probably others) allows syncing through wifi; isn't that enough? Why would you need to do it over the cloud?
33
rtz12 2 days ago 0 replies      
I have a German system from a German IP and some of the words in the article are German. Weird. Do they have some kind of auto translation that kicks in even though they didn't translate the whole article?
34
AndrewDMcG 1 day ago 0 replies      
This is what I recommend to non-technical users: http://www.amazon.co.uk/Silvine-Executive-Pocket-Notebook-14...

I use a hand-rolled gpg + git + owncloud for myself, but that's not convenient if you don't routinely have terminal windows open.

35
crusso 2 days ago 1 reply      
Does LastPass keep the encrypted copy of the password file for non-premium accounts? For accounts that don't sync and just use it from a single browser?
36
dbs 2 days ago 0 replies      
Strange fact: I changed my master password a few minutes ago, but there's a message saying it was changed 23 hours ago.
37
magoon 1 day ago 0 replies      
iCloud Keychain doesn't store your passwords on Apple-controlled servers when you do not configure an iCloud Keychain storage PIN; you can use it in a mode that simply syncs keychains across devices, all of which allow password-based encryption.
38
Gonzih 2 days ago 0 replies      
And now they are under heavy load because of people changing their master passwords. Can't change mine :)
39
HaoZeke 1 day ago 0 replies      
Wouldn't the solution be something akin to enpass?
40
tomjen3 2 days ago 4 replies      
Any good tricks on how to generate a new master password that is a) secure enough and b) I can memorize?
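One common answer is a diceware-style passphrase: a few random common words are easier to memorize than a random character string of comparable entropy. A sketch in Python (the tiny word list is a placeholder; a real list such as the EFF's has 7,776 words, about 12.9 bits each):

```python
import math
import secrets


def passphrase(wordlist, n_words=6, sep="-"):
    # secrets.choice uses a CSPRNG; each word adds log2(len(wordlist)) bits,
    # assuming the words are picked by the program, not by a human.
    words = [secrets.choice(wordlist) for _ in range(n_words)]
    bits = n_words * math.log2(len(wordlist))
    return sep.join(words), bits
```

Six words from a 7,776-word list is roughly 77 bits, which comfortably satisfies (a), and a nonsense phrase of real words is far easier to satisfy (b) with than 13 random characters.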
41
h43k3r 2 days ago 1 reply      
Another incident that reminds me why two-factor authentication is absolutely necessary for important information.
42
Animats 2 days ago 0 replies      
"LastPass simplifies your online life by remembering your passwords for you."

You had one job. And you blew it.

43
bernadus_edwin 2 days ago 2 replies      
The year is 2015 and they still don't have a mobile site for changing your password. Amazing.
44
kolev 2 days ago 0 replies      
How many times does LastPass need to screw up before you guys flee it? Pick your security vendors carefully!
45
oneJob 2 days ago 0 replies      
you had one job. one.
46
fredsted 2 days ago 1 reply      
I've always had the feeling that LastPass was held together by sticks and duct tape, especially the frontend.
47
joshstrange 2 days ago 1 reply      
Perhaps this isn't the thread to discuss this but I feel like the state of access in 2015 is dismal at best...

Every option out there either sucks ass on mobile or only integrates with a TINY percentage of apps, and on desktop they aren't much better. How does Chrome (on iOS and OS X) blow every other PW manager out of the water? It "Just Works (tm)" while every other PW manager makes me jump through a shit ton of hoops... I want to be safe but I can't be the only one who feels "chore" doesn't even begin to describe what maintaining and using a PW manager is like. My "Master" PW is secure but I'm not typing that thing every 5 minutes. 1Pass got better with Touch ID but it still makes me want to smash my phone every time I have to use it (Also, 1Browser, yeah how about FUCK NO).

DuckDuckGo on CNBC: Weve grown 600% since NSA surveillance news broke technical.ly
590 points by wnm  17 hours ago   220 comments top 35
1
vixsomnis 14 hours ago 8 replies      
Yes, the search results aren't that good, but they're good enough. A single search almost always gets what I'm looking for on the front page or the entries immediately visible, which is impressive considering how little DDG knows about me.

Add in the !bang feature for searching most websites (classics like !w - Wikipedia, !g - Google, and stuff like !gh - GitHub, !aur - Arch User Repository) and my favorite "define X" keyword that links straight to Wordnik, and my search experience is better than Google.

The !bangs also function as bookmarks, so if I ever want to go to GitHub, I can just search !gh and it'll take me there. It's like having a set of search engines stored universally, accessible from any device with web access.

And of course if I need Google, say for word etymologies, it's just a !g away.

2
click170 11 hours ago 1 reply      
I switched my default search to DDG and haven't thought twice about it.

It's maybe once every couple of days that I have to use "!g" to get Google results; for everything else DDG works excellently. Even the times when I have to use "!g", it's often a hint that I'm searching for an unpopular phrase, and I find that if I rephrase my search I get much better results out of both search engines.

I remember a story on HN a few months back where a kind soul from DDG posted an email address one could submit notes to, highlighting poor search results so that they could address them. I don't recall the email and haven't been able to find it. If this is still available with DDG, could someone please re-post that email here? I would very much like to help improve the quality of DDG to make it better for everyone, but I can't find anywhere to suggest improvements on their website.

Edit: I was able to find their Feedback page, but I much prefer email personally: https://duckduckgo.com/feedback

3
simias 13 hours ago 4 replies      
I've been using ddg as my main search engine for close to a year I think. It's definitely not as good as google but it's good enough most of the time.

My main concern is that it's still a free service and I really don't see how it'll be sustainable in the long term without compromising privacy in their current model. If you're not the customer you're the product etc...

I'd gladly pay $10 a month for a "premium" search engine with strong privacy guarantees. I'm definitely not going to enable ads in ddg and I can't imagine that the average duckduckgo user thinks differently.

4
joelrunyon 1 hour ago 1 reply      
This might be a random point, but I'm curious if DDG will have to change their branding to be accepted beyond the tech space into mainstream searching.

I feel like "duck duck go" is too long for the avg american to grasp or use in an ongoing convo when compared to "google", "bing" or "yahoo."

I can't see people saying "just duck duck go it." Maybe something like DDG or "duckduck" instead?

Maybe that's just me...

5
VMG 15 hours ago 6 replies      
As a governmental spy organization, why wouldn't you just put surveillance on the search engine that is used by people that have "something to hide" (in their mind) and also put a gag order on the operators of that service?
6
finnjohnsen2 12 hours ago 2 replies      
I should use DuckDuckGo, but when I need to search I'm too deep in some mindset and context to let it get broken by the poor search results DDG gives me. So my life has come to the point where I'm aware that I give all my search data to someone I know spies on me, 24/7/365.
7
mrweasel 14 hours ago 0 replies      
>"If you're not collecting user information, how are you going to make money? How are you going to become a big brand that people can trust long term?"

I know that the interviewer has to ask at least the first part to get the interview going, but it also highlights everything that's wrong in the thinking around online ads/marketing.

If you need to collect user information to make money, then perhaps your product isn't that great to begin with (unless you're an ad company like Google, but then we get into the argument of who the user is).

As for collecting information so you can grow to become a big brand: I would argue that you've thrown trust out the window a long time ago.

8
aidos 15 hours ago 1 reply      
As always, you can see the actual ddg traffic numbers on their website

https://duckduckgo.com/traffic.html

9
antris 13 hours ago 4 replies      
DDG is based in the US, therefore it is entirely possible that they have been ordered to keep logs and track everything the users do on that site with a gag order.

Being based in the US is a dealbreaker for privacy.

10
zawaideh 12 hours ago 1 reply      
The only thing stopping me from using DDG is the inability to limit search results by date. I want to be able to restrict results to the last week, month, or year.

I know they have sort by date, but this just sorts by date without taking into account how relevant the result is.

11
brianzelip 6 hours ago 1 reply      
Where is DDG located?

I can't watch the video, about which the text reads "The news anchor just can't resist a little jab about DuckDuckGo's location choice".

12
factorialboy 15 hours ago 2 replies      
IMHO 600% is a meaningless number.

What's the estimated market share of DuckDuckGo today? That's the real question.

Do they dominate a niche? I think they have significant market share among Hacker News users.

13
castell 12 hours ago 1 reply      
Does DuckDuckGo still use Yahoo BOSS search API? (based on Bing)

$1.80 / 1000 queries:

https://developer.yahoo.com/boss/search/#pricing

14
josefresco 12 hours ago 1 reply      
One feature that would allow me to eventually move to DDG would be a "toggle" of sorts within my browser that would allow me to switch to DDG results (from Google).

Deciding before my query to use DDG is a hard habit/practice to employ. However, when shown results from Google, if DDG results were just a click away (maybe already rendered in another "tab") it would make A/B testing easier and seamless - which would be essential to eventually moving away and changing my "default" search engine.

Just my $0.02

15
newscracker 8 hours ago 0 replies      
DDG is my default on some browsers and machines, but it still lacks a lot in relevant results. I find myself using startpage.com or even Google (the latter for better image searches) very often. The lack of a date based search is a huge disadvantage since I use that very often in other engines.

Lately, DDG has also been quite slow for me and doesn't load at all for several seconds. Overall, I love the privacy part, but it's not as useful as a search engine ought to be for my usage. So I'm unable to quit the other alternatives, even though I badly want to.

16
akhatri_aus 15 hours ago 21 replies      
What is HN's opinion on the quality of the search results?
17
nvk 13 hours ago 0 replies      
DDG's search results have substantially improved since a few months ago. I now use it as my primary search engine.
18
FrankenPC 7 hours ago 0 replies      
I love what DDG represents and I try it first just to show my support. But, if I can't find what I'm looking for I activate VPN, open an incognito window and search with Google. Civilian OPSEC is painful. But I refuse to give up my freedom. I wish there was an application/URL level VPN option. That would solve a lot of problems.
19
rurban 14 hours ago 2 replies      
They do log the queries, as Google does. You can only trust a search engine when it stops logging.

Any NSL can order them to hand over the logs in certain regimes (bulk or per IP? We know what happened), but it cannot force them to write logs in the first place. Without logging it will also be ~10% faster.

20
pwenzel 7 hours ago 0 replies      
As a Minnesotan, I'm not going to use this service until it is renamed the more suitable "Duck Duck Grey Duck".
21
blerud 5 hours ago 0 replies      
Is it possible to append !g to all regular ddg searches? I'd like to use Google for my searches but still be able to use the ddg bangs.
22
luckydude 10 hours ago 1 reply      
This is just a me too comment, but one of my guys told me two days ago that duckduckgo is good enough. So I switched to it and so far I'm liking the results.

And is it just me or is it actually faster when you click on one of the results? Whenever I do that with google it seems like there is a delay while google does some analytics or something.

23
chjohasbrouck 6 hours ago 1 reply      
I choose Google because ctrl+t g o <enter> flows better than ctrl+t d u <enter>.

Every time I've tried to switch to DuckDuckGo, this has been the primary stumbling point.

24
dude_abides 5 hours ago 0 replies      
I wish the headline was s/600%/from 1.5M to 7M in 2 years/

No less impressive, and so much more informative!

25
shmerl 11 hours ago 0 replies      
I use DDG, and it works very well most of the time, but for some obscure and very targeted searches Google still beats it by a big margin (especially since Google has time filtering, etc.). So in such cases I just add a !g :)
26
k2enemy 12 hours ago 0 replies      
Does anyone happen to know how to make it so that the preview of a DDG search result is not a link to the result? I often want to copy and paste something from the preview, but having it as a link makes that difficult.
27
SalesHelp 9 hours ago 0 replies      
Way to go Gabriel! A unicorn who is a really nice, genuine guy who wants to help start-ups.
28
nfoz 9 hours ago 0 replies      
People should not have to change their behaviours like this in order to avoid the intrusion of their own government.
29
ljk 10 hours ago 0 replies      
what's hn's opinion on this screenshot from 4chan that suggests to not use DDG? http://a.1339.cf/xaikik.png

been using DDG since almost the beginning so i'm kind of conflicted...

30
whoisthemachine 13 hours ago 1 reply      
Once I learned about the !bangs, I switched immediately. Usually the results are "good enough", and when not, I try a !bang.
31
tiatia 13 hours ago 0 replies      
It has gotten faster. I somehow like DDG but finally settled on www.startpage.com
32
bane 14 hours ago 0 replies      
The privacy aspects are not that important to me, I just want to get out of Google's increasingly irrelevant search results. I tried switching to DDG a couple years ago and it was a pretty meh experience, so I went back to Google.

But, I tried again just a couple months ago (I went whole hog and changed the search engine in chrome to ddg) and have been very impressed with it. It's been continuously worked on enough that it now serves about 90-95% of my daily search needs without any fuss and I actually prefer the way it presents images, semantic search and videos in search results over google's. It does a much better job at returning results for what I'm actually searching for and that's awesome.

For example in Google, if I search for "Mad Max" I get showtimes for "Mad Max: Fury Road" at the top and an imdb-like bit of information for "Mad Max: Fury Road" on the right (neither of which I searched for), and then a list of search results which these days are increasingly just links to Wikipedia's take on whatever I'm searching for (this time for "Mad Max" and "Mad Max (franchise)"), followed by news on "Fury Road", the "Fury Road" video game, IMDB links to "Mad Max (1979)" and "Mad Max: Fury Road (2015)", etc., then trailers on youtube for both movies and links to the movie sites etc.

It's okay, I suppose, but Google first assumes I'm looking for "Mad Max:Fury Road" and fills the results for that, then I get links to WP and IMDB on the same topic (I could have just gone to those), except for WP it's not for Fury Road. And why no love for Thunderdome?

Guess what happens when I search ddg? I get a list of possible meanings, the first of which is "Mad Max" not Fury Road (that's #2), then a list of other possible meanings (which include Fury Road, the videogame, the franchise, etc.). This is awesome: it's not assuming which meaning I want, and thereby getting it wrong like Google, and the list of possible meanings is better ordered. Then the search results are better too; of course the requisite IMDB and WP links are there, but the top 4 results are for "Mad Max" (or the franchise) and not for "Fury Road"... I'm actually getting results for what I searched for, not for what it thinks I searched for. The mix of results after that is also "better" to my eyes: it includes a large fan site, which Google doesn't ever seem to get around to, Amazon, ebay, games, non-WP fan wikis, reviews, and so on.

Google seems intent on shoving the latest thing that the film studio marketing departments are currently pushing, while DDG provides links to information on what I actually searched for.

I've found this to be true for most of my searches. DDG is actually finding what I want instead of what Google wants.

About the only times I find myself using Google any more are these two cases:

* I've exhausted DDG's results and want to see if Google's bigger index has something else.

* Google's more sophisticated time constraints on searches. DDG just lets me order results, but Google lets me slice out results between time ranges, which I often find more useful for research purposes.

Bonus: Privacy, again not my main interest, but it's nice that it's there. !Bang syntax. I don't use many of them, but I find them useful (it's also how I execute google searches, just put a !g before my search in ddg).

Wishes: time-slicing for search, and some way to make it my default in mobile Chrome on my Android devices

33
adwordsjedi 12 hours ago 0 replies      
So are they up to like 600 or 1200 users now?
34
callum85 13 hours ago 1 reply      
The NSA news broke a long time ago and 600% over that period doesn't sound like outlandish growth for a startup. Or maybe it is, I don't know. It would be good to see this figure over time on a chart, then we could see if there is any change in growth rate.
35
jister 15 hours ago 6 replies      
The rest of the world doesn't care about NSA surveillance, so 600% doesn't really mean anything. Hell, a lot of people don't even know what Bing is!
How to receive a million packets per second cloudflare.com
531 points by _jomo  1 day ago   84 comments top 14
1
adekok 1 day ago 2 replies      
Nice, except recvmmsg() is broken.

http://man7.org/linux/man-pages/man2/recvmmsg.2.html

 The timeout argument does not work as intended. The timeout is checked only after the receipt of each datagram, so that if up to vlen-1 datagrams are received before the timeout expires, but then no further datagrams are received, the call will block forever.
Which makes it useless for any application that wants to service data in a short time frame. The only way around it is to use a "self clocking" method. If you want to receive packets at least every 10ms, set a 10ms timeout... and then be sure to send yourself a packet every 10ms.

I've done similar tests with UDP applications. It's possible to get 500K pps on a multi-core system with a test application that isn't too complex and doesn't use too many tricks. The problem is that the system spends 80% to 90% of its time in the kernel doing IO, so you have no time left to run your application.

Another alternative is pcap and PF_RING, as seen here: https://github.com/robertdavidgraham/robdns

That might be useful. Previous discussion on robdns: https://news.ycombinator.com/item?id=8802425

2
danpalmer 16 hours ago 2 replies      
> Last week during a casual conversation I overheard a colleague saying: "The Linux network stack is slow! You can't expect it to do more than 50 thousand packets per second per core!"

> They both have two six core 2GHz Xeon processors. With hyperthreading (HT) enabled that counts to 24 processors on each box.

24 * 50,000 = 1,200,000

> we had shown that it is technically possible to receive 1Mpps on a Linux machine

So the original proposition was correct.

3
edude03 1 day ago 6 replies      
Hmm, I might be missing something here, but don't most high performance network applications skip the kernel for this exact reason? (IE http://highscalability.com/blog/2014/2/13/snabb-switch-skip-...)

Makes me wonder how often bypassing the kernel is used in production networked applications.

4
shin_lao 1 day ago 1 reply      
It's an interesting post.

If you really want to squeeze out all the performance of your network card, what you should use is something like DPDK.

http://dpdk.org/

5
jedberg 1 day ago 14 replies      
A joke answer and a serious question:

A: "Use BSD"

Q: Why is there such a strong focus on trying to get Linux network performance when (I think) everyone agrees BSD is better at networking? What does Linux offer beyond the network that BSD doesn't when it comes to applications that demand the fastest networks?

ps. I think the markdown filter is broken, I can't make a literal asterisk with a backslash. Anyone know how HN lets you make an inline asterisk?

6
chx 1 day ago 3 replies      
Perhaps because I am not really a low-level programmer, it strikes me as odd that "receive packet" is a call. I would expect to pass a function pointer to the driver and be called with the packet address every time one has arrived.
7
brobinson 1 day ago 0 replies      
Why even have netfilter ("iptables") loaded in the kernel at all? Won't those two rules still have to be evaluated for each packet even if the rules are saying not to do anything?

There are additional things at play here, too, including what the NIC driver's strategy for interrupt generation is and how interrupts are balanced across the available cores, whether there are cores dedicated to interrupt handling and otherwise isolated from the I/O scheduler, various sysctl settings, etc.

There are further gains here if you want to get really into it.

8
nikropht 14 hours ago 0 replies      
Actually the Linux kernel is rather fast if used right. The Mikrotik CCR1036 series routers have a 36-core Tile CPU, with each core running at 1.2GHz, and can cram out 15 million pps. https://www.youtube.com/watch?v=UNwxAjJ4V4A

RouterOS is based on the Linux kernel see https://en.wikipedia.org/wiki/MikroTik

9
netman 1 day ago 0 replies      
The Automattic guys did some testing a few years ago with better results on SolarFlare. I wonder where their testing ultimately ended up. https://wpneteng.wordpress.com/2013/12/21/10g-nic-testing/
10
zurn 1 day ago 1 reply      
Where does the funny 50 kpps per core idea in the lead-in come from? This would mean falling far short of 1 gigE line rate with 1500 byte packets! It is trivially disproven by the everyday experience of anyone who's run scp over his home LAN or a crossover cable.
11
bitL 1 day ago 0 replies      
Excellent article! Thanks for sharing! I am glad to learn something new today! ;-)
12
samstave 1 day ago 0 replies      
I was curious to see how many pps our servers are handling...

We have an app server that currently handles 40K concurrent users per node. I get only ~63K pps:

  TX eth0: 42780 pkts/s   RX eth0: 64676 pkts/s
  TX eth0: 41570 pkts/s   RX eth0: 63401 pkts/s
  TX eth0: 41867 pkts/s   RX eth0: 63697 pkts/s
  TX eth0: 41585 pkts/s   RX eth0: 63187 pkts/s
  TX eth0: 40408 pkts/s   RX eth0: 61912 pkts/s
  TX eth0: 41445 pkts/s   RX eth0: 63299 pkts/s
  TX eth0: 41119 pkts/s   RX eth0: 63186 pkts/s
  TX eth0: 41502 pkts/s   RX eth0: 63153 pkts/s
  TX eth0: 40465 pkts/s   RX eth0: 62118 pkts/s
  TX eth0: 42105 pkts/s   RX eth0: 63986 pkts/s

But this is utilizing 7 of 8 cores on each node... with CPU util very low.

13
known 19 hours ago 0 replies      
man ethtool
14
floridaguy01 1 day ago 0 replies      
You know what is cooler than 1 million packets per second? 1 billion packets per second.
Chromium unconditionally downloads binary blob debian.org
520 points by fractalcat  1 day ago   169 comments top 16
1
jimrandomh 1 day ago 2 replies      
The binary blob in question is hotword-x86-64.nexe with sha256sum 8530e7b11122c4bd7568856ac6e93f886bd34839bd91e79e28e8370ee8421d5a.

This is labelled as being a "hotword" implementation, i.e., something that will monitor the microphone until someone says "OK google", then start listening and transmitting the following words for a search. However, there is no guarantee that it does what it says it does; in particular, it might instead accept instructions to transmit audio from particular parties that Google wants to spy on.

I understand there are likely to be many uninvolved engineers within Google who have access to the source code. It would do a lot to restore trust if a few such engineers could take a look through the source code and find out whether it has a remote trigger, and whether the source code in Google's repo matches the file that's being distributed.

This is not the first time Google has taken an open-source project and added closed-source components to it. They did the same thing to Android, twice: once with the "Play Service Framework", which is a collection of APIs added to Android but theoretically independent of it, and again with Google Glass, which ran an entirely closed-source fork. In the case of Glass, I did some reverse-engineering and found that it would take all photos taken with Glass and all text messages stored on a paired phone and transmit them to Google, with no feasible way to stop it even with root. This was not documented and I don't think this behavior was well understood even within Google.

2
spdustin 1 day ago 1 reply      
So, if the article was titled "Chromium downloads and activates closed-source eavesdropping software on all its devices, bypassing any OS alerts", would that be too wordy? It's meant to be a little tongue-in-cheek, admittedly, but it seems to me that's exactly what they did.

Isn't Chromium behind the enterprise chromebox/chromebook stuff too? And does this mean that Chrome itself may install, or has already installed, eavesdropping software and activated it without my knowledge?

Edit: I see from a sibling comment that OS X has this eavesdropping software installed, so that leads me to believe that everyone running chromium devices will have this activated, and that it's going to be part of Chrome soon, if it isn't already.

I know it's hyperbole to call it "eavesdropping software", but I also know how many people here were unsettled by "OK Google" and "Alexa!" (Amazon Echo), and I really do want to understand how folks here feel about the intrusion.

3
belorn 1 day ago 2 replies      
A bit surprised that there is no security CVE report attached. Debian policy is that binaries are vetted by a Debian developer, sorted into Main, Contrib and Non-free, cryptographically signed, and later verified by the client package system. The bug could allow arbitrary code to be installed and run without any of the above process if someone MitMs the connection over which the client downloads the binary file.
4
Animats 1 day ago 3 replies      
Note that although this bug report was forcibly closed, the fix is "This change adds an "enable_hotwording" build flag that is enabled by default, but can be disabled at compile time."

Consider what this backdoor does. It listens to any conversation in the vicinity of the phone and reports it to a remote site. You can't see its keyword list. You can't tell when it's transmitting to the mothership.

Has anyone filed a US-CERT report with Homeland Security on this?

5
AndrewDMcG 1 day ago 3 replies      
From the comments on the debian bug, this appears to have been fixed in Chromium. https://code.google.com/p/chromium/issues/detail?id=491435
6
jameshart 1 day ago 4 replies      
In a web browser implementation with NaCl support, downloading and executing arbitrary binary blobs is very much a feature, not a bug. The issue here seems to be that Chromium was configured, by default, to download and execute a particular Google-provided binary blob. And now it isn't.

Note that as soon as you go to ANY WEBSITE using Chromium, you are trusting that site to send you arbitrary data, which could include NaCl binaries, which you're then going to trust Chromium to execute.

7
josteink 1 day ago 0 replies      
So where I've called Google Chrome "spyware" in the past, I can now add Chromium to that list.

Google's not even trying to not be evil these days.

8
golergka 1 day ago 0 replies      
https://code.google.com/p/chromium/issues/detail?id=491435

This fix is an opt-out with a compilation flag. Also, I don't know much about Chromium development process, so it might be irrelevant, but I only see source updates, without any updates in the documentation.

9
kekebo 1 day ago 1 reply      
I opened a ticket in Chromium's Google Code repo, feel free to jump in: https://code.google.com/p/chromium/issues/detail?id=500922
10
zoner 1 day ago 2 replies      
Switched to Firefox as the primary browser just to be sure :)
11
lasermike026 1 day ago 1 reply      
The only microphone I trust is the one that is not there.

How sad.

12
fla 1 day ago 1 reply      
Any idea what the executable is doing?
13
shit_parade2 23 hours ago 0 replies      
Since these things can be opaque, Arch Linux updated to disable "hotword":

https://projects.archlinux.org/svntogit/packages.git/commit/...

thanks to the maintainer and the FOSS community in general.

14
longsleep 1 day ago 3 replies      
Another reason to switch to Iridium Browser. It has Google search disabled by default, and even if you switch search to Google, voice search and hot-words stay off until you manually enable them.

https://iridiumbrowser.de/

15
samwillis 1 day ago 2 replies      
The binary blob is targeted at Native Client and so only runs in the Google Chrome sandbox. There is no security issue here.
16
rockdoe 1 day ago 5 replies      
I advise not reading that bug, some of the later comments will give you brain cancer.

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=786909#51

Downvotes? So you agree with this?

"I seriously consider the good faith of an such upstream which does these kinds of things"

"But basically secretly downloading it leads to the question of possible malicious intent (and everyone knows that Google&Co. do voluntarily and/or forcibly cooperate with NSA and friends)."

"while I haven't looked at the code, I wouldn't even be surprised if the downloading itself is done insecurely."

"Worse, chromium isn't the only such rootkit-downloader,... e.g. FF which secretly downloaded the OpenH264 blob."

Really, if you condone this attitude then I can only say... well, I won't say it, but it isn't nice. Not only that, everyone seemingly ignores the "Note that the binary blob is executed through native client, which is not enabled by default" part.

You people are so beyond reasonableness I find myself defending Chrome/Google. I can't believe this.

U.S. Tech Funding: What's Going On? a16z.com
459 points by randomname2  2 days ago   190 comments top 24
1
thorfish 2 days ago 11 replies      
"And the tech IPO is basically dead. The tech IPO market is at early 1980's volumes. For most of the 90's the majority of tech funding was public. This has reversed. It used to be routine to hit $20 million in revenues and go public. Not anymore."

It's interesting how it seems that inequality is an unintended consequence of Sarbanes-Oxley. Before, an engineer might vest after four or five years, just as the company was going public at a modest valuation. But if the company stays private, the employees are forced to go double or nothing. Either the company continues to grow and there is a Google- or Facebook-like outcome, with hundreds of employees getting rich, or the company goes sideways and the stock ends up diluted to nothing. Furthermore, the general public would have shared in the growth in the 1980's, but now most of the value has accrued by the time the company goes public. So for the few that make it, all the wins go to the founders and VCs, rather than having the general public get in early.

2
timr 2 days ago 3 replies      
Note that they use the valuations of public companies to argue that the market isn't overvalued, then spend 90% of the presentation arguing that all of the value is being "created" in the private markets, and that IPOs are dead.

Moreover, they're basically arguing that it's logical for investors to pile into these late-stage deals, because waiting around for IPO is a losing strategy.

If you believe this data, it doesn't tell you that there isn't a bubble. It says that if there is a bubble here, it's mostly happening off the books, and depends on the huge public exits of a handful of mythical creatures.

Also, slide 38 is an argument for the "No Exit" way of looking at startups: we've got a boom in low-cost, early-stage deals (2x growth since 2009), coupled with an ever-more-ruthless culling of the herd, where most of the aggregate funding goes into fewer (<20) hot deals than ever. Investors are taking a cheap call option on your youth.

3
sharkweek 2 days ago 1 reply      
I thought Dan Primack had a solid response:

http://fortune.com/2015/06/15/andreessen-horowitz-why-were-n...

"Andreessen Horowitz's presentation treats the relative lack of tech IPOs as a sign of market health. As I wrote last week, there is a much less charitable way to view it. Moreover, the lack of IPOs also means that the public markets have yet to validate many of these unicorn valuations."

4
michaelvkpdx 2 days ago 3 replies      
My takeaway: the VCs have leveraged the money from their successes to create a vortex that sucks money from consumers into privately owned companies, back into VC pockets, and back into more companies that get more people to spend more money.

It's a tech vortex that is sucking away quality of life from the middle class and padding the billionaires' (and large companies') bank accounts, throwing out a few bones on occasion (fewer and fewer) to entrepreneurs to keep the vortex going.

Vortex is the opposite of bubble, but it does the same thing to the life of the average person.

5
narrator 2 days ago 2 replies      
Here's my current map of where the money is coming from and going to.

Fed buying trash MBSs with QE -> Investment Banks -> Stock Market -> Big Tech Companies -> Acquisitions -> Venture Capitalists -> Tech Companies -> Startup Employees -> San Francisco Landlords and Fancy Toast Restaurants.

6
ThomPete 2 days ago 1 reply      
As I have mentioned in another thread.

We don't have a tech bubble we have a Silicon Valley valuation bubble, one a16z is part of themselves.

The discussion isn't whether tech companies are under or overvalued, they are most likely in general undervalued.

The discussion is whether the kinds of investments that companies like a16z and other VC firms make are overvalued or even valuable.

In other words, they are setting up a straw man about tech funding in general, but the very issue is that it's not the tech sector in general that has insane valuations tied to it, only a small but important subset.

7
ripberge 2 days ago 2 replies      
When Andreesen talks about "tech funding", what do they mean by this?

A lot of VC funding that used to go to "tech" companies is now going into much less profitable types of businesses that should not be considered "tech". Businesses that most VC's don't really have a lot of experience with.

For example, their investments in Soylent, Walker and Co, Dollar Shave Club. It is REALLY hard to make money in these types of businesses when compared to software. They could be in for a rude awakening...

8
j_baker 2 days ago 2 replies      
Robert Shiller had an interesting analysis of the current stock market's "frothiness": http://www.businessinsider.com/robert-shiller-stock-market-b...

Basically, the stock market is a bit overvalued, and people expect that trend to continue. However, peoples' level of confidence in the stock market pricing is very low. To me, if there's a coming crash, it's going to be because investors are overly anxious rather than because valuations are so stratospheric.

9
codemac 2 days ago 0 replies      
If anyone had trouble seeing the embedded flash of slideshare, here's the direct link to the slideshare:

http://www.slideshare.net/a16z/state-of-49390473

10
jackgavigan 1 day ago 0 replies      
I think a lot of people don't grasp the extent to which the cost of founding a tech company has fallen since the dot-com boom. I remember forking out tens of thousands of pounds for physical hardware (which then had to be hosted somewhere) and software licences for things like Oracle and Checkpoint firewall, which I then had to install, set up, admin and maintain.

These days, with Google, AWS, Rackspace, Heroku, there's none of that. You can spin up a new server in minutes and scale up as required. All the technical infrastructure is already there, so you can focus on the product and market.

11
TTPrograms 2 days ago 2 replies      
This doesn't really directly address the issue of valuations for the unicorns. P/E valuations are probably insane by most metrics - the type of user base growth required to get them in line (P/E wise) with other companies is on the order of double-digit percentages of the global population IIRC.

This doesn't really mean there's a "tech bubble", though. It's possible we'll see a massive correction to those companies, but it will likely be isolated, and thanks to the weird structuring of these private equity deals I can't imagine that the VCs will be much worse off.

12
MichaelCrawford 1 day ago 0 replies      
I've been adamantly opposed to the public trading of tech companies for fifteen years now:

The Valley is a Harsh Mistress

http://www.warplife.com/tips/business/stock/venture/capital/...

Investment yes, Wall Street no.

The reason to seek investment is to grow one's company so that one can grow one's business in ways that would not be possible to fund out of one's current revenue.

One of the very wealthiest people I have ever met founded "The Nation's Largest Sperm Bank" in the early 1970s with $2,500.00 of his own money, along with just one other partner, mostly for liquid nitrogen dewars, medical lab equipment as well as pr0n.

13
pbreit 1 day ago 1 reply      
1999-2000 was insane.

Regulators killed the IPO market such that all the gains are being made by venture investors and the public is totally missing out.

14
kamilszybalski 2 days ago 0 replies      
"It's Carlota Perez's argument that technology is adopted on an S curve: the installation phase, the crash (because the technology isn't ready yet), and then the deployment phase, when technology gets adopted by everyone and the real money gets made" - http://www.newyorker.com/magazine/2015/05/18/tomorrows-advan...
15
sgwealti 2 days ago 0 replies      
The slide about e-commerce only making up about 6% of total retail sales is only relevant if there are a lot of "unicorns" which are in that market. According to Fortune's Unicorn list there are only 3 unicorns that are retail e-commerce based businesses.
16
mkagenius 2 days ago 1 reply      
Who were those companies which had funding > 1bn within a year in the year 1999?
17
kosigz 1 day ago 0 replies      
The takeaway for me is that investing in tech companies during the growth stage has gone from being mostly for rich people to being pretty much exclusively for very very rich people.
18
cdnsteve 2 days ago 1 reply      
Why are IPOs no longer viable? Too much red tape? It seems that investors and their money have a better chance underground (private) where public stocks are either too slow to get a return or the return amount would be much less.

Why does it seem all the money is in the US and not in Canada? More investors? More money? Taxes?

19
BinaryIdiot 1 day ago 0 replies      
Is there a video of someone giving this presentation? It's interesting.
20
foobarqux 2 days ago 0 replies      
Is a16z focused on the seed stage now? That seems to be what they are selling in these slides.
21
iblaine 2 days ago 0 replies      
IPO's are dead and so are options.
22
jgalt212 1 day ago 0 replies      
Maybe the tech IPO is dead because Andreessen Horowitz (et al.) invest in Series B, C, and D at geometrically increasing valuations and have pushed valuation levels to where the public markets are skeptical.
23
graycat 2 days ago 0 replies      
Didn't see where the OP explained why the tech IPO market is dead.
24
foobarqux 2 days ago 1 reply      
Lots of points to dispute:

Slides talk about S&P IT but no one is concerned with IT public market valuations (at least relative to the rest of the public market). The concern is with private tech market.

Slides talk a lot about how the amount of funding is justifiable but the question is whether the valuations are. Lower amounts of funding do suggest there is less at risk, however.

How do you reconcile slide 37, which suggests that fundraising is as difficult as ever, with the widely held view that money is flowing freely today?

You can't own an index of unicorns (slide 32)

etc.

Boffins reveal password-killer 0days for iOS and OS X theregister.co.uk
465 points by moe  15 hours ago   130 comments top 19
1
tptacek 9 hours ago 4 replies      
It's not bad work but it looks like The Register has hyped it much too far. Breakdown:

* OSX (but not iOS) apps can delete (but not read) arbitrary Keychain entries and create new ones for arbitrary applications. The creator controls the ACL. A malicious app could delete another app's Keychain entry, recreate it with itself added to the ACL, and wait for the victim app to repopulate it.

* A malicious OSX (but not iOS) application can contain helpers registered to the bundle IDs of other applications. The app installer will add those helpers to the ACLs of those other applications (but not to the ACLs of any Apple application).

* A malicious OSX (but not iOS) application can subvert Safari extensions by installing itself and camping out on a Websockets port relied on by the extension.

* A malicious iOS application can register itself as the URL handler for a URL scheme used by another application and intercept its messages.

The headline news would have to be about iOS, because even though OSX does have a sandbox now, it's still not the expectation of anyone serious about security that the platform is airtight against malware. Compared to other things malware can likely do on OSX, these seem pretty benign. The Keychain and BID things are certainly bugs, but I can see why they aren't hair-on-fire priorities.

Unfortunately, the iOS URL thing is I think extraordinarily well-known, because for many years URL schemes were practically the only interesting thing security consultants could assess about iOS apps, so limited were the IPC capabilities on the platform. There are surely plenty of apps that use URLs insecurely in the manner described by this paper, but it's a little unfair to suggest that this is a new platform weakness.

2
andor 14 hours ago 5 replies      
Quick summary of the keychain "crack":

Keychain items have access control lists, where they can whitelist applications, usually only themselves. If my banking app creates a keychain item, malware will not have access. But malware can delete and recreate keychain items, and add both itself and the banking app to the ACL. Next time the banking app needs credentials, it will ask me to reenter them, and then store them in the keychain item created by the malware.

3
MagerValp 13 hours ago 1 reply      
That paper is rife with confusing or just plain wrong terminology, and the discussion jumps between Android, iOS, and OS X, making it really hard to digest. I think these are the bugs they have discovered, but if anyone could clarify that would be great:

The keychain can be compromised by a malicious app that plants poisoned entries for other apps; when those apps later store data that should be private, it ends up readable by the malicious app.

A malicious app can contain helper apps, and Apple fails to ensure that the helper app has a unique bundle ID, giving it access to another app's sandbox.

WebSockets are unauthenticated. This seems to be by design rather than a bug though, and applications would presumably authenticate clients themselves, or am I missing something?

URL schemes are unauthenticated, again as far as I can tell by design, and not a channel where you'd normally send sensitive data.

4
SlashmanX 14 hours ago 1 reply      
The paper in question: http://arxiv.org/abs/1505.06836
5
therealmarv 14 hours ago 8 replies      
So Apple was aware of this for 6 months and is doing NOTHING, not even communicating?! How seriously do they take security and fixing it (at least within 6 months)?
6
StavrosK 14 hours ago 4 replies      
"Boffins"? Isn't that rather dismissive, as in "oh, look at what those crazy boffins cooked up now!"?
7
drtse4 14 hours ago 1 reply      
From the paper:

> Since the issues may not be easily fixed, we built a simple program that detects exploit attempts on OS X, helping protect vulnerable apps before the problems can be fully addressed.

I'm wondering if the tool is publicly accessible, couldn't find any reference to it.

8
dodongogo 9 hours ago 2 replies      
It sounds like a temporary fix for the keychain hack on iOS would be to just never use the SecItemUpdate keychain API, and always use SecItemDelete followed by SecItemAdd with the updated data which according to http://opensource.apple.com/source/Security/Security-55471/s...:

> @constant kSecAttrAccessGroup ...Unless a specific access group is provided as the value of kSecAttrAccessGroup when SecItemAdd is called, new items are created in the application's default access group.

If I understand this correctly, that would always make sure that when an existing entry is updated in an app, the 'hack' app would again be restricted from accessing the entry's data. It could still clear the data, but wouldn't be able to access the contents.

The paper seems to note this as well:

> It turns out that all of [the apps] can be easily attacked except todo Cloud and Contacts Sync For Google Gmail, which delete their current keychain items and create new ones before updating their data. Note that this practice (deleting an existing item) is actually discouraged by Apple, which suggests to modify the item instead [9].

9
coldcode 11 hours ago 0 replies      
The first defense they can perform is to change the automatic checks in the App Store review process to identify the attack in a malicious app and stop it from being approved. This could be fairly easy; of course, Apple doesn't tell anyone what they do in this process, so we have no way to verify it. Still, you have to consider how the attack could be hidden, but since it uses known API calls in an uncommon way, I think this is quite doable.

The second defense is more complex, changing the way Keychain API works without breaking every app out there is much more complex. Not knowing much about this is implemented it might take a lot of testing to verify a fix without breaking apps.

The last thing they can also do is to build a verified system tool that checks the existing keychain for incorrect ACL usage. You can't hide the hack from the system. This way Apple could fix the ACL to not allow incorrect usage and not give access where it doesn't belong. I think this is fairly easy to do since it will break very little.

This is why building security is hard no matter who you are and everyone gets it wrong sometimes. At least Apple has the ability to readily (except for point 2) repair the problem, unlike Samsung having to have 800 million phones somehow patched by third parties to fix the keyboard hack.

10
0x0 14 hours ago 2 replies      
Anyone have any more information about (or even a source for) "Google's Chromium security team was more responsive and removed Keychain integration for Chrome noting that it could likely not be solved at the application level"?

Is this going to happen in an upcoming stable release? What is it being replaced with?

11
w8rbt 13 hours ago 3 replies      
The fundamental design flaw of all of these compromised password managers, keychains, etc. is that they keep state in a file. That causes all sorts of problems (syncing among devices, file corruption, unauthorized access, tampering, backups, etc.).

Edit - I seldom downvote others and the few times I do, I comment as to why I think the post was inappropriate. What is inappropriate about my post?

Few people stop and think about the burden of keeping state and the problems that introduces with password storage. Many even compound the problems by keeping state in the cloud (to solve device syncing issues). It's worth discussing. There are other ways.

12
jusio 13 hours ago 1 reply      
Oh God. After reading the paper I wouldn't expect a fix from Apple anytime soon :(
13
jpmoral 12 hours ago 0 replies      
Okay, so if a user only retrieves Keychain items manually (unlock keychain, view password, type/paste into app/website) and never allows apps to access it, is s/he safe?
14
gchp 14 hours ago 1 reply      
Well, shit. Finally I feel justified for never (read: rarely) using the "Save password" feature in my web browser.

Does anyone know if Apple have done anything towards resolving this in the 6 month window they requested? Slightly worrying now that this has been published without a fix from Apple. I don't really download apps very often on my Mac, but probably won't for sure now until I know this has been resolved. Annoying.

15
ikeboy 13 hours ago 1 reply      
I wonder if the new "Rootless" feature prevents this, and if it was developed because of this.
16
wahsd 13 hours ago 0 replies      
Bravo, Apple. Humongous security hole and you don't address it in six months?

I hope it's being readied for inclusion in 8.4. We all know how it bruises Apple's ego to have to patch stuff without acting like it's a feature enhancement.

17
glasz 10 hours ago 0 replies      
they've known for half a year and still, just 2 weeks back, cook is cooking things up about their stance on encryption and privacy [0]. you've gotta love the hypocrisy on every side of the discussion. it's so hilarious that it makes me wanna do harm to certain people.

[0] http://9to5mac.com/2015/06/02/tim-cook-privacy-encryption/

18
vbezhenar 12 hours ago 0 replies      
Don't run untrusted apps outside of virtual machines. Too bad that web taught us to trust the code we shouldn't trust. Noscript must be integrated in every browser and enabled by default. Sandboxing was and will be broken.
19
josteink 13 hours ago 3 replies      
Once again goes to show that Apple is mostly interested in the security of its iStore, platform lock down and DRM.

I'm not exactly shocked.

Just for kicks... Does anyone remember the "I'm a PC" ads, where Macs were magically "secure" and couldn't get viruses or be hacked or anything? Turns out, with market share, they can! Just like Windows. Strange thing, eh?

The Art of Command Line github.com
453 points by zalzal  2 days ago   130 comments top 29
1
scrollaway 2 days ago 6 replies      
> To disable slow i18n routines and use traditional byte-based sort order, use export LC_ALL=C (in fact, consider putting this in your ~/.bashrc).

Do not. Having a non-utf8 locale means you won't be able to handle utf-8 sanely ("that's why it's faster") and it will break at the most inexplicable times. Any non-latin1 character appearing in your prompt or command line with this will mess its spacing up for example. Do not do not do not.

Hell I even check for it in my .zshrc: https://github.com/jleclanche/dotfiles/blob/master/.zshrc#L3...
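A minimal guard in the same spirit as that .zshrc check (a sketch, not copied from the linked dotfiles; the grep pattern is an assumption):

```shell
# Warn if the active locale is not UTF-8 capable; a blanket LC_ALL=C
# would trip this and flag the multibyte breakage described above.
if ! locale 2>/dev/null | grep -qi 'utf-\?8'; then
  echo "warning: non-UTF-8 locale; multibyte input will misbehave" >&2
fi
```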

Good post otherwise.

2
cmrx64 2 days ago 2 replies      
> StrictHostKeyChecking=no

And welcome to MITM haven. This is awful advice if you care about the first S in SSH.

3
bdamm 2 days ago 5 replies      
If you learn bash, and you learn vi, then the next most glorious addition is:

set -o vi

Then you have vi keys in your shell. And it is marvelous.
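For the curious, the mode can be flipped and verified even non-interactively (a sketch; the column spacing of set -o output varies between bash versions):

```shell
# Switch readline to vi editing mode, then list the option to confirm
# it took effect (the matching line should read "vi ... on"):
set -o vi
set -o | grep -w 'vi'
```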

4
jwcrux 2 days ago 2 replies      
> Fluency on the command line is a skill that is in some ways archaic...

I don't see this to be the case at all. Plus, having this be the opening line may cause people to equate archaic=I don't need this.

I'd suggest (would pull request but afk) that you remove "is a skill that is in some ways archaic".

5
agumonkey 2 days ago 0 replies      
Just throwing this in: http://mywiki.wooledge.org/BashGuide (it's listed 2 indirections deep, so it will probably be missed).

Every time you have an issue with the shell, go there.

6
michaelvkpdx 2 days ago 0 replies      
I learned a dozen new things in the first 100 lines or so, that will save me hours, and I've been living in bash for 20 years.

Thanks for this!

8
MachinaX 2 days ago 1 reply      
The Linux Command Line (free pdf): http://linuxcommand.org/tlcl.php
9
gitaarik 1 day ago 1 reply      
One really useful shortcut I learned in bash is Ctrl-/, which is a sort of undo for bash when composing a command. Can't really explain how it works; you should try it out yourself.

Also really handy is Ctrl-Y, which pastes the characters last cut by Ctrl-W, Ctrl-U or Ctrl-K.

10
webnrrd2k 1 day ago 0 replies      
If you're into the command line, I'd recommend getting a physical copy of Unix Power Tools [1] and spending some time with it. This is a nice article, but Unix Power Tools is better in almost every way for learning the basics (and more) of the Unix command line. This article mentions a few more modern tools, but Unix Power Tools has a far better explanation of what's going on.

[1] http://www.amazon.com/Unix-Power-Tools-Third-Edition/dp/0596...

11
seri 1 day ago 1 reply      
Unix tools are very powerful, but they are not as intuitive as I would like. It would be nice if we could have an httpie for process management, one for text manipulation, another for system stats, and so on.
12
karmicthreat 2 days ago 7 replies      
So this is tangentially related, but why do people seem to push Vi? Nano is a perfectly serviceable editor. If I need to do extensive editing I always end up loading the file into Atom, Sublime or some other editor anyway.

I've never really had to use a console editor for more than 10s of lines.

13
stinos 1 day ago 1 reply      
Fluency on the command line is a skill now often neglected or considered archaic

By whom? (honest question: even the most GUI oriented people I know reckon the power of command line skills)

14
barely_stubbell 2 days ago 1 reply      
I'm glad the author took time to mention sl - every sysadmin should make sure that it's installed on all the boxes they manage.
15
craneca0 2 days ago 0 replies      
The new csysdig curses CLI [1] unifies and replaces a bunch of the system debugging tools listed here.

[1] https://github.com/draios/sysdig/wiki/Csysdig%20Overview

16
MarcScott 1 day ago 1 reply      
I'd never even thought about commenting a line I was half way through writing, when suddenly realising I'd forgotten the correct arguments.

I normally end up opening another terminal and using man there.
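The trick itself relies on nothing more than '#' turning the whole line into a no-op, so the half-typed command is parked in history instead of lost (a sketch; the rsync draft below is made up):

```shell
# Press Ctrl-A (start of line), type '#', then Enter: the shell runs nothing,
# but the commented draft lands in history for recall with the Up arrow.
# rsync -avz ./src backup:/srv/    <- made-up draft, parked as a comment
echo "nothing ran; the draft command sits in history"
```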

17
AdieuToLogic 1 day ago 0 replies      
Much, much more can be found at:

http://www.commandlinefu.com/commands/browse

18
hamburglar 2 days ago 1 reply      
The author should prepare to catch some hell for advocating 'ForwardAgent=yes' in ssh configs.
19
mediocrejoker 1 day ago 3 replies      
> To locate a file by name in the current directory, find -iname *something* (or similar). To find a file anywhere by name, use locate something (but bear in mind updatedb may not have indexed recently created files).

I think this will glob unless you escape the asterisks. Also on my system (debian 8) I need to put the directory to search first, or not at all:

find . -iname \*something\*

find -iname \*something\*

edit: hackernews ate all my asterisks
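Quoting the pattern sidesteps the globbing problem entirely: the shell never expands the asterisks, and find receives them intact (a sketch using a throwaway directory):

```shell
# Single quotes stop the shell from globbing before find ever runs;
# -iname then matches case-insensitively against the literal pattern.
mkdir -p /tmp/find-demo && touch /tmp/find-demo/SomeThing.txt
find /tmp/find-demo -iname '*something*'   # → /tmp/find-demo/SomeThing.txt
```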

20
ryenus 2 days ago 1 reply      

 cat hosts | xargs -I{} ssh root@{} hostname
glad to know this one :-)

 echo y | xargs -Ix echo x
how tricky!
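What makes the first one work is that -I names a placeholder that xargs substitutes for each input line, running the command once per line (a sketch with echo standing in for ssh, and made-up hostnames):

```shell
# Each input line replaces {} in the command template, one invocation per line:
printf 'web1\nweb2\n' | xargs -I{} echo "ssh root@{} hostname"
```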

21
cbd1984 2 days ago 3 replies      
There's no reason to learn vi if you know Emacs. If you claim that Emacs isn't installed everywhere, guess what: Neither is vi.

If you want an "installed everywhere" editor, learn ed. If you're willing to take an editor with you (or otherwise make sure a specific editor is everywhere you are), there's no reason it has to be vi.

22
hobarrera 1 day ago 0 replies      
Fun fact: alt+# is impossible on a Spanish keyboard, because alt+3 = #. So you actually need to hold alt to type the character in the first place.
23
widdershins 2 days ago 0 replies      
Seems like a good introduction. I'm an amateur who's been using IDEs up until now. I'm just starting to become dimly aware of the power at the command line, and slowly learning in a haphazard way. This could be very helpful.
24
natch 1 day ago 1 reply      
Another one for obscure but useful: jot

# print 10 values from 1 to 10, with leading zeros:

for n in `jot -w "%04d" 10 1 10`; do echo $n; done
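On Linux boxes, where BSD jot is usually absent, GNU seq covers the same case (a sketch; %04g is seq's printf-style zero-padded width format):

```shell
# Ten zero-padded values, 0001 through 0010, one per line:
for n in $(seq -f "%04g" 1 10); do echo "$n"; done
```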

25
joelthelion 1 day ago 1 reply      
No mention of autojump? :-p
26
Wonnk13 2 days ago 1 reply      
pretty good writeup. Learning nohup was a life changing experience for me.
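The core of nohup in one runnable snippet (a sketch; the log path is arbitrary):

```shell
# nohup makes the job immune to the hangup signal sent when the terminal
# closes; output is redirected since there is no terminal left to print to.
nohup sh -c 'echo survived' > /tmp/nohup-demo.log 2>&1 &
wait $!
cat /tmp/nohup-demo.log   # the log now holds the job's output
```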
27
mozumder 1 day ago 1 reply      
Man so I guess tcsh/csh lost the shell wars?
28
Ardebou 1 day ago 0 replies      
Before that I was not so familiar with cmd. After that I was.
29
cbd1984 1 day ago 1 reply      
There's no reason to learn vi.
UBlock Origin mozilla.org
415 points by grflynn  2 days ago   318 comments top 21
1
decasteve 2 days ago 4 replies      
From the horse's mouth [1]:

Main reason I published on AMO is because a feature which I think is important was removed from uBlock (per-site switches). That both versions diverged significantly enough so soon is not in my control.

When ABP added "acceptable ads" in their fork, they also created a demand for a version uncompromised by the "acceptable ads" principle, hence ABE happened. When uBlock removed the per-site switches, a demand was created for a version of uBlock with the per-site switches.

This is the reality of GPL: anybody can fork and create their own flavor if they disagree with the pre-fork version. This should not be seen as wrong when it happens, it's expected. In the big picture, users win.

As far as trust is concerned, both versions can be trusted -- that should not be an issue in either case: the development and source code is public in both cases (every single code change can be easily browsed on github).

Edit: Notice that I still contribute fixes to uBlock since the fork, and also try to deal with filed issues (those issues which are relevant to both versions), so it's not like I am ignoring uBlock to the advantage of uBlock Origin -- I also want uBlock to work fine for whoever uses it, I just strongly disagree with the removal of the per-site switches feature.

[1] https://github.com/gorhill/uBlock/issues/38#issuecomment-966...

2
tzs 2 days ago 4 replies      
Is there a good ad blocker that can be set to NOT block by default, and that provides an easy, one-button-or-so interface to turn blocking on for the site currently being viewed? I want to operate under a policy of giving new sites I visit a chance to show me that they can advertise responsibly, and blacklist them if they show that they cannot.

All the ones I've tried so far (AB, ABP, uBlock) are strongly oriented toward blocking everywhere by default and whitelisting sites that you do not want to block on.

I suspect that most people who use an ad blocker do so not because of some moral objection to the very concept of advertising to pay the bills so that a site can provide free content to the general public. They use an ad blocker because they got tired of sites whose ads do obnoxious things like block the content, move the content around [1], make noise, put distracting animation in your peripheral vision, and so on.

By blocking all ads by default, the current ad blockers break the feedback loop that should be pushing sites toward ads that don't have the problems mentioned in the previous paragraph.

[1] moving the content around is what got me to install an ad blocker. Gocomics.com started doing ads that slide in from the left side, pushing the comic you are reading to the right. If you have zoomed in to make the comic more readable, this could push the right panel of the comic off the screen. Since the slide in ads did not run on every page (and when they did run, it was with a delay of a few seconds), you could not anticipate them and position the zoomed comic appropriately.

3
fridek 2 days ago 20 replies      
Honest question - what business model for free content do you see other than ads? I understand all the privacy and distraction issues related, but increasingly many news sites I read feature only paid content. I suppose it's connected to the rise of ad blocking.

At the moment I'm not hosting any content of that kind myself, but I wanted to publish a game and I'm facing the same question. Should I sell my soul to the devil and work on freemium, coins, and exploiting OCD and rich parents' kids, or host ads and risk not earning a dime because every single gamer I know is tech-savvy enough to have an ad blocker?

4
nickysielicki 2 days ago 3 replies      
You should all check out umatrix if you have 15 minutes to spare.

Made by the same guy, it's adblocking and noscript done exactly how you want it done. Block pulled-in third-party sites by default, accept all on the primary domain you're looking at, and especially block from domains on a blacklist.

It breaks on a few sites, but it's not in my way as much as noscript and it's a 5 second job to get most any website to work. If you don't know how the web works, you'll be frustrated. If you understand how the modern web works, you'll wonder how you ever did without.

5
mrmondo 2 days ago 9 replies      
How is this different from the normal / existing uBlock?

https://github.com/chrisaljoudi/uBlock/releases

6
StavrosK 2 days ago 4 replies      
I've tried uBlock a few times, but it's always been inferior to Ghostery. I want to choose what I block on each page, e.g. sometimes I want to load Disqus, sometimes I don't, etc. uBlock doesn't allow me to do any of that, does anyone know a lighter alternative to Ghostery that will still have sane lists and allow me to unblock elements on a per-page basis?
7
Kurtz79 2 days ago 0 replies      
Yes, it's great and I have been using it for quite a while (on Chrome as well), but it's nothing new, unless I am missing something.

It would be helpful if the submitter also wrote a comment about the reasons of the submission, when they are not immediately apparent.

8
skrowl 2 days ago 0 replies      
Love uBlock Origin. I upgraded from Ad Block Plus to uBlock (and finally uBlock Origin) and was amazed at how much faster it is.
9
gluegadget 2 days ago 0 replies      
uMatrix for Firefox seems to be released as well: https://addons.mozilla.org/en-US/firefox/addon/umatrix/

Policeman was the closest thing that I know of to uMatrix for Firefox users, but at least for me Firefox is always complaining that Policeman is slowing down the browser. And also, it's nice that you can easily import your Chrome uMatrix rules to Firefox.

10
jedberg 2 days ago 1 reply      
If people who made ad blockers were ethical, they would make their software easily detectable by the websites, so those websites could choose not to service those users.
11
snissn 2 days ago 0 replies      
The eff released their own ad blocker type tool available here https://www.eff.org/privacybadger and it's great. It doesn't come with a singular list of sites to block, but instead blocks domains that are seen across many domains.
12
iamcreasy 2 days ago 0 replies      
Why is it removing the YouTube logo in the top left corner? Just because it's showing that YouTube is streaming E3? :/

How do I let the extension know that some ads are part of the page?

13
of 2 days ago 1 reply      
It's better to download ublock from their source repo here: https://github.com/chrisaljoudi/uBlock/releases

> Due to Mozilla's review process, the version of uBlock available from the Add-ons homepage is currently often outdated. This isn't in our control.

14
fapjacks 2 days ago 0 replies      
The amount of vitriol from pro-advertising forces in this thread is pretty hilarious. As if the planet would stop spinning if people used adblock.
15
hnama 2 days ago 1 reply      
I know uBlock is on Safari, but will we see a uBlock Origin on Safari? (Especially after the per-site switches change.)
16
kissickas 2 days ago 2 replies      
I've seen comments on reddit saying that often legitimate "Pay Now" buttons etc. are blocked by this add-on. Can anyone with recent experience weigh in? I don't really feel like switching from ABP which I'm perfectly happy with unless this is 99% kink-free.
17
DaFranker 2 days ago 0 replies      
Heh. Gotta love how the default filter already blocks ||sourceforge.net^
18
abrowne 2 days ago 1 reply      
Another option for Firefox is the built-in "tracking protection". It is off by default, but can be enabled via about:config (set privacy.trackingprotection.enabled to true). Works on Android, too.
19
leke 2 days ago 0 replies      
I have uBlock installed from Allex Vallat. What is the difference?
21
alexnewman 2 days ago 0 replies      
This is the main reason I've switched to Firefox on android
I Do Not Agree to Your Terms mikeash.com
391 points by ingve  2 days ago   96 comments top 21
1
declan 2 days ago 3 replies      
It would be one thing if all Apple did was send publishers email saying: "We love your writing. We plan to include your RSS feed in Apple News unless you opt-out (here's how to do it). We will respect your copyright and will only display fair use excerpts."

But Apple didn't do that. Instead their email includes language about "indemnifying Apple" from legal liability. It also includes open-ended language about "placing advertising next to or near your content...without compensation to you" in a way that might ordinarily require a license. I wonder if this language would be viewed as legally binding if Apple can demonstrate that the recipient, say, logged into News Publisher to check their settings but chose not to opt out.

On the other hand, as a practical matter, I suspect bloggers will not be lining up to sue Apple in federal court...

[By way of disclosure, I'm a founder of https://recent.io/, which is in the same space as Apple News. We plan to submit our iOS version for Apple's TestFlight beta testing program this week. Fingers crossed!]

2
mcv 2 days ago 1 reply      
I really doubt that mail from Apple is in any way legally binding, but just in case it is, send them a mail (preferably at a different email address) stating that unless they reply with "NO", they agree to pay you $10,000 per month for including your feed in their news.
3
notahacker 2 days ago 1 reply      
Look at it from the company lawyers' perspective. They've watched Google have all sorts of legal trouble with companies that objected to Googlebot picking up their news content despite the fact they could have opted out by modifying their robots file (if they didn't want to have their cake and eat it by receiving traffic from Google indexing their headlines whilst suing Google for having the temerity to index their headlines...). So even though Apple are going after a market where a presumption of "right to excerpt" is even less of a grey area, since RSS's raison d'être is to allow third parties to syndicate site-owner-controlled excerpts of content, they're still covering their ass by publicising their intent and giving feed owners an easy avenue to opt out. Additionally, they point out they might slap banner ads on the aggregation service, and you might want to sort out any copyright/legal issues in advance because they're not accepting any responsibility other than passing the message on. Apple aren't doing anything any other aggregation service doesn't do without publicising any opt-out process they have, and they're not imposing any burden on the author they don't already face.

Sure, the tone could use a little work, as could everything written as a legal document ever, but they appear to have managed to do cover their ass in an email which (unlike standard EULAs, especially Apple's) is actually short enough to be readable. I'm not really sure this is one for HN to be up in arms about.

I have more issue with the companies that provide RSS feeds whilst simultaneously displaying terms and conditions prohibiting creating hyperlinks to their website without written permission, which is the sort of weird illogic of the legal departments of traditional publishers this dubiously-drafted message is really aimed at.

4
ErikAugust 2 days ago 4 replies      
Let me get this straight:

Apple comes out with an Ad Blocker for Safari. Then it comes out with Apple News, which places ads next to RSS content, and doesn't compensate the author?

Come on, man. That is infinitely upsetting.

5
granos 2 days ago 1 reply      
You can't agree to a contract by inaction.

I suspect that whether Apple violates his copyright would come down to exactly how they implemented the system. If they pulled posts directly from his servers they would probably be ok. If they cache/host the content themselves it could get a bit murkier.

6
nothrabannosir 2 days ago 1 reply      
> I don't like the idea of showing ads next to my content in this situation, but I'm pretty sure I have no right to control that.

You have every right in the world to control that. It's called copyright. If you choose a license for your content that prohibits putting ads next to it, then there you go: you used your right to control that.

Not saying it's practical, or business wise, or good PR, or, ... But definitely your right.

7
dhimes 2 days ago 1 reply      
Brilliant response. They send him a letter declaring a legal contract on his RSS feed unless he specifically declines; he declines via the RSS feed itself, which the lawyers will probably never read.

I wish I was this clever.

8
AnimalMuppet 2 days ago 1 reply      
Disclaimer: IANAL.

But I seem to recall there was something like this with book clubs, that they'd send you a piece of mail, and you had to reply to opt out. Otherwise they would start sending you books, and charging you for them.

That kind of garbage became illegal in the 1970s or thereabouts. I can't imagine that doing it via e-mail makes it any more legal.

The only leg that Apple has to stand on is that it's going to be really difficult (expensive) to sue them. I'm pretty sure that the actual law is not on their side here.

9
markbnj 2 days ago 1 reply      
IANAL but I can't see how not replying to their email could possibly bind the publisher to anything. In effect Apple is using the content absent any agreement, and simply trusting that most people will be thrilled and agree out of hand.
10
brillenfux 2 days ago 3 replies      
It's funny how they don't even give a damn about your copyright. If you did this with Apple's content you would get shit left and right, but since it's Apple doing it to you this time, they can just assume your consent by implication.

Glorious!

The state of copyright. Right, Fucking, There!

11
ChuckMcM 2 days ago 0 replies      
The terms, as stated, are ridiculous. If you write an article that gets picked up by the NY Times, they pay you for it. If you write an article for a magazine, they pay you for it. Once they do, and you've granted them a partial copyright (in this case most likely second serial rights), then every time they pull an article to include in their "news" they should pay the author.

The much better way to do this would be: "We'd like to include your RSS feed in our News application. If anyone reads an article that you have provided to us, we will make a one-time payment to you of $X (say $250); in exchange for that payment you give us the right, in perpetuity, to continue to provide access to that article through our News application."

Basically, if your writing is bringing people to the News application, they should pay for that. And once they have, they can monetize their end of it just like newspapers and magazines do, by putting ads on the site or adding other junk.

It sounds like they are trying to construct some novel theory[1] where if someone managed to find your blog and read it they wouldn't have paid for it, so Apple doesn't need to pay for it if they are using it to attract people to their app.

So let's look at it from an information economics perspective: the author's article derives its value from the author's artistic expression. Nobody else is going to write it just like they did, or how they did. The author "spends" that value to attract readers to the blog, and readers "pay" for that value by being exposed to advertisers who have paid the author to be on their page. So you get that?

The reader "buys" their content by agreeing to be exposed to advertising.

The author "sells" their content by providing access to their readers to advertisers.

The advertiser "buys" access to the content by paying the author when a reader interacts with the ad.

Apple is trying to step in, get the content for free, bring readers to their News App and then sell access to their readers to the Advertisers. You can see that without the Author's content there is no "value" to readers in the news app. Authors who agree to this are simply dropping money into Apple's lap.

[1] It isn't really novel; people try to rip off artists by offering them 'exposure' all the time.

12
cjensen 2 days ago 3 replies      
I'm not seeing a problem here. There is tons of precedent for taking an RSS feed and displaying advertising next to it. Google Reader did it. NetNewsWire did it.

Having an RSS feed implies that you are okay with someone copying what is found in the feed and displaying it to users. Since there is no technical mechanism defined to flag allowable uses, there is presently no way for an RSS author to signal intent or add conditions like "may not be commingled with advertising."

It seems to me that Apple is just providing a heads up to the RSS author what is going to happen, and giving them the chance to opt-out ahead of time. Apple could have done this without asking.

The only weirdness is making it seem like the RSS author has accepted terms unless they respond. That was a poor choice. They should have said "this is how we will run things; to stop us from using your RSS feed, click this link."

13
tempodox 1 day ago 0 replies      
This is evil strong-arm practice. Apple is obviously counting on the odds of publishers not being able to sue them over this copyright abuse. All you could achieve is probably some form of cease & desist, but no damage payments (IANAL).

Is Apple losing it? I can't imagine them performing a stupid fuck-up like this five years ago. This botch job just makes them look like evil bullies to everyone involved.

14
HelloMcFly 2 days ago 3 replies      
Honest question here: if you're pushing your content out via an RSS feed, does every RSS aggregator have to get permission to use it in their aggregation? How does that work with the thousands of other RSS aggregators?
15
rspeer 2 days ago 1 reply      
In all the WWDC hype I didn't actually hear about Apple News.

This is interesting. Is Apple going to actually revive RSS as a tool for reaching the masses? (As opposed to a few persistent geeks, of which I'm one but I can tell there aren't that many overall?)

Bullshit legalese aside, that sounds like a great thing.

16
bonaldi 2 days ago 1 reply      
Does the RSS spec even provide a way to include machine-readable terms/licensing in a feed? Given its source (and the gentler world in which it and Atom were born) I'm guessing not, but it seems like a big omission.
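For what it's worth: core RSS 2.0 defines no license element, but two extensions come close, the Creative Commons RSS module and Atom's `<rights>` element. A sketch of what a feed-level license declaration can look like under that module (the namespace and element name follow the module as I understand it; the titles, links, and license URL are just examples):

```xml
<rss version="2.0"
     xmlns:creativeCommons="http://backend.userland.com/creativeCommonsRssModule">
  <channel>
    <title>Example Blog</title>
    <link>https://example.com/</link>
    <!-- feed-level license; the module also allows one per <item> -->
    <creativeCommons:license>
      https://creativecommons.org/licenses/by-nc/4.0/
    </creativeCommons:license>
  </channel>
</rss>
```

Nothing requires an aggregator to read or honor it, though, which is arguably the real gap.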
17
an_account_name 1 day ago 0 replies      
No meeting of the minds, no contract. Unless you take some positive action to confirm your acceptance of Apple's terms, there's no contract.

(I am not a lawyer, and this is not legal advice).

18
DannyBee 2 days ago 0 replies      
Generally, this kind of agreement has to be signed and in writing :) So this is not going to go well for Apple. I suspect they know this though.
19
benmanns 2 days ago 1 reply      
> But, of course, the lawyers have to get involved.

The nonsense that follows would suggest that the lawyers were not involved.

20
huuu 2 days ago 1 reply      

  if (User-Agent == "Apple") {
      showNonsenseNewsFeed();
  } else {
      showNewsFeed();
  }
But seriously: wouldn't it be enough to place a use policy or terms of use on mikeash.com restricting commercial use of the feed?
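Joke aside, switching feeds on the User-Agent really is only a few lines in most stacks. A minimal sketch, with the caveat that the "AppleBot" token is a placeholder rather than a documented UA string, and that User-Agent headers are trivially spoofable anyway:

```python
def select_feed(user_agent: str) -> str:
    """Choose which feed variant to serve based on the client's User-Agent.

    "AppleBot" is a hypothetical token here, not a verified UA string.
    """
    if "AppleBot" in user_agent:
        # e.g. a feed whose items carry an explicit terms-of-use notice
        return "/feeds/restricted.xml"
    return "/feeds/full.xml"
```

Inside a web app you would call something like `select_feed(request.headers.get("User-Agent", ""))` and serve the resulting file.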

21
glenra 2 days ago 2 replies      
I'm sensing poor reading skills. This seems to be the offending bit:

"If we receive a legal claim about your RSS content, we will tell you so that you can resolve the issue, including indemnifying Apple if Apple is included in the claim."

That does not ACTUALLY SAY "you agree to indemnify Apple." It's a conditional with a lot of preconditions but even if they were all met it obviously doesn't constitute a legal requirement. It's saying "if it happens to come up that somebody sues us, we'll help you resolve the issue, which may involve us AT THAT TIME asking you to indemnify Apple as part of some sort of agreement we haven't yet put forward."

It's a heads-up. At some point in the future IF legal exposure becomes a problem they expect that they will have a policy about it which would involve the publishers deliberately and explicitly agreeing to an ACTUAL CONTRACT in which both sides get some tangible benefit. This email is not that contract.

Early vs. Beginning Coders zedshaw.com
387 points by jessaustin  13 hours ago   155 comments top 24
1
schoen 8 hours ago 7 replies      
Is there a book somewhere that tries to set out all of the things that experts know about computing that they don't remember learning?

(In Zed Shaw's conception, this might correspond to "learn computing the hard way".)

I see his examples and other examples here in this discussion, and it makes me wonder about the value (or existence) of a very thorough reference.

I've also encountered this when working with lawyers who wanted to have a reference to cite to courts about very basic facts about computing and the Internet. In some cases, when we looked at the specifications for particular technologies or protocols, they didn't actually assert the facts that the lawyers wanted to cite to, because the authors thought they were obvious. I remember this happening with the BitTorrent spec, for example -- there was something or other that a lawyer wanted to claim about BitTorrent, and Bram didn't specifically say it was true in the BitTorrent spec because no BitTorrent implementer would have had any doubt or confusion about it. It would have been taken for granted by everyone. But the result is that you couldn't say "the BitTorrent spec says that" this is true.

Another example might be "if a field is included in a protocol and neither that layer nor a lower layer is encrypted with a key that you don't know, you can see the contents of the field by sniffing packets on the network segment". It might be challenging to find a citation for this claim!
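That last claim is easy to make concrete: decoding a cleartext protocol field from captured bytes takes a few lines and no key at all. A sketch using a made-up 8-byte header (the field layout is invented purely for illustration):

```python
import struct

def parse_header(packet: bytes) -> dict:
    """Decode a hypothetical cleartext header: 2-byte type, 2-byte length,
    4-byte sequence number, all big-endian. No key required: that's the point."""
    msg_type, length, seq = struct.unpack("!HHI", packet[:8])
    return {"type": msg_type, "length": length, "seq": seq}

# Any passive observer on the network segment can do this to a sniffed packet:
captured = struct.pack("!HHI", 1, 512, 42) + b"payload..."
header = parse_header(captured)
```

If the layer (or a layer below it) were encrypted with an unknown key, `packet[:8]` would be opaque ciphertext and this decode would yield garbage; that is the whole content of the "obvious" claim.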

So we could also wish for a "all our tacit knowledge about computing, programming, and computer networking, made explicit" kind of reference. (I'm not sure what kind of structure for this would be most helpful pedagogically.)

2
SCHiM 11 hours ago 4 replies      
This is good. I've had problems that were somewhat related to what the author talks about.

When I was learning C#, already quite fluent in C/C++, I had a big problem with C#'s type system/management. I'd been reading guides in the first category the author mentions, e.g. "not really a beginner, but new to this language".

I was trying to retrieve the bytes that a certain string represented. I looked for ages, and everywhere everyone mentioned that "this shouldn't be done", "just use the string", etc. A Stack Overflow answer mentioned a way to use an 'encoding' to get the bytes, and this seemed to be the only way.

How strange, I thought; I just want access to a pointer to that value, why do I have to jump through all these hoops? None of the guides I was reading provided an answer, until I found a _real_ beginners' book. This book, helpfully starting at the real beginning of the language (the type system), finally gave me the answer I was looking for:

.NET stores/handles all strings by encoding them with a default encoding. It turned out that the whole notion of 'strings are only bytes' that I carried over from C++ does not work in C#. All those other helpful guides gleefully glossed over this, and started right in at lambdas and integration with various core libraries, instead of focusing on the basics first.
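The same distinction is visible in miniature in Python, used here only as an illustration of the idea: a string is a sequence of characters, and bytes only exist relative to a chosen encoding.

```python
s = "héllo"

# There is no single "the bytes" of a string, only bytes under a chosen encoding.
utf8 = s.encode("utf-8")        # 6 bytes: 'é' becomes two bytes
utf16 = s.encode("utf-16-le")   # 10 bytes: two bytes per character

assert utf8 != utf16            # same string, different byte representations
assert utf8.decode("utf-8") == s  # round-trips only with the matching encoding
```

This is why "just give me the pointer" has no C#/.NET (or Python) answer: you have to say which encoding you want the bytes in.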

3
danso 10 hours ago 5 replies      
I've been teaching coding to beginners for the past year now...and even after having done coding workshops/tutorials for many years previous, I've found I can never overestimate how wide the knowledge gap is for new coders.

Yesterday I was talking to a student who had taken the university's first-year CS course, which is in Java...she complained about how missing just one punctuation mark meant the whole program would fail...While I can't passionately advocate for the use of Java in first-year courses (too much boilerplate, and the OOP part is generally just hand-waved-away)...I've realized that the exactness of code must be emphasized to beginners. And not just as something to live with, but something to (eventually) cherish (for intermediate coders, this manifests itself in the realization that dynamic languages pay a price for their flexibility over statically-typed languages).

Is it a pain in the ass that missing a closing quotation mark will cause your program to outright crash, at best, or silently and inexplicably carry on, at worst? Sure. But it's not illogical. Computers are dumb. The explicitness of code is the compromise we humans make to translate our intellectual desire to deterministic, wide-scale operations. It cannot be overemphasized how dumb computers are, especially if you're going to be dealing with them at the programmatic level...and this is an inextricable facet of working with them. It's also an advantage...predictable and deterministic is better than fuzziness, when it comes down to doing things exactly right, in an automated fashion.
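The point about exactness is easy to demonstrate: a parser either accepts source text or rejects it, deterministically, and one missing character is enough to flip the outcome. A sketch in Python, standing in for the Java example above:

```python
def compiles(src: str) -> bool:
    """Return True if Python's parser accepts the source text."""
    try:
        compile(src, "<demo>", "exec")
        return True
    except SyntaxError:
        return False

good = 'print("hello")'
bad = 'print("hello)'   # the closing quotation mark is missing

assert compiles(good)
assert not compiles(bad)   # rejected every single time: dumb, but predictable
```

The failure is immediate and reproducible, which is exactly the property that later makes debugging possible.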

I think grokking the exactness of code will provide insight to the human condition. While using the wrong word in a program will cause it to fail...we perceive human communication as being much more forgiving with not-quite-right phrasing and word choices? But is that true? How do you know, really? How many times have you done something, like forget to say "Please", and the other person silently regards you as an asshole...and your perception is that the transaction went just fine? Or what if you say the right thing but your body (or attire) says another? Fuzziness in human communication is fun and exciting, but I wouldn't say that it's ultimately more forgiving than human-to-computer communication. At least with the latter, you have a chance to audit it at the most granular level...and this ability to debug is also inherent to the practice of coding, and a direct consequence of the structure of programming languages.

4
jordanpg 9 hours ago 5 replies      
The only important trait I see that matters for either of these groups is a willingness to try things, push buttons, see what happens.

A beginner worries about breaking the computer and doesn't yet understand that any question they have can be typed into a search engine verbatim and will probably be answered with 20 SO posts and 50 blog posts. An early programmer is stumbling down this road.

I don't know that this ethos can be communicated with a book.

I would also recommend that beginners/early programmers learn 1 programming language really well, and ignore the din of people on the internet who claim to effortlessly, expertly jump among 10 languages as part of their day-to-day.

5
kazinator 8 hours ago 1 reply      
I can still visualize what it's like to know nothing, because when I saw a BASIC program for the first time when I was ten, I thought the = signs denoted mathematical equality (equations). How the heck can X be equal to Y + 1, if in the next line, Y is equal to X - 2?

Later, I tried using high values for line numbers just for the heck of it. Can I make a BASIC program that begins at line 100,000 instead of 10? By binary search (of course, not knowing such a word) I found that the highest line number I could use was 65,000 + something. I developed the misconception that this must somehow be because the computer has 64 kilobytes of memory.
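The ceiling described above is almost certainly the range of a 16-bit unsigned integer, which is plausibly how that BASIC stored line numbers; the 64 KB figure is the same power of two counted in bytes, which makes the misconception an easy one. A quick sketch of the arithmetic:

```python
# Largest value a 16-bit unsigned integer can hold:
line_number_ceiling = 2**16 - 1      # 65535: the "65,000 + something" limit

# 64 KB of addressable memory, counted in bytes:
address_space = 64 * 1024            # 65536: same power of two, different quantity

assert line_number_ceiling == 65535
assert address_space == 65536
```

So the two numbers are related only in that both fall out of 16-bit hardware, not because line numbers lived one-per-byte in RAM.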

6
mcgrootz 10 hours ago 1 reply      
Zed Shaw is a natural when it comes to teaching beginners. I recommend his "Learn The Hard Way" books to everyone who is interested in learning to code because they make zero assumptions and start at the VERY beginning. It's stupidly hard to find great books for complete noobs.

I'm totally behind this distinction, and I hope more content publishers adopt something like this.

7
top1nice1gtsrtd 11 hours ago 4 replies      
I actually worked on teaching my 71 year old father Python using this book. One point of difficulty that struck me during that exercise was that I as a programmer had completely internalized the idea that an open paren and a close paren right after a function is a natural way to invoke a function with zero arguments (e.g.: exit() exits Python's prompt. exit doesn't.). The whiplash I felt from finding the questioning of the convention silly to finding the convention silly was amusing to feel. Like it makes sense to a parser but not to a flesh-and-blood contextual-clues-using human. We don't vocalize "open paren close paren" whenever we say an intransitive verb. We just "know" that it's intransitive. Anyway, great article.
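The convention is easy to make concrete: in Python, a bare name evaluates to the function object itself, and only the parentheses perform the call. A minimal sketch, using a stand-in function rather than the real `exit` (the interactive `exit` is an object whose repr prints a usage hint, a hint that exists precisely because of this confusion):

```python
def quit_demo():
    """Stand-in for an intransitive command like Python's exit()."""
    return "bye"

ref = quit_demo       # bare name: just a reference to the function object; nothing runs
result = quit_demo()  # parentheses: the call actually happens

assert callable(ref)
assert result == "bye"
```

To a parser the `()` says "invoke with zero arguments"; to a human, as the parent notes, an intransitive verb never needs that marker.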
8
rday 10 hours ago 2 replies      
I was bitten by this as well; I thought the book was for an "early programmer", not a total beginner.

Hindsight and all, it seems the book would have been better titled "Learn to Program the Hard Way (using Python)", or "Learn to Program the Hard Way (using Ruby)". A total beginner is really trying to learn how to build a program, not trying to learn a particular language (whether they know that or not).

9
smilefreak 2 hours ago 0 replies      
Great article.

I am an instructor for Software Carpentry[1]; the goal of these workshops, in my experience, is to try to help mostly scientists get started on the journey to becoming early programmers.

In biological sciences, with more and more data becoming available, the expert blindness Zed speaks of is a major problem. We need to invent better systems and actually take heed of research-based teaching methods, as Software Carpentry does, if we wish to improve this situation.

[1] https://software-carpentry.org/

10
morganvachon 8 hours ago 5 replies      
I feel like I'm perpetually stuck between what the author describes as "beginner" and "early". I understand what programming is, I can write a bash script that does what I want it to (granted, I have to read a ton of man pages to make sure I understand what it is I want to accomplish), I can write simple programs in Visual Basic or Python or Javascript that do simple tasks. I understand program flow, logic, and all the basics of high-school level algebra.

The problem is, I can't wrap my head around many of the concepts I read about here in the HN comments and elsewhere on programming blogs and such. No matter how much I try to understand it (and by understand it, I mean fully grasp what the person is talking about without having to look up every other word or phrase), I can't seem to put it all together. Things like inverted trees, functional programming (I've heard of Haskell and I'd love to learn it, but I have no head for mathematics at that level), polymorphism, and so on.

Maybe I need to just practice more; maybe I need to pick something interesting from Github and dive into the code to try to understand it better (preferably something well documented of course). Or maybe I need to just stop, and accept that I can whip out a script or simple web thingy if I really need to, and stick to being a hardware guy, which I'm actually good at.

11
huuu 11 hours ago 3 replies      
This is a nice article.

I think it took me three years to understand what a variable was. And I still don't know why it took me so long to understand, or why I suddenly understood it.

It's not that I didn't know that assigning '1' to 'a' would result in 'a' having a value of '1', but I didn't understand the concept and workings behind it. I just thought it was magic.
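One way to make the "workings behind it" concrete: assignment binds a name to a value at that moment; it is not an equation that keeps holding afterwards. A sketch:

```python
a = 1
b = a      # b is bound to the value a holds *right now*
a = 2      # rebinding a does not reach back and change b

assert a == 2
assert b == 1   # assignment is a snapshot, not a live link between names
```

Seeing that `b` stays 1 is often the moment the "magic" turns into a mechanism.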

12
uniclaude 9 hours ago 0 replies      
IMHO, Zed is right. I have been looking for books targeted to beginner programmers so I could recommend them to my friends, but most books unfortunately fail on this point.

A notable exception I found is "Learn You a Haskell for Great Good!". It is as good for beginning coders as it is for early (or advanced) ones.

The author made the effort to describe some relatively basic things, and it was simple enough (okay, with a few calls to me here and there) for an Art major friend of mine to start with programming, and with Haskell. I can't recommend this book enough.

13
merrickread 7 hours ago 0 replies      
Last fall I went through a coding bootcamp in Toronto. It was 9 weeks of hard work sprinkled with lots of frustration and lots of feel-good successes. A main takeaway I had was that everyone comes in with a different background and everyone has a unique approach to learning.

The problem expressed in this article is a fundamental bottleneck of education. The communication between teacher and student is often misinterpreted at both ends and the subject matter is never perfectly conveyed or received.

I feel what's really lacking in the learn-to-code community is teaching people how to actually learn. Lay down a positive attitude towards failure and a framework for problem solving first; the content and understanding of a language will come after.

14
pbreit 7 hours ago 1 reply      
No. Just change the name of the book to "Learn Programming The Hard Way (Python Edition)". By putting the language in the title, it sounds like it is for an experienced programmer learning a new language, not for learning how to program.
15
orbitingpluto 3 hours ago 1 reply      
The article reminds me of how math textbooks/topics are labelled:

  Elementary Differential Equations (third year math)
  Elementary Symbolic Dynamics (grad-level)

16
VeejayRampay 12 hours ago 2 replies      
Programming is a frustrating job: you're pretty much doomed to be a beginner forever. It's part of what makes it exciting day in and day out, but it can also be overwhelming.
17
MarcScott 5 hours ago 2 replies      
"My favorite is how they think you should teach programming without teaching coding, as if thats how they learned it."

I often wonder about this. In the UK, with the drive to get every child 'coding', there are a large number of teachers that constantly talk about how the main skill that we should be teaching is 'Computational Thinking'.

I wax and wane back and forth over this topic, in a very chicken and egg way. However, I usually end up coming to the conclusion that learning computational thinking is great, but you need to know how to code (i.e. learn the basic syntax of a language) before you can possibly learn how to think computationally.

I would be very interested to hear actual developers' opinions on the topic.

18
NTDF9 7 hours ago 0 replies      
Part of the problem is that there are too many things each language can now do. Every single language wants feature parity with every other language. Every single language wants to do everything.

This means an expert in one language is going to be a "Beginner" instead of "Early" in some ways... but "Early" instead of "Beginner" in other ways.

Anecdotally, as a software engineer working with C++, I had to spend a whole month trying to understand the event-driven programming of other languages. I didn't really need tutorials on loops and recursion, but I sure as hell needed to understand how a typical program in that language works.

19
r0mbas1c 12 hours ago 2 replies      
I have been thinking this for years.... though I would consider myself an "early coder" according to the article.

This stuck out to me as being just the beginnings of the quintessential issue:

  A beginner's hurdle is training their brain to grasp the concrete problem of using syntax to create computation and then understanding that the syntax is just a proxy for how computation works. The early coder is past this, but now has to work up the abstraction stack to convert ideas and fuzzy descriptions into concrete solutions. It's this traversing of abstraction and concrete implementation that I believe takes someone past the early stage and into the junior programmer world.
But why stop at just "beginner", "early", and "advanced"? All of the books I have on programming are either truly "beginner" or blankly labeled as a programming guide, when in actuality they are quite "advanced"... nothing in between.

If, as the article states, 4 is the magic number of languages to learn up front, perhaps there should be a 4th level of programming guides....one for the journeyman who knows the syntax, can articulate the complex algorithmic issues that need to be addressed, but isn't quite at that "mastery" or "advanced" level.

20
cafard 9 hours ago 0 replies      
A useful distinction.
21
natural219 9 hours ago 0 replies      
This is a fantastic article and is another great example to pile on as to why Zed Shaw is the king of programming teaching.

One area I struggle with in tutoring is how to inspire/invoke/detect disciplined motivation. What I mean is, whenever I sit down to show someone something, I'm constantly questioning myself: "wait, do they actually want to learn this level of detail, or am I just giving too much information that's going in one ear and out the other?" If someone is definitely motivated to learn, that's great (and really inspiring for me as a teacher to do better at explaining things precisely).

If this nomenclature were more understood, I would like to say something like "Sorry, what you're trying to do is more of an early/junior task, and right now you need to stick with the Beginning basics". I just don't know how to phrase that without sounding condescending.

22
OAR 6 hours ago 0 replies      
I tried to contact Zed about a month back, to ask him this question.

I tried through his blog comments, and at the help email he has for his HTLXTHW courses, but never got a response.

I dunno if he just never noticed it, or if he's actually ignoring me for some reason, but having already typed out this question with all the necessary context, I figure I may as well post it in a public place where it's relevant, so here:

Hi Zed,

So, I found [this comment of yours on HN](https://news.ycombinator.com/item?id=1484030) by googling:

> site:https://news.ycombinator.com zedshaw engelmann

and was pleasantly surprised to find you explicitly mentioning Siegfried Engelmann and Direct Instruction.

Here's the story:

I learned about your "Learn X the Hard Way" series through a friend who had learned Python from your course.

He told me he heard you knew about Zig and DI.

I immediately said something like:

> Nah, pretty much nobody has heard about DI, much less properly appreciates it.

> Probably Zed just meant lowercase "direct instruction" in the literal, non-technical sense of "instruction that is somehow relatively "direct"".

> He's probably never heard of uppercase "Direct Instruction" in the technical sense of "working by Engelmann's Theory of Instruction".

But then I googled, and yeah, aforementioned pleasant surprise.

(I am just not going to say anything, outside of these brackets, about "blasdel" there.

If medicine was like education, the entire field would be dominated by the anti-vaxxers.

Hey, blasdel! supporting "Constructivism" is morally at least as bad as supporting anti-vaccination!

Bah, whatever. Okay, got that out of my system. Anyway. xD )

So now I'm really curious:

You said you "learned quite a bit about how to teach effectively from [Zig and Wes]".

But how did you learn from them?

You haven't slogged your way through the "Theory of Instruction: Principles and Applications" text itself, have you?

I have, and wow was that a dense read... Which is frustrating, because as you're reading, you can see, abstractly, how they could've meta-applied the principles they're laying out to teaching the principles themselves --[the open module on Engelmann's work at AthabascaU](http://psych.athabascau.ca/html/387/OpenModules/Engelmann/) includes a small proof-of-concept of that, after all-- but apparently they just didn't feel it was worth the extra work, I guess...?

(Zig did [say that the theory is important for "legitimacy"](http://zigsite.com/video/theory_of_direct_instruction_2009.h...) --ie, having a response in the academic sphere to the damn "Constructivists" with their ridiculous conclusion-jumping-Piaget stuff and so on-- and that's the only practical motivation I've ever heard him express for why they wrote that tome in the first place.)

Have you read any of the stuff he's written for a "popular" audience, like these?:

- [Could John Stuart Mill Have Saved Our Schools?](http://www.amazon.com/Could-John-Stuart-Saved-Schools-ebook/...)

- [Teaching Needy Kids in Our Backward System](http://www.amazon.com/Teaching-Needy-Kids-Backward-System/dp...)

- [War Against the Schools' Academic Child Abuse](http://www.amazon.com/War-Against-Schools-Academic-Child/dp/...)

(god has Zig got a way with picking titles...)

But basically what I really want to ask you, which all that was just to establish context for, is just:

In developing your Python course, how did you use your knowledge of DI?

23
alexashka 10 hours ago 0 replies      
Someone's having a bad day :)

If, in the world of programming, the biggest issue you're running up against is 'this is too basic', then great :)

If it's too basic, go read something else, no problem. If you're going to get anywhere in this world, you'll have to know how to research. Skimming and figuring out whether something is useful is a valuable skill, now more than ever. So whoever complains about a well-written book not suiting their fancy: it is their problem, not yours.

24
rilita 12 hours ago 3 replies      
tldr:

- Books written for "beginners" target people who already know how to code

- Author's book targets people before that

- Most programmers are bad at teaching people how to code

- Recommends some arbitrary phraseology to differentiate levels of ability

- Until someone learns the basics of 4 languages they don't really know how to code

- Demands people only use the term "beginner" for people who can't code, and "early" for those who can.

This is great and all, but it comes off mostly like a whiny complaint about how most development books are aimed at a group of people who already have a basic knowledge of coding.

This has already been addressed by the so-called "dummy" series of books. They were aimed directly at the audience the author is saying is being left behind.

I'm not sure I am seeing a real issue here. Go to the bookstore, browse through the books, pick the one you can comprehend and that seems to be aimed at your level. Done.

KeePass questionable security
356 points by sdrapkin  1 day ago   208 comments top 34
1
tptacek 1 day ago 1 reply      
I don't know that an HN thread is the best venue to discuss crypto design flaws (you might be better off writing a POC of some kind and then publishing that), but yes, it is a little disquieting to see a sensitive application using AES without an authenticator.

To the many readers of this thread who believe they don't care about the integrity of their password vault, just its confidentiality:

The problem is you can't necessarily have confidentiality without integrity.

Sound cryptosystems that provide integrity checking rule out chosen ciphertext attacks against the cipher: in order to submit a ciphertext to such a system, you have to get past a cryptographically secure integrity check.

Without that check, attackers can feed a victim systematically corrupted ciphertexts, which the victim will dutifully decrypt, and observe the behavior of the victim in handling them. This is the basis for a whole family of "error oracle" side channel attacks.

You generally don't want to trust the confidentiality of a cryptosystem that doesn't check ciphertext integrity and rule out manipulated ciphertexts.

As the poster points out: this might matter a lot less for a system that runs purely offline. Or it might not. I lean towards "not a super plausible attack vector". But who knows? Why be OK with bad crypto?
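The construction being described is usually implemented as encrypt-then-MAC: authenticate the ciphertext, and refuse to decrypt anything that fails the check. A stdlib-only sketch of the integrity half (the cipher itself is elided; assume `ciphertext` came out of AES or anything else, and that in a real design the MAC key is derived alongside the encryption key):

```python
import hashlib
import hmac
import os

MAC_KEY = os.urandom(32)  # illustrative; a real system derives this from the master key

def seal(ciphertext: bytes) -> bytes:
    """Append an HMAC-SHA256 tag computed over the ciphertext (encrypt-then-MAC)."""
    tag = hmac.new(MAC_KEY, ciphertext, hashlib.sha256).digest()
    return ciphertext + tag

def open_checked(blob: bytes) -> bytes:
    """Verify the tag before anything downstream ever touches the ciphertext."""
    ciphertext, tag = blob[:-32], blob[-32:]
    expected = hmac.new(MAC_KEY, ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("tampered ciphertext rejected")  # decryption never runs
    return ciphertext
```

Because every manipulated blob is rejected up front with identical behavior, the attacker never gets to observe the victim decrypting corrupted ciphertexts, which is what closes off the error-oracle family of attacks.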

2
xenophonf 1 day ago 5 replies      
"On The Security of Password Manager Database Formats" (https://www.cs.ox.ac.uk/files/6487/pwvault.pdf) was a good review of KeePass, Password Safe, and others. As I understood it, only Password Safe provided both secrecy and data authenticity.
3
thomp 1 day ago 4 replies      
What about pass (http://www.passwordstore.org/)? No "funky file formats" -- just GPG and a convenient CLI.
4
FractalNerve 1 day ago 4 replies      
What about KeePassX? That's what I've been using for a long time now. It's not written in C#, but C++

EDIT:source: https://github.com/keepassx/keepassx

5
orahlu 1 day ago 0 replies      
There has been a security audit, ordered by ANSSI (the French government IT security agency). This audit resulted in a "CSPN" certificate, which basically means that 35 days were spent by a competent auditor (Thales) and no important vulnerabilities were found in KeePass 2.0 Portable.

Report: http://www.ssi.gouv.fr/uploads/IMG/cspn/anssi-cspn_2010-07fr...

6
sdrapkin 1 day ago 1 reply      
To those who don't see a problem with leaking timing data:

KeePass goes to great lengths to do in-memory encryption of data. I'm not saying these attempts are properly done, but there is certainly no lack of trying.

The only reason to even bother is the assumption that this memory can be accessed by an attacker. So either you subscribe to that attack vector, and thus must also accept the necessity of avoiding timing attacks, or you reject this threat vector and must question why KeePass engages in all kinds of memory-obfuscation security circus/theater.

7
negus 1 day ago 5 replies      
Ok, your password database was affected by malicious modification. So what? How can it break the confidentiality of your data?

Update: By the way, what's wrong with the byte-array compare code snippet?
8
TimWolla 1 day ago 0 replies      
Apparently someone reported this thread to the author. You might want to follow the SourceForge issue: http://sourceforge.net/p/keepass/discussion/329220/thread/2e...
9
sdrapkin 1 day ago 0 replies      
1. It would be nice if someone like CodesInChaos (ie. someone with both crypto and .NET expertise) were to casually audit the KeePass 2.x codebase and do a write-up.

2. It would be nice to create a kdbx 3.0 (i.e. next-gen) storage format, which does proper AEAD.

10
deltaecho1338 1 day ago 0 replies      
Thanks for your remarks on KeePass; I have at times been a heavy user. I've often wondered about its security (especially the security of its ports) but I don't have the expertise to evaluate it myself. I'm not aware of any audits or systematic analyses as it hasn't received the attention that mobile password managers have.

The truly paranoid keep their KeePass database in an encrypted volume used solely for that purpose.

11
vixsomnis 1 day ago 1 reply      
For anyone who has wanted to switch to KeePassX (to avoid mono dependencies, for instance), but needed the integration with keepasshttp, this project is active: https://github.com/Ivan0xFF/keepassx

I haven't switched to Ivan0xFF's port yet (I've been using the auto-type based on window title). I may not actually switch, as the Pass project some others have posted here looks very good as a cross-platform solution (e.g., there is an Android app and a Firefox plugin) and there are scripts for converting existing databases to the new keystore.

12
yc_Paul 8 hours ago 0 replies      
KeePass versions 1.24 & 2.20 and later (2012) use header authentication to prevent data corruption attacks. http://keepass.info/help/kb/sec_issues.html

cheers, Paul

13
15
Globz 1 day ago 2 replies      
I have been a KeePass user for many years and I always used this in conjunction with a TrueCrypt container meaning that I keep my kdbx file inside the container.

Yes, TrueCrypt isn't "safe", but at this point it would take a highly motivated attacker to steal my "important" passwords.

Sadly I am not aware of any audits related to KeePass but I would be happy to read one!

16
indutny 1 day ago 0 replies      
Hello!

Nothing about KeePass, but recently I was wondering if I could write software for deriving keys from a master secret and a seed (i.e. domain name or whatever).

Here is what I came up with:

* https://github.com/indutny/derivepass

* https://github.com/indutny/scrypt

It uses a dumb scrypt implementation (see the second link), and should be pretty easy to verify by cross-reading the source and the spec. Also, there is a boilerplate iOS application which uses `derivepass`'s derivation function and `scrypt` too.

Please let me know if you have any questions!

17
INTPenis 20 hours ago 0 replies      
Keepass was never meant for corporate use, that much I am positive about. So personally I use gpg through the pass(1) script.

However, for corporate use I must recommend siptrack, a Django-based webapp with an XML-RPC API, which aims to replace not just keepass but racktables and keepass together.

So it's much more than password management, but it uses pycrypto and doesn't try to re-invent encryption. Future plans have it moving to pynacl too.

18
Aloha 1 day ago 0 replies      
This is why, for work at least, I've kept to a spreadsheet on my workstation. The workstation uses full disk encryption, so I feel this is reasonably secure. For home, I'm nearly 100% Apple, so I'm using Keychain.
19
stephengillie 1 day ago 0 replies      
I suppose that using a random Android KeePass app is just asking for trouble.
20
Ciantic 19 hours ago 1 reply      
What are the alternatives, really? I'd love to get rid of KeePass; its GUI is awful, and it really doesn't support OS X (unless some really technical person installs it). I'm unwilling to use commercial, closed, cloud-based password databases.
21
clark800 1 day ago 1 reply      
Open source, open standard, generative password manager with "two-factor" security using both a passphrase and a private key file: http://rampantlogic.com/entropass/

It only uses the industry standard pbkdf2-sha512 hashing algorithm, with no encrypted database, so it is much simpler and isn't susceptible to these kinds of issues.

22
g5411704 16 hours ago 0 replies      
If you are such a great expert .NET programmer who sees others' errors, stop complaining and help him. You have his git and you can send a pull request.
23
Grazester 1 day ago 1 reply      
Honest question: what's wrong with the function? I have a similar function to, ironically enough, compare HMACs in an encryption program I wrote in Java and C#. When I released the source code for the Java version I replaced my function with Java's own Arrays.equals, though.
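For anyone wondering what such a comparison should look like: the standard fix is one that touches every byte and never returns early (e.g. Java's MessageDigest.isEqual, which is constant-time). Here is a toy bash illustration of the shape of the idea — shell is not genuinely constant-time, this just shows accumulate-instead-of-early-exit:

```shell
# compare two strings by ORing together the XOR of every byte pair,
# so the loop runs the same number of iterations wherever the first
# mismatch is, instead of bailing out early
ct_equal() {
    local a=$1 b=$2 diff=0 i ca cb
    [ ${#a} -eq ${#b} ] || return 1   # MAC lengths are public anyway
    for (( i = 0; i < ${#a}; i++ )); do
        printf -v ca '%d' "'${a:i:1}"
        printf -v cb '%d' "'${b:i:1}"
        diff=$(( diff | (ca ^ cb) ))
    done
    (( diff == 0 ))
}

ct_equal "deadbeef" "deadbeef" && echo "match"
ct_equal "deadbeef" "deadbeee" || echo "no match"
```

Real code should of course use a vetted library primitive rather than rolling this by hand.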
24
kevinSuttle 20 hours ago 0 replies      
25
Itsameee 1 day ago 0 replies      
From my point of view, an authenticator (e.g. HMAC) is only necessary in the case of protocol-based transmission. And no, I don't mean putting the file (which is completely read first) on Dropbox. Authentication between main memory and the CPU is obviously not required.
26
RRRA 1 day ago 0 replies      
What about KeePass 1.x?

And considering you can freely copy the database, and someone corrupting your copy would "only" result in you not being able to log in, is that really a threat model more important than just encrypting everything so it can't be read?

27
littlestitious 1 day ago 2 replies      
what is the problem with the singleton?
28
Spooky23 1 day ago 0 replies      
Does this affect the older version of the format and KeePassX?
29
AlfaWolph 1 day ago 0 replies      
Any opinion on Mitro?

https://www.mitro.co/

30
rhaps0dy 1 day ago 4 replies      
And I thought I was safe using Keepass on Dropbox.

Any recommendations for password managing?

31
voltagex_ 20 hours ago 0 replies      
Wargh, I use KeePass.
32
snowwrestler 1 day ago 0 replies      
Well it sounds like you just did a security audit of KeePass, albeit an incomplete and cursory one. But as a small open-source project, that's probably better than they have now.

Have you considered submitting this analysis to the KeePass team? Or even better, analysis plus suggested code to fix the problems? As a user of KeePass this would be in your interest.

(And as a user of KeePass myself, it is in my interest to encourage experts to help that project out.)

33
wumbernang 1 day ago 3 replies      
It's better than nothing and likely better than something without source.

Using the CLR, which has no guaranteed memory zeroing, immutable strings, GC, and an exposed profiler and debugging API, is a larger concern IMHO.

34
tiatia 22 hours ago 0 replies      
It is so annoying. Authentication must be something you know (a passcode), something you are (an iris), or something you have (a key).

In the case of passwords, it is something you know, with limitations varying by site (at least X characters, lowercase and uppercase, one number but not at the beginning, no number at the end, at least one special character...). Some banking sites even allow only a scarily limited number of characters (I think Schwab allows only 7 and no special characters).

Regarding a PW manager: I found them all annoying, and one corrupted the PW database several times, resulting in the loss of the passwords.

My solution: a plain text file with all my passwords. (I use Linux with an encrypted partition.) If this is not secure enough for you, encrypt it with GPG.

ECMAScript 2015 Approved ecma-international.org
353 points by espadrine  9 hours ago   75 comments top 14
1
fintler 7 hours ago 9 replies      
If you've been focusing on another language for a few years, you might not recognize JavaScript anymore. It's pretty awesome now.

Here's an example of what it looks like: http://pastebin.com/raw.php?i=yEB4mrty

As someone who usually works with C, Scala, and Java -- I'm currently working on a small app built on es6/7, babel, npm, jspm, system.js, aurelia, gulp, etc. It's been a great experience so far.

2
jtempleton 8 hours ago 2 replies      
FYI, ECMAScript 2015 is also known as ES6.
3
crncosta 8 hours ago 0 replies      
They are providing an official HTML version, alongside the PDF version.

http://www.ecma-international.org/ecma-262/6.0/index.html

4
lewisl9029 4 hours ago 0 replies      
For anyone interested in using ES2015/ES6 in production, I'd highly recommend checking out jspm and SystemJS.

It handles all the transpilation work for you (at runtime for development, or during a manual build/bundling for production) using either Babel, Traceur or Typescript, and allows you to seamlessly use ES6 everywhere in your code and even load third party code on Github and NPM as ES6 modules.

https://github.com/jspm/jspm-cli

https://github.com/systemjs/systemjs

EDIT: Some more info copied from another post:

SystemJS (jspm's module loader) has the following main advantages compared to competing module loaders:

- Able to load any type of module as any other type of module (global, CommonJS, AMD, ES6)

- Can handle transpilation and module loading at runtime without requiring a manual build step

However, jspm itself is primarily a package manager. Its main advantages over existing package management solutions include:

- Tight integration with the SystemJS module loader for ES6 usage

- Maintains a flat dependency hierarchy with deduplication

- Ability to override package.json configuration for any dependency

- Allows loading of packages from just about any source (local git repos, Github, NPM) as any module format

5
rememberlenny 8 hours ago 3 replies      
Can someone explain what this means for the browser ecosystem? What are the next steps to integration?
6
brndn 8 hours ago 1 reply      
What does it mean for a spec to be approved? Is it like a peer-review?
7
cel1ne 4 hours ago 0 replies      
This is a good overview in my opinion: https://babeljs.io/docs/learn-es2015/
8
markthethomas 33 minutes ago 0 replies      
but...whatever happened to es6? ;)
9
MagicWishMonkey 2 hours ago 1 reply      
How long before we see widespread browser support?
10
wallzz 6 hours ago 1 reply      
Can someone post a résumé (summary)?
11
Stephn_R 2 hours ago 0 replies      
Today marks an important day for us all :)
12
brianzelip 6 hours ago 1 reply      
oh brother does their (`<table>` based) web layout need an update!
13
muraiki 7 hours ago 3 replies      
Sorry, I accidentally downvoted you with a misclick :(
14
sirsuki 8 hours ago 2 replies      
It's about fricken time! Talk about procrastination!

Now I have to wait for browsers to get off their little snowflake asses and update. Oh wait, then there are all those paranoids who use WinXP with IE8. Damn it, I'll be dead by the time this stuff is available universally.

Microscopic footage of a needle moving across the grooves of a record dangerousminds.net
344 points by batbomb  1 day ago   57 comments top 16
1
bkraz 19 hours ago 3 replies      
I am stoked to see my work on the front page of HN! Let me know if you have any questions. I've lurked on this forum for a long time, but rarely post.
2
nate_meurer 1 day ago 5 replies      
Wow! I never knew the needle moved side-to-side! I always assumed phonograph needles moved up and down.

Edit: Ben explains this in the video; there are actually two axes of movement, sort of diagonal to the plane of the disk, each of which encodes one channel of a two-channel stereo recording. I'm sure many LP fans already know this, but it's a revelation to me.

BTW, Ben Krasnow is a heck of a guy. A true polymath and a generous teacher.

3
xigency 1 day ago 2 replies      
The only thing is, the needle isn't moving here. This is an animation of what a needle would look like moving across a record, but taken in stop-motion style. I wonder if the use of an electron microscope is really needed? The grooves in a record are hardly small enough to escape visible light...
4
jokr004 1 day ago 0 replies      
I highly recommend everyone check out other videos on Ben Krasnow's youtube channel [0]. I've been following him for about a year now, really awesome stuff!

[0] https://www.youtube.com/user/bkraz333

5
daniel-levin 19 hours ago 0 replies      
The YouTube channel [1] where this video comes from is a treasure trove for the intellectually curious, and it's one of my favourite things on the internet. The guy behind it, Ben Krasnow, is an engineer at Google. From explaining and demonstrating (reverse) spherification to encoding information in fucking fire and picking it up with his oscilloscope, this channel will interest and delight most folks who enjoy HN for hours.

[1] https://www.youtube.com/channel/UCivA7_KLKWo43tFcCkFvydw

6
vinkelhake 1 day ago 1 reply      
This is just blogspam. Link to directly to the video instead. There are a lot of videos worth watching on Ben's channel.
7
amelius 1 day ago 1 reply      
> You would think that if you have an electron microscope and a record player, you're most of the way there to being able to record close-up footage of a needle traversing the grooves of a long-player record.

Actually if you would have only an electron microscope, you could play the track without even needing the record player.

8
Panoramix 1 day ago 2 replies      
This guy is a real jedi. Did he build his own Ag evaporator?
9
ianphughes 1 day ago 1 reply      
Am I the only one who could listen to him narrate just about anything for hours at length?
10
afandian 17 hours ago 1 reply      
Why was an electron microscope necessary? Surely at this scale a conventional light microscope would have done the job, wouldn't have needed all these workarounds, and could have recorded live footage of a needle actually playing a record?
11
kitd 17 hours ago 0 replies      
A bit OT, but this reminds me of an inspirational mathematics teacher I had, who loved to present the subject as a series of applied problem-solving exercises, rather than the usual learn-by-rote.

One of his problems was: if a 33 1/3 rpm record is 12 inches in diameter and plays for 25 minutes, how wide is the groove?

I had a clear visualisation in my head of the problem, including the groove cut into the vinyl. Watching this is like listening to my teacher speaking to me again.
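The teacher's puzzle works out nicely once you notice the groove is one continuous spiral: the pitch is just the radial span of the grooved band divided by the number of revolutions. The 3.5-inch span below is my assumption about a typical 12-inch LP, not a figure from the original problem:

```shell
awk 'BEGIN {
    revs = 25 * 100 / 3        # 25 minutes at 33 1/3 rpm
    span = 3.5                 # assumed radial width of the grooved band, inches
    printf "%.0f revolutions\n", revs
    printf "%.4f inches per turn\n", span / revs
}'
```

That gives on the order of 833 turns and a pitch of a few thousandths of an inch, which sounds about right for microgroove vinyl.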

12
JabavuAdams 1 day ago 0 replies      
Funny, I was just watching this last night. I'm in awe of the Applied Science guy, and am really grateful for the information he's shared.

Having said that, I was shocked to see his thermite BBQ video. I don't know whether the people in that video realize how close they came to being maimed.

13
InclinedPlane 1 day ago 0 replies      
The "root post" is just the youtube video itself.
14
udev 1 day ago 1 reply      
The guy's diction is flawless.
15
agumonkey 1 day ago 0 replies      
First time I get to see one of this legendary video vinyl ... Thanks.
16
bluedino 1 day ago 0 replies      
Next up - laser hitting a compact disc.
Favicon bug github.com
308 points by inglor  2 days ago   93 comments top 22
1
inglor 2 days ago 2 replies      
I'm the author of the GH repo and I just want to give full credit to this guy for figuring out that Chrome does this: https://twitter.com/a_de_pasquale

All I did was think "if it works for 65mb why not more?" and write a quick proof of concept. It gets to 10 GB on my 4 GB laptop and then crashes (MBA, OS X 10.10).

2
tomjepp 2 days ago 2 replies      
Even better - this works with compressed favicons.

Take http://pastie.org/10242118 and create yourself a favicon file:

dd if=/dev/zero of=favicon.png bs=1M count=4096

gzip -9 favicon.png

you can now crash lots of browsers with minimal bandwidth usage :)

demo: http://dev.tomjepp.uk/

3
yincrash 2 days ago 3 replies      
What prevents someone from doing this with a 1 px junk image that is many GB large?

edit: to clarify, I'm talking about the page itself rather than the favicon.

4
protomyth 2 days ago 4 replies      
So a screwed-up favicon can eat a wireless user's whole bandwidth for the month without them knowing it. Great...
5
tiglionabbit 2 days ago 1 reply      
This reminds me of a bug I found in an internal tool. We only had about 200 users, but somehow Apache would run out of workers because they were all busy serving some kind of request. Turns out our Apache configuration was wrong specifically in the case of serving the favicon. If you requested the favicon, it would get in a redirect loop, increasing the length of the url a little more each time until it hit the maximum url length and gave up. A new job like this would get kicked off every time any user visited a page, and it'd take several minutes before it finally gave up. So every user was unwittingly opening tons of long-running requests, with no indication that they were doing anything.
6
drivingmenuts 2 days ago 2 replies      
I pray once or twice a month that someone somewhere will just end the favicon.

Or at least make its absence a non-logworthy event.

Or replace it with something sane, like a .png.

7
Someone1234 2 days ago 2 replies      
Just doing some quick back-of-napkin maths: even assuming Apple's 128x128 app icons and 48-bit color (32 is more typical), we're still talking about a sub-100 KB max for a favicon.

So if Chrome capped it even at 20 MB, that would still be orders of magnitude more than any favicon should need for at least the next five or so years (even assuming 256x256 became common for app icons).
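The arithmetic above holds up — 48 bits is 6 bytes per pixel, before any compression:

```shell
echo $(( 128 * 128 * 48 / 8 ))          # 98304 bytes uncompressed
echo $(( 128 * 128 * 48 / 8 / 1024 ))   # 96 KiB -- comfortably under 100
```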

8
fao_ 2 days ago 0 replies      
In reply to both this and the original discovery[1], I wonder if this is a valid (albeit morally dubious) method of backing up (or at least ensuring the survival of) information that is at risk from government/<insert 'bad person' here> censorship/removal/other.

NB: Not that I condone this.

[1]: https://twitter.com/a_de_pasquale/status/608997818913665024

9
fugyk 2 days ago 2 replies      
Why do browser people always forget about favicons? Most browsers save favicons even in private browsing, and now this.
10
TD-Linux 2 days ago 1 reply      
Did you file bugs for either of these two browsers?
11
jakeogh 1 day ago 0 replies      
I like Surf's approach:

  if (g_str_has_suffix(uri, "/favicon.ico"))
      webkit_network_request_set_uri(req, "about:blank");
http://git.suckless.org/surf/tree/surf.c#n237

12
underwater 2 days ago 0 replies      
You should try and use ServiceWorkers to generate the file -- then you can have the client DOS itself.
13
gtk40 2 days ago 1 reply      
How do other browsers behave?
14
tomswartz07 2 days ago 2 replies      
This seems like an active exploit, as pointed out in the OP's inspiration Twitter post.

`tar` up an entire WordPress install, save it as `favicon.ico` and then easily pull the files from the server.

This would be a good idea to get fixed very soon, I would assume.

EDIT: I stand corrected in terms of exploit-ability, but I still assert that crashing a browser and chewing tons of bandwidth are pretty big issues.

15
ubanholzer 2 days ago 0 replies      
It's also working on iOS 8.1.2. As soon as you tap the share icon, the download starts: https://github.com/benjamingr/favicon-bug/pull/2
16
rjcz 2 days ago 2 replies      
I noticed it several months ago by going through my server logs - I hadn't included 'favicon.ico' in the HTML and yet the browser - Chrom(e|ium) - tried downloading it every time.

Thanks for reminding me to report it ;^)

17
empyrical 2 days ago 0 replies      
Should this have been done as a private disclosure to browser vendors first before going public, or is that more for security problems?
18
hartator 2 days ago 1 reply      
Just tested on Safari 8.0.6; it doesn't seem to download the favicon at all. Weird - even after closing the tab, Chrome still keeps downloading the favicon.
19
firlefans 2 days ago 0 replies      
Way to ruin the fun for the rest of us ;)
20
voltagex_ 1 day ago 0 replies      
Anyone tested Firefox/Chrome/Webview for Android?
21
vezzy-fnord 2 days ago 1 reply      
Interesting to see server-side JavaScript being used as an exploit language.
22
nodesocket 2 days ago 0 replies      
Seems like an easy and logical fix for browsers.

 function fetchFavIcon() {
     if (favicon.fileSize > 1MB) {
         return false;
     }
     ...
 }

Ask HN: I have ssh, they have ssh, how can we chat?
325 points by biturd  3 days ago   121 comments top 54
1
JamesMcMinn 3 days ago 8 replies      
Just use tail -f and a bash function.

Put this in your .bashrc file:

 function talk { echo "$USER: $@" >> talkfile; }
then run:

 tail -f talkfile &
The & puts tail into the background so it continues running, and "talkfile" needs to be a file that both of you have write access to.

You can both communicate simply by using the talk function like any other bash command:

 talk whatever you want and it'll be written to talkfile
This works on Linux, not sure about Mac.

It's nice because it records what you say, so there's no need for the other person to be logged in to get your message, and you get a printout of the last few lines of conversation when you "login" (run the tail -f command). There's nothing extra to install either.

(edit, apparently say is already installed on OS X, so I renamed the function "talk")

2
kragen 3 days ago 1 reply      
I wrote this for use on a client project for a client I'm pretty sure won't consider this a breach of anything.

 #!/bin/bash # Simple chat system for when Skype is fucked. nick=${1?Usage: $0 nickname (e.g. $0 biturd)} chan=/tmp/yapchan echo "^D to exit chat." >&2 tail -F "$chan" & tailpid=$! trap 'kill "$tailpid"' 0 while IFS='' read -er line; do echo "<$nick> $line"; done >> "$chan"
If you're running this with multiple accounts, you may need to chmod a+w /tmp/yapchan, and if you're using MacOS on the server, you may need to use a different filename since MacOS has a per-user private /tmp.

3
rickr 3 days ago 2 replies      
talk has been around for like...30 years now:

https://en.wikipedia.org/wiki/Talk_(software)

4
tunesmith 3 days ago 1 reply      
I actually really miss the user experience of 'talk' and 'ytalk'. Split-screen so people really could type at the same time, and touch-typists could type at the same time as reading what the other person was writing. Character-by-character, too, which helped improve the feeling of connection.
5
wyc 3 days ago 1 reply      
A lot of nice solutions have already been posted for talking. If you want to show some code, a shell session, your dwarf fortress, etc., you can look at screen/tmux with a shared guest account:

 # you type: $ screen -S session1 vim file.txt # and they can type (as the same user) $ screen -x session1 # or with tmux, you type: $ tmux new -s session1 vim file.txt # they type (as the same user) $ tmux a -t session1
You can try it out with two terminals on your own.

6
alcari 3 days ago 0 replies      
I seem to recall this being on HN a few months ago:https://github.com/shazow/ssh-chat
7
2ton_jeff 2 days ago 3 replies      
While linux x86_64 only, this is precisely why I built sshtalk -- https://2ton.com.au/sshtalk, or just ssh 2ton.com.au to see for yourself, I leave it open as a public/free service
8
Animats 2 days ago 2 replies      
Mandatory XKCD: https://xkcd.com/949
9
rbc 3 days ago 2 replies      
WRITE(1) is pretty much ubiquitous. Even Mac OS X has it. Very old school.
10
rdl 3 days ago 1 reply      
There should be a service which does throwaway accounts for ssh or ssh-via-web access with some extremely limited functionality, like talk, to keep the multiuser UNIX dream alive.

Maybe even spin up VMs on demand based on new hostname (if not seen before). First to claim = own. Some rate limiting function.

Shell accounts largely went away due to ease of use, but also local user exploits and abuse, but enh. If you virtualized the network (so you could reroute through a new IP on abuse, or let users own the IP) and restricted functionality it wouldn't be as bad.

No practical purpose, just fun.

11
giis 2 days ago 0 replies      
We do this all the time, at least for the past 5 years or so, while implementing our open-source project.

We both log in to the same server and do:

#create a screen

screen -S chat

#Both of us will join the screen:

screen -x chat

#Now we can see each other typing. To make typing easier, do:

write pts/<id> username

#Make sure there is another login and use that pts <id> above.

After we've typed our lines, I'll add 2 or 3 newlines to indicate I'm waiting for a response.

That's it! :) Simple chat over ssh.

12
stock_toaster 3 days ago 0 replies      
As pwg@ mentioned, talk is worth looking at. Check the man pages for `mesg` and `talk` and `write`.
13
fstutzman 5 hours ago 0 replies      
Once you get ytalk running, the next step is to install colossal cave adventure.
14
ised 3 days ago 0 replies      
When you say "they to me" it makes it sound like you want a peer-to-peer connection. Unless your internet service allows unsolicited incoming connections, then you will need to do NAT piercing. And if you are behind the same NAT (e.g., same ISP) then you will have to forward traffic through some third host who is not behind the NAT.

But when you mention "wall(1)" it makes it sound like you want to connect to some internet accessible UNIX host via ssh and chat to others who are also connected to that host.

Option 2 would be less complex.

Depending on what software is installed on the host you connect to, there are many possibilities. Back in the old days, talk(1) could be used for split screen chats. Today, tmux(1) would be my choice. Anything that uses UNIX domain sockets could work.

Proof of concept:

Does Darwin have logger(1), syslogd(8) and /etc/syslog.conf(5)?

Decide where to log the messages, e.g., /var/log/messages

Edit /etc/syslog.conf

Start syslogd

logger "your message"

less /var/log/messages

less -F /var/log/messages

tail -f /var/log/messages

Messages have date, time, priority (if any) and hostname.

You said "something basic"; this is about as basic as it gets.

15
atxhx 3 days ago 1 reply      
I remember netcat being installed on OS X by default, you could ssh in and run it or pipe the port through ssh.
16
NeutronBoy 3 days ago 0 replies      
Not sure if it's installed by default on OSX, but you could use screen/byobu/tmux to share a terminal session and type to each other.
17
fsniper 2 days ago 1 reply      
write anybody?? I must be too old for this shit then.

just write <systemusername> <tty|ptsname>, then enter your message and quit with ctrl-d

from man page:

 DESCRIPTION
 
 The write utility allows you to communicate with other users, by copying lines from your terminal to theirs.
 
 When you run the write command, the user you are writing to gets a message of the form:
 
     Message from yourname@yourhost on yourtty at hh:mm ...
 
 Any further lines you enter will be copied to the specified user's terminal. If the other user wants to reply, they must run write as well. When you are done, type an end-of-file or interrupt character. The other user will see the message EOF indicating that the conversation is over.
 
 You can prevent people (other than the super-user) from writing to you with the mesg(1) command.
 
 If the user you want to write to is logged in on more than one terminal, you can specify which terminal to write to by specifying the terminal name as the second operand to the write command. Alternatively, you can let write select one of the terminals - it will pick the one with the shortest idle time. This is so that if the user is logged in at work and also dialed up from home, the message will go to the right place.
 
 The traditional protocol for writing to someone is that the string -o, either at the end of a line or on a line by itself, means that it is the other person's turn to talk. The string oo means that the person believes the conversation to be over.

18
atsaloli 2 days ago 0 replies      
man write

http://ss64.com/bash/write.html

You would need to run two instances of write : you would write to your friend and your friend would write to you.

Just press Enter twice when you are done with your turn (I.e. "over" as in, transmission over)

19
danielhunt 2 days ago 0 replies      
http://www.redbrick.dcu.ie/~c-hey/

 1. Install that on a machine. 2. Both SSH to the that machine machine. 3. Type: `hey <username>`, press enter. 4. Enter your (optionally multiline) message to your friend. 5. CTRL+D (on windows, at least) to send the message. 6. ???? 7. Profit.

20
astazangasta 2 days ago 0 replies      
I highly recommend ytalk, available as a package in most repos, as the solution here. Not only does it support multi user chat across the network (you can send talk requests to user@host), it has a shell escape feature which means you can open a vim buffer inside your chat session for collaborative editing.
21
philprx 3 days ago 0 replies      
Use Paramiko (Python) to code a quick server based on demo_server.py. Share what is typed between clients using a fifo, shared file (bad!) or SQLite. See: https://github.com/paramiko/paramiko/blob/master/demos/demo_...
22
chrisper 3 days ago 0 replies      
You can also create a screen session. The other user logs into your ssh server and uses screen -x to attach to your terminal.
23
brajesh 3 days ago 0 replies      
What's wrong with using 'wall'?
24
andrewchambers 3 days ago 1 reply      
It could be done with a fifo; I believe the command "write" also does this.
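A quick sketch of the fifo approach (the path is arbitrary; both users need permission on it, and unlike the shared-file tricks elsewhere in this thread, a fifo keeps no history):

```shell
pipe=/tmp/chatpipe
[ -p "$pipe" ] || mkfifo "$pipe"   # chmod as needed for the other user

# one side reads...
cat "$pipe" &

# ...the other writes; opening a fifo for writing blocks until a reader shows up
echo "$USER: hello over the fifo" > "$pipe"
wait   # cat exits once the writer closes its end
```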
25
joshu 3 days ago 0 replies      
ssh to the same machine.

 $ mesg y
 $ talk <otherusername>
26
pwg 3 days ago 0 replies      
man talk
27
erikb 2 days ago 0 replies      
I'm a huge fan of not using the system chat tools for this but writing your own chat client. But instead of a simple text file I'd use a small database like sqlite, or a logger, because I want to make sure the chatters don't run into trouble fighting for write access to that file. Also, if you do this for a few weeks, the file might get so big that you'd want a database engine to parse it anyway.

PS: Huge kudos for the question, btw. This is the kind of stuff that really improves your ability to use your system well.
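A rough sketch of erikb's suggestion using the sqlite3 CLI (the schema and path here are made up for illustration, and the naive quoting below would choke on messages containing apostrophes):

```shell
db=/tmp/chat.db
sqlite3 "$db" 'CREATE TABLE IF NOT EXISTS msg (
    ts   DEFAULT CURRENT_TIMESTAMP,
    nick TEXT,
    line TEXT
);'

# sending: sqlite serializes concurrent writers, so nobody fights over the file
say() {
    sqlite3 "$db" "INSERT INTO msg (nick, line) VALUES ('$USER', '$*');"
}

# reading: dump the backlog (or poll this in a loop)
say "hello from sqlite"
sqlite3 -separator ' | ' "$db" 'SELECT ts, nick, line FROM msg ORDER BY ts;'
```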

28
cornellwright 3 days ago 0 replies      
An easy way is to just create a new screen (see the Unix command "screen") and then open a text editor in it. The second participant then joins the screen (via screen -x) and now you both can type into the same editor.
29
lisper 2 days ago 1 reply      
I'm working on a secure chat client that runs in a browser (so nothing to install). If you're interested in being a beta tester send me your email address and I'll send you an invite.
30
agartner 3 days ago 0 replies      
It might not be exactly what you're looking for but you might take a look at https://github.com/DSUOSS/unix-chat.
31
thirdreplicator 3 days ago 0 replies      
User A logs into user B's machine. User A or B types

screen -S chat

The other user types

screen -xr chat

32
nailer 2 days ago 0 replies      
Lots of custom solutions in this thread, but there are already built-in commands for talking installed by default on every Unix box.

Log into a box, use 'who' to see which terminal they're using, and use 'write' to send a message there.

Or be lazy and just use 'wall' (write all) like I do.

33
ninjakeyboard 2 days ago 0 replies      
Despite the fact that you both have SSH, you can still pick up the phone and call them. :)

+1 for write. http://www.computerhope.com/unix/write.htm
34
silverwind 2 days ago 2 replies      
netcat to the rescue!

You run:

 nc -l 5000
They run:

 ssh [yourhost] nc localhost 5000
The only issue I see is that you apparently can't get netcat to listen only on localhost, so in theory others could join in.

35
c22 2 days ago 0 replies      
36
userbinator 3 days ago 1 reply      
I'm not sure about OS X but most Linuxes have netcat installed.

There's also OpenSSL s_client/s_server for an encrypted connection, although you need to setup some certificates first.

37
enterx 2 days ago 0 replies      
check the following *nix utilities:

ssh one box to another, then:

who -uT // shows who is connected to a machine and whether they will receive messages sent with the write or wall command

write // sends a message to another user (tty). Don't do this carelessly, as it can confuse the other user by inserting the message in the middle of his current output.

wall // sends a message to all of the logged-in users

talk & talkd // client and server. (old school rulz!)

38
sturmeh 2 days ago 0 replies      
Run an irc bouncer like ZNC, they usually have a partyline plugin/functionality that lets you chat as if you were on an irc server, but locally.
39
madaxe_again 2 days ago 1 reply      
I'm surprised nobody has mentioned "wall". Not great for chatting but damnably useful for unmissable comms with others on a box.
40
girish_h 2 days ago 0 replies      
You could ssh into another machine, run a screen session, launch a shell and start chatting inside the shell
41
resca79 2 days ago 0 replies      
try `write`:

>usage: write user [tty]

42
gko 2 days ago 0 replies      
If you are using the same account: emacs --daemon (once), emacsclient -t (for everyone).
43
plg 2 days ago 0 replies      
unix command: kibitz

kibitz - allow two people to interact with one shell

http://www.skrenta.com/rt/man/kibitz.1.html

44
strathmeyer 3 days ago 1 reply      
Have we forgotten about Zephyr?
45
peterwwillis 2 days ago 0 replies      
You can't connect ssh clients like you would modems. If the machines are on a public network you ssh to another person's sshd, then use terminal chat programs. If you're both behind a NAT, one of you needs to port-forward to your host's sshd, or use STUN, TURN or ICE servers, or maybe just IPv6.

Once connected to someone's host, use the Talk program (https://en.wikipedia.org/wiki/Talk_%28software%29), or the Write program (https://en.wikipedia.org/wiki/Write_%28Unix%29), or use Netcat (http://hak5.org/episodes/haktip-82) to open a two-way dialogue between terminals. Netcat is the simplest of them all because it just opens a two-way tcp session, and technically only one of you needs netcat while the other just needs a telnet client or equivalent.

46
yueq 3 days ago 0 replies      
talk user@host
47
kpcyrd 2 days ago 0 replies      

 apt-get install nmap
 ncat --chat -lp 1234

48
alinspired 2 days ago 0 replies      
A shared screen session? screen -x with any editor or just 'cat' running.
49
mayli 2 days ago 0 replies      
You can use write or screen/tmux.
50
vectorEQ 2 days ago 0 replies      
tunnel netcat through SSH :D for fun, but probably not profit!
51
Shalle 2 days ago 0 replies      
open up a screen session and type whatever you want.
52
db48x 2 days ago 1 reply      
install talk on one of your machines.
53
roka88 2 days ago 0 replies      
uyu
54
kichuku 2 days ago 1 reply      
This does not directly answer your question, but there is a way using third-party software.

You can use the Telegram messenger ("https://telegram.org").

It works flawlessly from the cli.

Yes, you cannot use it if you don't want your chats to pass through a third-party server. But maybe you can try out the "secret chat" feature with its auto-destroy option.

The traditional options have already been listed by others here. I just wanted to tell something which is easy to setup and also reliable.

AT&T fined $100M after slowing down its unlimited data washingtonpost.com
305 points by nvr219  8 hours ago   135 comments top 32
1
mangeletti 7 hours ago 7 replies      
To put this into perspective:

$100MM is 0.0759878% of AT&T's 2014 gross revenue, so less than 1/10th of 1%.

That's like earning a $100,000/yr salary, and then paying a $75.99 fine. It's basically less than your average speeding ticket.
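A quick sanity check of the arithmetic (the revenue figure here is back-solved from the quoted percentage, so treat it as an approximation rather than AT&T's reported number):

```python
# Back out the revenue implied by the quoted 0.0759878%, then rescale the
# same ratio down to the $100,000/yr salary analogy.
fine = 100e6
pct = 0.0759878 / 100

implied_revenue = fine / pct
print(f"implied 2014 revenue: ${implied_revenue / 1e9:.1f}B")  # ~$131.6B

salary = 100_000
equivalent = salary * pct
print(f"equivalent fine on a $100k salary: ${equivalent:.2f}")  # $75.99
```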

2
istvan__ 8 hours ago 9 replies      
We should stop lawyers from re-defining words like unlimited. We should make sure that if somebody says unlimited in an advertisement or product description, it really means unlimited. I know, I am an idealist. :)
3
JoshGlazebrook 6 hours ago 1 reply      
I kind of saw this coming after the whole Verizon fiasco, when they tried to throttle their LTE network, the FCC and media made it a frenzy, and they backed down. Then again, the main wireless spectrum Verizon uses for the base layer of their LTE network has open access rules attached to it that pretty much forbid throttling any devices using the spectrum and force them to allow any device on their network that is capable of using it.

I'm glad I still have my Verizon unlimited data plan. I renewed my contract (the unlimited line is out of contract in August 2016) by using the transfer upgrade loophole last year. They are the only carrier that does not throttle their LTE network at all, and they also allow you to officially pay for unlimited tethering, something no other carrier has ever offered. On top of that, the open access rules attached to the C block of the 700MHz spectrum they use let me pop my SIM card into a dedicated LTE router, tablet, hotspot, etc., even devices that Verizon stores refuse to activate for you, like a T-Mobile-bought iPhone or any device that is not sold as "for Verizon". It's unlocked; you can pop your SIM card into it and it will just work.

4
bede 34 minutes ago 0 replies      
T-Mobile UK (now largely assimilated by the EE mothership) comprehensively denied the existence of an 18-hours-a-day 4mbps throttle placed on its unlimited plans [1] for several years before getting in trouble with the regulators. As far as I'm aware they weren't even punished, which is a shame given how blatantly deceptive their practices were.

This strikes me as a reasonable fine. Well done FCC.

[1] http://www.techradar.com/news/phone-and-communications/mobil...

5
lewisl9029 25 minutes ago 0 replies      
Any idea what the legal landscape for these kinds of issues is like in Canada?

Wind Mobile also advertises unlimited plans yet throttles starting at a mere 3GB...

https://www.windmobile.ca/plans-and-devices/plans

Granted, their true rates are still better than their big telecom counterparts, but I still find this distasteful as a marketing tactic.

6
baldfat 7 hours ago 0 replies      
The general public can be swayed into not knowing that Internet data is not a commodity. People treat data like a limited resource that needs to be grown and that the ISP must harvest, and they think it is unfair when you use more of it.

I try to explain that data is more like a pipe, and at certain times not all the data can get through at once. So if this were really about managing their network, they would throttle only during "peak" times and not 24 hours a day. I still feel this is a move toward charging per amount of data rather than for access speed.

7
japhyr 8 hours ago 2 replies      
The fine, which AT&T says it will fight, is the largest ever levied by the agency.

Does anyone know how likely this fine is to stick? It sounds like a significant fine to me, but I wonder if these kinds of fines are often appealed down.

8
madaxe_again 7 hours ago 1 reply      
It'll be a cold day in hell before this sticks. They'll use every slippery tactic in the book to justify it and to fight it, they'll bribe^Wlobby the appropriate parties to legally define "unlimited" as "limited", and even if they are stuck with it, they'll just not pay.

I mean, what, are they going to arrest executives? Give me a break. There's no recourse either way.

9
rasz_pl 6 hours ago 0 replies      
There are countries where you can't simply LIE in commercial/promotional material. I remember the case of Apple being fined, and their ad campaign pulled, when they claimed to sell the world's fastest, most powerful personal computer (PowerPC times).

On the other hand, in my country it's OK for actors to lie about being doctors in commercials :/ ("I'm a doctor and X is best for you")

10
Zekio 8 hours ago 1 reply      
Throttling speeds after a certain amount of data is not equal to unlimited... serves them right for using "Unlimited" wrongly :)
11
fnordfnordfnord 5 hours ago 0 replies      
If this were a just world, in order to appeal the fine AT&T would have to first pay the fine, Net 30, and deal with the federal courts via an outsourced call-center in order to receive a credit on their account.
12
CRASCH 7 hours ago 2 replies      
I think this falls under a reasonable interpretation of the unlimited.

A reasonable person would understand that there are bandwidth limits, both technological and environmental. A reasonable person would also expect that the level of service they signed up for would continue or get better over time.

I see two issues.

One is that after a certain amount of data is used they limit bandwidth. If you limit something it is hard to call it unlimited.

The other issue is that early on throttling was not in place. They specifically added throttling to entice users to switch to more lucrative data plans.

13
jdlyga 7 hours ago 2 replies      
AT&T is still doing this as of yesterday. I just got a text that I've used 75% of my "unlimited" plan
14
ytdht 6 hours ago 1 reply      
I think AT&T should be fined (or be the target of a class action lawsuit) for constantly lying to customers and future customers... the most common example being lying about U-verse being fiber-optic to their customers' homes (while it only goes to a central box in the neighborhood).
15
negrit 6 hours ago 0 replies      
The issue with this kind of fine is that the profit is greater than the fine, so they will continue to do shady things like this.

Also, the people in charge of approving this should be held accountable.

16
beambot 7 hours ago 2 replies      
[Sorta OT...] Ugh, now we just need to goad Comcast into improving their peering.

It's pretty sad when the TV viewing experience is better via torrents than Netflix. Comcast is doing some serious throttling.... For me, the Netflix stream is all pixelated, yet we can pull the entire hour-long HD content via torrent in ~5 minutes. Something is amiss.

17
williesleg 1 hour ago 0 replies      
So that means our rates are going up again.

So sick and tired of these hidden middle-class taxes.

18
sschueller 7 hours ago 0 replies      
Swisscom in Switzerland sells unlimited data plans that are capped at different speeds depending on how much you pay per month. Just like a DSL or cable plan.

I find this a lot more fair than selling unlimited that isn't. Or killing grandfathered accounts by capping them.

19
rail2rail 7 hours ago 2 replies      
> But consumers are unlikely to receive any money from the fine, which will go instead to the U.S. Treasury, said the agency official.

Well why the hell not?? If we were the wronged party, should we not benefit from the settlement directly?

20
mamcx 6 hours ago 2 replies      
This is so sad.

The fine is paid to somebody other than the victim.

It's like when Intel got fined for screwing AMD, and the money went to some EU institution: why not pay it to the victim?

That is why this is stupid, and no justice at all.

21
revelation 7 hours ago 0 replies      
We need to stop calling this practice "slowing down" or "throttling". If you are slowed down, you'll be limited to 56kbit or less by artificially induced packet loss. At that point, most websites and other internet services will just completely stop working as the massive packet loss suffocates any payload.

It's like advertising unlimited miles on a rental car, then slowing it down to 5mph after 200 miles. Sure, the car still moves, but you can't practically use it for anything.

22
newobj 7 hours ago 1 reply      
Umm, did they STOP throttling in addition to this settlement? It feels like they did. I was getting throttled like crazy in March and April (I have no internet at my house other than AT&T LTE for stupid reasons, so I have to tether all the time), but in May and June, I seem to mostly never get throttled anymore... or if I do it's much more modest. Anyone else notice a change?
23
codazoda 7 hours ago 1 reply      
T-Mobile throttles my "unlimited" family plan. The main number gets 3GB and each additional line gets 1GB, and then it is throttled. Are they also on the radar, or is it less of a problem for them because they give you the throttle data up-front (while still using the word unlimited)? In reality, however, when you hit your limit it becomes almost unusable.
24
deegles 7 hours ago 1 reply      
My unlimited plan gets throttled after 5GB usage. From what I understand, a 30GB family data plan won't get throttled until the 30GB are used up. If this is still true, how is it that throttling at 5GB is for "network management"?
25
d0ugie 6 hours ago 0 replies      
By the way, go here if you'd like to request a Project Fi invite: https://fi.google.com/signup
26
allsystemsgo 7 hours ago 0 replies      
I received a text just the other week from ATT letting me know I reached 75% of the 5GB network management threshold, and that I may experience reduced data speeds. Anything I can do about this now?
27
twoodfin 7 hours ago 0 replies      
I'll be shocked if after this AT&T continues to grandfather in their "Unlimited" plans.

Which is too bad, because mine is a really great deal even treated as a 5GB/device plan.

28
random778 5 hours ago 0 replies      
I'd like the fine to be in the form of refunding affected customers for the period they were defrauded.
29
flippyhead 1 hour ago 0 replies      
So awesome.
30
calbear81 7 hours ago 0 replies      
What's the likelihood this will lead to some type of compensation for unlimited data users that were throttled?
31
dsp1234 7 hours ago 0 replies      
I'd like to see something like what's printed on food packaging:

"No artificial limiters added"

32
ianstallings 6 hours ago 0 replies      
Can we classify this as a revenue generating legal briefing on the net neutrality issue? Or is that wishful thinking?
When Solid State Drives Are Not That Solid algolia.com
296 points by Shipow  2 days ago   118 comments top 26
1
ploxiln 2 days ago 4 replies      
Originally TRIM was an un-queued command; all writes had to be flushed, then TRIM executed, then writes could continue. This was bad for performance with automatic on-file-delete trim, so everyone wanted a trim command that could be put in the command queue along with writes. Many new drives have this.

It turns out that Samsung 8XX SSDs advertise they support queued trim but it's buggy. The old TRIM command works fine.

https://lkml.org/lkml/2015/6/10/642

There are in fact lots of "quirks lists" and "blacklists" in the kernel and virtually all computers require some workarounds in the linux kernel for some buggy hardware they have. Pretty amazing when you think about it.

EDIT: another closely related example is MacBook Pro SSDs and NCQ, aka native command queuing. They claim they support it, but on many it's buggy. It gets better though; the Linux kernel only started trying to use such functionality by default relatively recently.

https://bugzilla.kernel.org/show_bug.cgi?id=60731

these sort of things are, as you can see, very confusing and frustrating to track down, identify, and find a general fix for

EDIT2: now that I actually read the kernel bugzilla entry further, it's more recently come to light that the actual problem with recent MacBook Pro SSDs is MSI (a more efficient type of interrupt)

2
ChuckMcM 1 day ago 0 replies      
Nice debugging story. When I was at NetApp there were lots of times when drive firmware for the 'less used' options would fail. On the Fibre Channel drives, the 'write zeros' command, which was supposed to zero a drive, was notorious in its inability to achieve something that simple. When Google looked at the disk encryption technology (I don't know if they finally deployed it), it worked differently disk to disk and firmware rev to firmware rev. I think it was Brian Pawlowski at NetApp who said "You can count on two things working right in a hard drive: read, write, and seek." The joke being that you needed all three of them to work for reliable disk operation.
3
teraflop 1 day ago 1 reply      
Here's an Ubuntu bug tracker entry for what sounds like the same problem: https://bugs.launchpad.net/ubuntu/+source/fstrim/+bug/144900...

Linux 4.0.5 includes a patch that blacklists queued TRIM for the buggy drives. Windows and OS X apparently don't support queued TRIM at all, so they're unaffected.

4
jlebar 1 day ago 5 replies      
To me, this sort of thing brings home the value of not running your own machines. Sure, Amazon's/Google's clouds have quirks, but it's far less likely that you're going to have to debug faulty hardware in this way. It sounds like a team of more than one person worked on this at least part-time for weeks -- how much is that worth? It's not just the cost of hiring extra people to do the work; often small companies simply can't hire enough good people -- when you do find them, do you want to squander them twiddling servers?
5
MrBuddyCasino 1 day ago 0 replies      
Not directly related to TRIM, but AeroSpike has a nice test suite for SSDs, probing for IOPS and latency: https://github.com/aerospike/act

They share their test results for both physical and cloud-based storage, I figured this would be of interest:

http://www.aerospike.com/docs/operations/plan/ssd/ssd_certif...

6
cabirum 1 day ago 3 replies      
Strange, Samsung 840/850 evo/pro are considered [1][2] among the best consumer SSDs. The issues the article mentions do not exist on Windows, where the SSDs are very reliable. I suspect it's not only Samsung's fault. Are we sure Linux's handling of TRIM operations is absolutely correct?

[1] http://techreport.com/review/27062/the-ssd-endurance-experim...

[2] http://www.anandtech.com/show/8216/samsung-ssd-850-pro-128gb...

7
madez 1 day ago 2 replies      
It feels like Samsung used the Linux community here as a free testbed.

Samsung knew that only Linux supported queued trim, so releasing it without proper testing is just externalizing the disproportionately increased cost of testing to the Linux community.

8
sandGorgon 1 day ago 1 reply      
I have this running on my Ubuntu Thinkpad with a Samsung 840 Pro as a weekly cron job. Should I turn it off?

 #!/bin/sh
 # call fstrim-all to trim all mounted file systems which support it
 set -e
 # This only runs on Intel and Samsung SSDs by default, as some SSDs with faulty
 # firmware may encounter data loss problems when running fstrim under high I/O
 # load (e. g. https://launchpad.net/bugs/1259829). You can append the
 # --no-model-check option here to disable the vendor check and run fstrim on
 # all SSD drives.
 exec fstrim-all

9
andmarios 1 day ago 0 replies      
Been there, done that. :|

Sometime around the end of 2013 I started frequently getting lost data and corrupted filesystems upon reboot. After much searching, and about 4-6 months into the issue, I found out that the culprit was the queued TRIM commands issued by the Linux kernel to my Crucial M500 mSATA disk. The Linux kernel already had a quirks list with many drives, including some of the M500 variants, just not mine.

I added my model, compiled the kernel and the nightmare ended. I proceeded to submit a bug report and a patch. The patch got accepted (yay!) and the bug report turned to be very useful for other people with the same problem but different disk as I included the dmesg output that was specific to the issue. This meant that they could now google the errors and get a helpful result.

Such is the nature of free software; you are allowed to fix your computer yourself. :)

10
Aardwolf 1 day ago 2 replies      
"Samsung SSD 850 PRO 512GB", recently blacklisted as "850 Pro and later" in the 8-series blacklist.

That's what I have in my home computer, with ArchLinux.

Do you think this problem is something particular to the servers of the article's author, or should this be interpreted as:

linux + samsung 850 = you will lose your data?

Thanks...

11
notacoward 2 days ago 0 replies      
Pretty disappointing to see some of those Samsung drives on the list, because in some of the other tests/surveys I've seen they seemed to be among the better choices. Sigh, I guess Sturgeon's Law applies to SSDs too.
12
cft 1 day ago 2 replies      
Using SAS SSD drives in a server is a bad idea for many reasons. One should use PCIe cards that sit directly on the PCIe bus, such as FusionIO or SanDisk. They have been tested and retested (e.g. by Facebook), without the unnecessary added complexity of the SAS/SATA protocols. The I/O performance is also about 20x.
13
mrmondo 1 day ago 1 reply      
I've worked on some interesting SSD deployments / experiments a lot over the past 12 months. Quite honestly - I wouldn't go anywhere near Samsung products regardless of their 'PRO' labelling or otherwise.

We have had great success with both Sandisk Extreme Pro SATA and Intel DC NVMe series drives, we've also recently deployed a number of Crucial 'Micron' M600 1TB SATA drives that are performing very well and so far haven't given us any issues.

14
suprjami 1 day ago 0 replies      
What a wonderful story. I wish everyone was this diligent at troubleshooting. Then again, that would put me out of a job.
15
douglasheriot 2 days ago 3 replies      
Wow, that sucks. Another reason to use ZFS: you'd notice the corrupted files a lot sooner.
16
microcolonel 2 days ago 0 replies      
I've had issues with these Samsung 8xx drives; unfortunately they all happened at once. I gave up on their RMA/warranty process because I was bounced back and forth between the same two numbers a few times. Either side said that the other was in charge of the process (Samsung bought the SSD division from Seagate... or was it Seagate that bought the HDD division from Samsung? To this day I have no clue).
17
bbcbasic 2 days ago 4 replies      
I have a Samsung SSD 850 PRO 512GB in my Windows PC. And I have TRIM enabled in Windows:

 > fsutil.exe behavior query DisableDeleteNotify
 DisableDeleteNotify = 0
Should I be worried?

18
Aardwolf 1 day ago 2 replies      
I'm so sick of this TRIM. Constant configuration needed because of it, constant care like "this is something you'd better not do on SSDs". And then problems like this.

Do you think there'll ever be SSDs that don't need it?

19
lvs 2 days ago 1 reply      
Can someone clarify the article's claim that these Samsung drives are really "broken" as such? We have a few of these on 3.13 and 3.16 kernels and ext4 with no problems. It seems that there must be something unique to their application in order to expose these trim failures.
20
stream_fusion 1 day ago 1 reply      
I have one of the affected drives mentioned in the article in my development laptop - the Samsung SSD 850 PRO 512GB.

As one of the most expensive SSD drives available on the market, it was disconcerting to find dmesg -T showing trim errors when the drive was mounted with the discard option. Research on mailing lists indicated that the driver devs believe it's a Samsung firmware issue.

Disabling trim in fstab stopped the error messages. However, it's difficult to get good information about whether drive performance or longevity may be impacted without trim support.

21
kbar13 2 days ago 5 replies      
if one machine failed and failover kicked in correctly, why was the engineer paged?
22
anigbrowl 1 day ago 0 replies      
Interesting! I sometimes work with SSDs as storage media for cameras (where Sandisk is the most popular brand by a mile) and I seriously doubt any camera firmware is doing drive maintenance. From what I know of digital imaging technicians, neither are they - if a drive starts acting up in any way, the usual policy is to just take it out of service immediately, recover anything that was on it, dump it, and buy a replacement.
23
sengork 1 day ago 0 replies      
Given how many Samsung drives are listed in their findings, I can only attribute this to the fact that Samsung makes their own SSD controllers.
24
Figs 1 day ago 0 replies      
How do you disable TRIM on common distros? Under Ubuntu, is it just preventing /etc/cron.weekly/fstrim from running, or is there more to it? What about CentOS, etc?
25
frik 1 day ago 0 replies      
What SSD do cloud hoster like DigitalOcean, Linode, Rackspace, Vultr, etc use?

I would guess some sites trade storage speed for more space (HDDs instead of SSDs).

26
Supersaiyan_IV 1 day ago 1 reply      
Undoubtedly the same issue happened to me on an 500GB 840 EVO with NTFS.

The SSD zeroed out a part of the disk during runtime; as I watched this happen, music was playing from the drive. It was mounted from Ubuntu MATE 15.04, playing a music library through Audacious. Suddenly the music glitched and IO errors began appearing. Rebooted to a DISK READ ERROR (the MBR was on the EVO). Ran chkdsk from USB and it showed a ridiculous number of orphaned files for ca. 1h. Once it finished, the most frequently accessed files had disappeared: the Download folder, the Documents folder, some system files. Of course, some of the files could've been recovered had I not run chkdsk off the bat, but nonetheless it's an approximate measure of the failure's impact.

I began being suspicious of 840 EVO when sorting old files by date became fantastically slow. If you have a feeling this has happened to you recently - buckle up for a shitstorm.

TL;DR Avoid 840 EVO.

Optimizing an Important Atom Primitive atom.io
293 points by mrbogle  1 day ago   88 comments top 18
1
jerf 1 day ago 7 replies      
You know, it's funny how it's 2015 and we're just dripping with raw power on our developer machines, yet, open a few hundred kilobytes of text and accidentally invoke a handful of O(n^2) algorithms and blammo, there goes all your power. Sobering.

Edit: We need a type system which makes O(n^2) algorithms illegal. (Yes... I know what I just dialed up. You can't see it, but I'm giving a very big ol' evil grin.)
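One hypothetical flavor of the accidental O(n^2) being lamented (an illustration of the failure mode, not a claim about Atom's actual pre- or post-optimization design): if an editor stores marker positions as absolute offsets, every keystroke rewrites every marker past the edit point, so typing n characters with m markers costs O(n*m).

```python
def shift_markers(markers, edit_pos, delta):
    # O(m) work on every single keystroke: each absolute offset at or past
    # the edit point must be rewritten.
    return [m + delta if m >= edit_pos else m for m in markers]

markers = [0, 10, 20, 30]
markers = shift_markers(markers, 15, 1)  # insert one char at offset 15
print(markers)  # [0, 10, 21, 31]
```

Relative offsets (each marker stored as a delta from its neighbor in a tree) turn the same keystroke into a logarithmic update, which is essentially the optimization the linked article describes.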

2
martanne 1 day ago 1 reply      
A piece table[0] solves this rather elegantly. Since it is a persistent data structure, a mark can be represented as a pointer into an underlying buffer. If the corresponding text is deleted, marks are updated automatically, since the pointer is no longer reachable from the piece chain. Lookup is linear[1] (or logarithmic if you store pieces in a balanced search tree) in the number of pieces, i.e. non-consecutive editing operations.

[0] https://github.com/martanne/vis#text-management-using-a-piec...

[1] https://github.com/martanne/vis/blob/master/text.c#L1152
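For readers unfamiliar with the structure, here is a rough Python sketch of a piece table's insert path (a toy version of my own, not the C implementation linked above): the original buffer is never mutated; edits append to an add buffer and splice the piece list.

```python
class PieceTable:
    """Toy piece table: the document is a list of (buffer, start, length)
    pieces over an immutable original buffer plus an append-only add buffer."""

    def __init__(self, text):
        self.original = text
        self.added = ""
        self.pieces = [("orig", 0, len(text))] if text else []

    def _buf(self, which):
        return self.original if which == "orig" else self.added

    def insert(self, pos, text):
        start = len(self.added)
        self.added += text                     # append-only; never mutate
        piece = ("add", start, len(text))
        new, offset, placed = [], 0, False
        for which, s, length in self.pieces:
            if not placed and offset <= pos <= offset + length:
                split = pos - offset
                if split:                      # left half of the split piece
                    new.append((which, s, split))
                new.append(piece)              # the freshly added text
                if length - split:             # right half of the split piece
                    new.append((which, s + split, length - split))
                placed = True
            else:
                new.append((which, s, length))
            offset += length
        if not placed:                         # insert at (or past) the end
            new.append(piece)
        self.pieces = new

    def text(self):
        return "".join(self._buf(w)[s:s + l] for w, s, l in self.pieces)

pt = PieceTable("hello world")
pt.insert(5, ", there")
print(pt.text())  # hello, there world
```

The number of pieces grows only with non-consecutive edits, matching the lookup cost described above; a real implementation would also coalesce consecutive appends and support delete.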

3
drewm1980 16 hours ago 2 replies      
I really, really, don't get the whole "implement everything using web technologies" thing. As an outsider from that dev ecosystem it looks like the youtube videos you see of people implementing electronic circuits in Minecraft.
4
dunstad 1 day ago 2 replies      
I tried out Atom a few weeks ago. I loved the UI! Absolutely fantastic, beautiful, nothing but praise there.

But I had so many issues with stability, and really missed small but important features that were present in my other editors. I also found that most of the plugins worked either poorly or sporadically.

In the end, I decided that it was not worth either using Atom or spending time contributing to it when I have some "pretty close" solutions today. Definitely looking forward to the 1.0 version though, and hats off to all those spending their time contributing to it. I'm sure it's going to become something great!

5
Veedrac 1 day ago 1 reply      
I actually just retried Atom yesterday. Aside from the normal complaints (it's sloooww, undo doesn't affect markers or selections), one thing that struck me is that markers can't be zero-width. Well, they can but they won't show up. I'm wondering if this is related to the technique mentioned here - it's certainly been a pain to work around. Sublime Text even has multiple options for this (DRAW_EMPTY and DRAW_EMPTY_AS_OVERWRITE).

That said, I'm loving the API design. Coming from Sublime Text, it's a massive upgrade. The ability to embed literally anything a web browser can render in a well-designed framework is mindblowing.

6
twic 1 day ago 0 replies      
Didn't the Xanadu project solve this problem in 1972?

https://en.wikipedia.org/wiki/Enfilade_%28Xanadu%29

Solve it, keep it secret, and then fail to properly write about it to this day.

7
Erwin 1 day ago 0 replies      
If you thought this was an interesting article, here's the obligatory link to just about the only book on crafting a text editor, "Craft of Text Editing": http://www.finseth.com/craft/
8
octref 1 day ago 0 replies      
Recently I learned that all pre-1.0 contributors to Atom will receive a gift, and when I asked a GitHub staffer when I would receive it (I'm moving during this summer) he mentioned it would be sent out in early July. I guess we can expect a pre-1.0 before August.

One of the main remaining pieces of functionality to be implemented is good support for large files. Looking at this issue [1], it seems the Atom team is making some progress, but there are still some problems to be tackled.

In 0.208.0 (released 7 days ago) they mentioned in the changelog that Atom now opens files larger than 2MB with syntax highlighting, soft wrap, and folds disabled, and that they'll work on raising the limits with these features enabled moving forward. I'm a little disappointed at the progress, as you could open large files with these features disabled a long time ago through the "view-tail-large-files" package.

Just updated to 0.209.0 and am using ember.js (1.9 MB) to test. Editing/scrolling has some delays but it's better than previous versions.

Good luck Atom team!

[1]: https://github.com/atom/atom/issues/307#event-325455529

9
revelation 1 day ago 1 reply      
Yet, the onKeyDown handler still takes 50ms. Are you kidding me? You can push a billion tris in that time.
10
ohitsdom 1 day ago 0 replies      
Appreciate the candidness of the team writing about their naive approach. Definitely would have been a simpler fix to just search the currently visible text, but I'm glad they fixed the root issue to make markers more efficient for all.
11
alexchamberlain 1 day ago 0 replies      
What is the data structure used for the text itself? A rope? The markers could be stored as offsets to the substrings themselves.
12
msoad 1 day ago 1 reply      
This kind of knowledge and experience has existed on the Microsoft campus for years, thanks to the Visual Studio team. That's why Code is much more efficient. I only wish it were open source, so I could totally move away from Sublime Text.
13
romaniv 1 day ago 1 reply      
This reminds me of how I re-implemented nested sets in relational databases as spans in a "coordinate" system.

 | Root        |
 | Node | Node |
 | Node | Node |
I stored only "X" and "Y" coordinates for every node, so you had to read the "next" node in a row to get the current node's "size".

It was a bit more human-readable when looking at the data. More importantly, it reduced (on average) the number of nodes I needed to update on insert compared to nested set and gave an easy way of retrieving immediate children. But you still had to "move over" all the nodes "right" of the one you're inserting.

The structure in the article looks eerily similar. I wonder whether it's somehow possible to apply GitHub's optimization to this "coordinate" based schema and make it relative without messing up the benefits of column indexing. Hm...
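A minimal sketch of the coordinate idea described above (invented names, not the commenter's actual schema): each node stores only its x position and y depth, and inserting shifts every node to the right of the insertion point.

```javascript
// Each node stores only (x, y); a node's width is implied by the x of the
// next node on the same row, as described in the comment above.
function insertAt(nodes, x, y, width) {
  for (const n of nodes) {
    if (n.x >= x) n.x += width; // the costly "move over all nodes to the right" step
  }
  nodes.push({ x, y });
}

// A root spanning [0, 8) with two children at x=0 and x=4.
const tree = [{ x: 0, y: 0 }, { x: 0, y: 1 }, { x: 4, y: 1 }];
insertAt(tree, 4, 1, 2); // insert a width-2 child between the two existing ones
// The child formerly at x=4 has shifted to x=6.
```

The shift loop is exactly the O(n) cost the comment mentions; GitHub's trick of storing positions relative to a parent would localize it.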

14
imslavko 1 day ago 1 reply      
Vim also has a similar optimization: when a file changes, Vim only runs syntax highlighter on a visible part of the text + some buffer in both directions.
15
asQuirreL 17 hours ago 0 replies      
Hmmm... So the article seems to suggest that for every insertion of a character, a log time lookup is made. Is that really the case? If so, why is the leaf node that the cursor is in not saved? If you were to use a B+-tree implementation then you would already have access to neighbour pointers for rebalancing purposes, making the majority of incremental changes very cheap (constant time). This is just a thought, there may be good reasons why it's not possible.
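A self-contained toy (an invented structure, not Atom's actual marker index) illustrating the caching idea above: "leaves" are small character buckets, and a cached `{index, start}` pointer lets consecutive nearby insertions skip the descent (linear here, standing in for the log-time tree lookup).

```javascript
// Cache the leaf the cursor last touched so repeated nearby edits
// skip the lookup; descend again only on a cache miss.
class ToyTree {
  constructor(text, leafSize = 4) {
    this.leaves = [];
    for (let i = 0; i < text.length; i += leafSize) {
      this.leaves.push([...text.slice(i, i + leafSize)]);
    }
    this.cached = null; // { index, start } of the last leaf we edited
  }

  findLeaf(offset) { // stands in for the O(log n) tree descent
    let start = 0;
    for (let i = 0; i < this.leaves.length; i++) {
      const end = start + this.leaves[i].length;
      if (offset <= end) return { index: i, start };
      start = end;
    }
    throw new RangeError('offset past end of text');
  }

  insert(offset, ch) {
    const c = this.cached;
    const hit = c && offset >= c.start &&
                offset <= c.start + this.leaves[c.index].length;
    if (!hit) this.cached = this.findLeaf(offset); // descend only on a miss
    const leaf = this.leaves[this.cached.index];
    leaf.splice(offset - this.cached.start, 0, ch);
  }

  toString() {
    return this.leaves.map(l => l.join('')).join('');
  }
}

const t = new ToyTree('abcdefgh');
t.insert(2, 'X'); // cold: one descent
t.insert(3, 'Y'); // warm: reuses the cached leaf
// t.toString() === 'abXYcdefgh'
```

A real B+-tree would also use sibling pointers (as the comment suggests) so the cache stays warm even when the cursor crosses a leaf boundary.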
16
caiob 1 day ago 4 replies      
Does it open files >2mb yet? My terminal vim does.
17
z3t4 18 hours ago 0 replies      
One thing I love about vanilla JS is that you can both set and get through the same property. I wonder whether having separate setters and getters is enforced by CoffeeScript or a design decision of the Atom team?
18
baldfat 1 day ago 4 replies      
Atom is still a hog on my main programming machine. It makes it unusable for me still.

It's an old i3 Dell desktop from 6 years ago.

Y Combinator growth equity fund? sec.gov
279 points by kamilszybalski  1 day ago   67 comments top 19
1
xenophon 1 day ago 1 reply      
Another relevant article hinting at this development: http://www.businessinsider.com/y-combinator-raising-money-fo...

The impetus behind a growth equity fund, according to the article, would be to provide "long-term capital that allows startups to continue to operate in beta [sic, I assume -- they probably mean privately] without having to go public."

I can see why this approach would make sense for optimistic investors who are familiar with public-market investors' impatience with the kind of moonshot, long-term investments that are game-changing but don't pay off by next quarter's earnings call.

That's one charitable interpretation of this decision, if it's true -- Y Combinator wants to counteract the abundance of hedge fund money pouring into this space (with attendant expectations of a near-term public liquidity event) with strategic capital and a longer time horizon.

2
softdev12 1 day ago 5 replies      
It would be interesting if Y Combinator attempted to convert to an entity that was able to publicly list on a stock exchange and sell shares to the average investor - much like private equity shops Blackstone and Carlyle have done by going public.

https://www.wsws.org/en/articles/2007/06/blac-j25.html

http://www.carlyle.com/news-room/news-release-archive/carlyl...

If there is a bubble valuation in the private market relative to the public market, YC could potentially arbitrage the valuation difference into cash for its LPs.

3
datashovel 1 day ago 4 replies      
My question is, long term how does SV not become just another large bureaucratic / corrupt power center like D.C. or Manhattan? Today it feels good to see those who deserve it get rewarded, but I imagine that's how people in NY felt about Manhattan when it was a fraction of what it is today. Same of course with D.C. shortly after (for example) the American Revolution.
4
staunch 1 day ago 1 reply      
Most of the money to be made in VC is by doubling down on successful investments. YC has foregone billions by not doing this. Competing with later-stage VCs may incline those VCs to compete with YC in turn, which would be a very good thing for the world.
6
mwilkison 1 day ago 3 replies      
Is this a fund for follow-on investments in YC startups? In the past YC has indicated they dislike follow-on investments by accelerators since it sends a negative signal re: the startups they decide not to invest in.
7
gtirloni 1 day ago 4 replies      
I've heard that Delaware is 1) a safe haven for litigation and 2) most forward-thinking in terms of business-related bureaucracy. Is that true? What makes West Coast companies incorporate so far from SV?
8
sharemywin 1 day ago 0 replies      
YC has been talking more about working on bigger, more ambitious projects/companies, so maybe it's a way to fund things that won't get funded by traditional VCs.
9
phantom_oracle 1 day ago 3 replies      
They sure do take a playbook from the innovation they try to foster and do new things (although it isn't necessarily "new" in the sense that other funds like this don't exist).

In the business of business-acceleration, I guess this makes YC the McKinsey or GS?

Thing is, they can't keep stretching the payout to investors.

Even a moonshot (as a business) needs to experience a liquidity event of some sort, so they're either inflating the so-called bubble with this or...

They're playing dirty with some of their first-to-market companies by helping them grow and stay cheap enough until they emerge as monopolies (-redacted- AirBnB come to mind mostly).

Edit: to my surprise, Uber isn't a YC company, edit made.

Edit 2: I am checking a list of YC companies and other big ones I see that have potential are:

- Disqus

- Heroku (exited so doesn't count)

- MixPanel

- Olark

- Embedly

- HomeJoy

- Stripe (of course!)

- Codecademy

- Firebase

I stopped at Summer 2011, but some of these are now so ubiquitous on the internet, that it makes you wonder...

10
rdlecler1 18 hours ago 2 replies      
YC: you probably want to run this by your legal counsel. By posting your open-ended 506(b) filing for a proposed growth equity fund on Hacker News (which you own), you are engaging in general solicitation and advertising, which requires a 506(c) filing. You don't want to end up like Goldman Sachs when they tried to offer the Facebook pre-IPO fund to their private wealth clients. The SEC shut them down.
11
hkarthik 1 day ago 0 replies      
If I had to guess, YCombinator is starting to diversify its funding strategies for early stage startups as it starts to diversify the types of startups that it invests in.

The same funding terms simply won't work for an e-commerce shop selling jellyfish compared to a company trying to commercialize nuclear power. This new type of fund probably allows them to fund the latter startups in a more appropriate way.

12
late2part 1 day ago 2 replies      
My Uber driver today was bemoaning how hard it is to get a job in the DC area with his name, Mohammed. I said that wasn't right, but that there's no reason he couldn't go by Michael or Moe. It's interesting to see that on this form, the "Related Person"'s last name is "YC CONTINUITY MANAGEMENT I, LLC". I suppose it's not terribly remarkable, but it goes to show that many of these forms have fields whose meanings are open to wide interpretation.
13
jackgavigan 1 day ago 0 replies      
It could be that they've invested all the money from their last fund and are just putting a new fund in place to continue making accelerator investments.
14
jaydub 1 day ago 0 replies      
Interesting that Kleiner Perkins is moving downstream http://www.nytimes.com/2015/06/17/business/dealbook/kleiner-...

Related?

15
andy_ppp 1 day ago 0 replies      
"Pooled Investment Fund Interests"[1] is checked, which means it's a fund holding shares in multiple companies? Not sure if they are going full-on VC?

Anyway I'll buy some :-)

[1] More info: https://www.moneyadviceservice.org.uk/en/articles/what-are-p...

16
pbiggar 1 day ago 1 reply      
`Does the Issuer intend this offering to last more than one year? No`

Does this mean that they're only offering entry into the fund in the next year, or that the money will all be distributed over the next year?

If the latter, this would imply that this is a single investment vehicle. Though the wording does imply the former, I would think.

17
nphyte 15 hours ago 0 replies      
I wish more companies innovated like these guys! The world being a happier place would actually be possible.

Good on y'all.

18
marincounty 1 day ago 0 replies      
So why not start up a pooled venture capital fund? It will be one risky fund, but they all seem risky?

I am eagerly awaiting the day Janet Yellen raises interest rates! There's too much free money being given out, and it's not going to the poor or the middle class. (I thought stricter banking regulations were good after the crash, but boy was I wrong!)

These investment entities (hedge, venture, etc.) have too much Monopoly money to throw around. Why shouldn't Y Combinator get in on the party? Actually, aren't they late to the party? 'Let's get the best loans, and while we're at it snag the reluctant retail investor whose 2008 wounds are starting to close, and who just might give up that bloody wad of cash sitting in a horrid CD?'

19
adoming3 1 day ago 0 replies      
"Just shut up and take my money" - every investor
Dynamics.js, a JavaScript library to create physics-based animations dynamicsjs.com
286 points by michaelvillar  2 days ago   47 comments top 15
1
drostie 2 days ago 1 reply      
Very cool. There is one thing that I didn't see here which was either a bug or a clever tuning of the numeric parameters: overdamping. When solving the equation:

 x''(t) = -2a * x'(t) - k * x(t)
(spring force k, linear friction a), the solution is generally a sum of solutions x = C exp(w t) for some arbitrary constant C and w = w(a, k). Plugging this in produces:

  w^2 + 2 a w + k = 0
  (w + a)^2 + k - a^2 = 0
  w = -a ± sqrt(a^2 - k)
For `k > a^2`, the system is "underdamped" and you see sinusoidal oscillations, and increasing `a` will make the system relax to equilibrium faster. But for `k < a^2`, the system is "overdamped" and increasing `a` will make the system relax slower. (If you find this hard to imagine, think about what happens as `a` tends to infinity: there is so much friction that the spring just barely crawls to its final destination. Comparisons involving molasses and other high-viscosity substances might be apt.)

When turning the frequency all the way down, I couldn't find a point where the friction started to make the relaxation to equilibrium slower rather than faster.
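A quick numerical sketch of the point above, using nothing but the roots derived in this comment: the slowest mode's decay rate rises with `a` up to critical damping, then falls again.

```javascript
// Decay rate of the slowest mode of x''(t) = -2a x'(t) - k x(t).
// Roots are w = -a ± sqrt(a^2 - k); relaxation is governed by the
// root whose real part is closest to zero.
function slowestDecayRate(a, k) {
  const disc = a * a - k;
  if (disc < 0) return a;       // underdamped: Re(w) = -a for both roots
  return a - Math.sqrt(disc);   // overdamped: the slow real root
}

const k = 1;
for (const a of [0.5, 1, 2, 8]) {
  console.log(`a=${a}: rate ${slowestDecayRate(a, k).toFixed(4)}`);
}
// a=0.5: rate 0.5000  (underdamped: more friction helps)
// a=1:   rate 1.0000  (critically damped: fastest relaxation)
// a=2:   rate 0.2679  (overdamped: more friction now hurts)
// a=8:   rate 0.0627
```

So a library that never lets the overdamped regime slow things down has presumably clamped or re-tuned its parameters, as the comment suspects.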

2
valgaze 2 days ago 1 reply      
If this library is too much muscle & you just need a serviceable animation library, check out Daniel Eden's animate.css: https://github.com/daneden/animate.css

Demo: http://daneden.github.io/animate.css/

Of course there's always the danger of "overdoing it" w/ ridiculous animations, but if you can avoid that temptation it's a very handy tool.

3
pimlottc 2 days ago 1 reply      
Neat. Couldn't this be implemented as a generator of easing functions, allowing the animation code to be handled by another project?

http://easings.net/
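One way the suggestion could look (a sketch with made-up parameter names, not Dynamics.js's actual API): close over the spring constants and return a plain easing function `t -> progress` that any tween library can consume.

```javascript
// Returns an easing function for an underdamped spring (zeta < 1),
// normalized so ease(0) = 0 and ease(t) settles at 1.
function springEasing(omega, zeta) {
  const wd = omega * Math.sqrt(1 - zeta * zeta); // damped frequency
  return t => 1 - Math.exp(-zeta * omega * t) *
      (Math.cos(wd * t) + (zeta * omega / wd) * Math.sin(wd * t));
}

const ease = springEasing(12, 0.5); // stiff-ish spring, moderate friction
// ease(0) is 0; the curve overshoots past 1, oscillates, and settles at 1.
```

This is just the closed-form step response of a damped spring, so the "physics" collapses into an ordinary easing curve that an external animation loop can sample.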

4
agumonkey 2 days ago 5 replies      
Does anybody else feel these kinds of animations aren't bringing a lot to the table? I was thrilled to watch Lollipop's material design (and even KitKat's Project Butter), but it quickly faded (sic). Instant, simple interfaces are needed for many tasks where I don't enjoy, or even have the luxury of, wasting time on distracting animations.
5
zecho 2 days ago 2 replies      
As a hacker on Hacker News I'd have to agree with the others that animations are a Very Bad Thing. They are distracting to me, an easily distracted person, who is currently avoiding work by reading Hacker News this afternoon.
6
drcode 2 days ago 1 reply      
Please don't show this to anyone at Apple... I don't want the buttons on my next iPod to do a little giggly dance every time I press them.

(BTW- This is great work, I just hope UI devs don't overdo it...)

7
talmand 2 days ago 2 replies      
It's a nice library but I'm failing to understand the "physics-based" part.

I'm not seeing anything "physics" in the traditional sense; it's just manipulating the matrix3d in a transform property of an element in real time to provide effects that are difficult or impossible with CSS alone.

8
th0ma5 2 days ago 1 reply      
Amazing library! I question it being targeted towards UI elements. I guess I'm thinking about the "condescending UI" arguments. On phones at least we can mostly turn it down, at least for system-level animations.
9
lacker 2 days ago 1 reply      
Interesting that the package is installed with `npm install dynamics.js`. Looks like `npm install dynamics` is a different library. I wonder if we are running out of npm namespace.
10
iraldir 1 day ago 0 replies      
Yeah, I don't really see the point compared to GreenSock's GSAP, which is really performant and has some cool plugins.
11
html5web 2 days ago 0 replies      
Awesome, thanks for creating this!
12
z3t4 2 days ago 1 reply      
For anyone wanting to do graphics in web pages I would recommend looking into the canvas tag. Example code:

  const canvas = document.querySelector('canvas');
  const context = canvas.getContext('2d');
  context.beginPath();
  context.moveTo(100, 100);
  context.lineTo(100, 300);
  context.lineTo(300, 300);
  context.closePath();
  context.stroke();

13
tomphoolery 2 days ago 1 reply      
OK so how do I make this control an LFO in my music? ;)
14
johnalxndr 2 days ago 0 replies      
Excited to play with this, cheers!
15
denniskane 2 days ago 4 replies      
For everyone who loves the infinite possibilities of client-side JavaScript, and who at the same time does not love the blink-taggy smell that this kind of library evokes, I present to you the world's first distributed, web-based operating system: see https://www.urdesk.net/desk

This has been under very active development for going on 3 years now, and it is just starting to get to the point where it can stop being called a prototype or experiment.

U.S. Bans Trans Fat bloomberg.com
257 points by adventured  1 day ago   318 comments top 30
1
cowpig 1 day ago 8 replies      
> The U.S. market size for palm oil is 2.6 billion pounds (1.2 billion kilograms) annually, he said. He expects that to increase by half a billion pounds a year once trans fats are eliminated.

This is really, really bad news.

http://www.saynotopalmoil.com/Whats_the_issue.php

2
LordKano 1 day ago 9 replies      
Does anyone else remember when the Center for Science in The Public Interest pressured everyone to move away from saturated fats to trans fats?

The people leading the charge on this are the primary reason why we went to trans fats in the first place!

3
azdle 1 day ago 1 reply      
> "I don't know how many lives will be saved, but probably in the thousands per year when all the companies are in compliance," said Michael Jacobson, executive director of the Center for Science in the Public Interest.

Ironic since CSPI is one of the major reasons that we are using trans-fats today.

4
Vraxx 1 day ago 2 replies      
Thank goodness. This will finally put an end to companies including trans fat in their products but labeling 0 grams of trans fat due to serving size and other stupid tricks. I think consumers as a whole can get over pie crust not feeling the same for a little bit, as well as a slightly different texture for frosting.
5
danso 1 day ago 0 replies      
This story is a great reminder that there's a lot of potential in studying the Regulations.gov API and writing an interface (and heuristic) for surfacing interesting rules and regulations... the trans-fat rule has been up for comment for a long while now:

http://www.regulations.gov/#!docketDetail;D=FDA-2013-N-1317

Maybe I'm in the minority of people who hadn't heard that this trans-fat regulation was soon to be implemented (as a former New Yorker, it caught me by surprise)... but there are probably many upcoming rules that are worth knowing about before they get published.

6
tswartz 1 day ago 0 replies      
Pros and cons to everything. Replacing trans fat with palm oil will likely increase the negative impact some of that industry has on the environment.

https://news.vice.com/article/indonesia-is-killing-the-plane...

7
mhurron 1 day ago 3 replies      
> "I don't know how many lives will be saved, but probably in the thousands per year when all the companies are in compliance,"

I doubt it. People are going to continue to eat way too much of things they shouldn't, or too much of things that should only be eaten in moderation. It's not like trans fats are the only thing people are eating too much of that is killing them.

8
jwally 1 day ago 14 replies      
By these standards, how is alcohol still legal?

Alcohol-Related Deaths: Nearly 88,000 people (approximately 62,000 men and 26,000 women) die from alcohol-related causes annually, making it the third leading preventable cause of death in the United States.

-http://www.niaaa.nih.gov/alcohol-health/overview-alcohol-con...

EDIT: I'm not advocating that alcohol be illegal. I couldn't care less, to be honest.

I guess my comment was more a question of how you can ban substance "A" because "it's bad for you," but not ban substance "B," which kills 90k people in the U.S. annually? It just seems cherry-picked, is all. If the gov't is going to ban bad things, how does substance "B" get a pass?

9
leonardicus 1 day ago 2 replies      
That's nice, but I think it will be ineffectual in practice. As soon as you start cooking with any unsaturated fats, a portion will isomerize to trans fats. I doubt major food manufacturers are using only trans-fat sources in today's climate of health-conscious (or hypervigilant) practices.
10
nsxwolf 1 day ago 0 replies      
The food police were wrong and killed a lot of people with trans fats. Can we have our lard back now? Oreo cookies taste like crap with whatever they've replaced the trans fats with.
11
niuzeta 1 day ago 4 replies      
I still don't quite get it; couldn't they have just mandated labeling of trans-fat usage/inclusion on products, and let consumers avoid it? By the same standards, we're still letting people smoke and drink.
12
cheshire137 1 day ago 1 reply      
Maybe I'll finally be able to find snack cakes and biscuits without partially hydrogenated oil. That stuff is everywhere, even when the label proudly claims '0 trans fat!'. Such a crock.
13
thescriptkiddie 1 day ago 1 reply      
But vegetable shortening necessarily contains large amounts of trans fat. Does this mean that currently vegetarian foodstuffs will switch back to lard?
14
hasenj 1 day ago 2 replies      
Shouldn't there be some kind of regulation on products that use a lot of sugar? Especially those sold in large quantities...
15
TorKlingberg 1 day ago 1 reply      
There is something about nutrition that always sparks a 500 comment thread on HN. I am not sure why.
16
kdamken 1 day ago 0 replies      
It's strange that they're putting these measures in place when sugar is significantly more damaging to people. High fructose corn syrup would have been a much better target.
17
protomyth 1 day ago 1 reply      
Since this is by regulation and not statute, what's to stop the next administration from simply changing its mind?
18
jokoon 1 day ago 0 replies      
I live in France, and apparently there is no law forcing manufacturers to label those. I often see "hydrogenated oil," but I don't know if it's possible to get away with the tricks mentioned in the article.

Got coronary disease in my family...

19
NoMoreNicksLeft 1 day ago 0 replies      
> "I don't know how many lives will be saved, but probably in the thousands per year when all the companies are in compliance," said Michael Jacobson, executive director of the Center for Science in the Public Interest.

"Saved" in this context means what, living 3 months longer because your heart disease is slightly milder than it would otherwise have been?

20
geoffbrown 1 day ago 0 replies      
ADM, you are going to have to pry the coconut oil out of my cold, dead, greasy hands!
21
neosavvy 1 day ago 1 reply      
But what about biscuits!!!

Fried Chicken!!!

Alton Brown recommends using Crisco, he's like a scientist.

22
a8da6b0c91d 1 day ago 2 replies      
Does this mean the end of no-stir peanut butter? I hate that stuff you have to mix all the time.
23
moron4hire 1 day ago 2 replies      
This issue is the most important health and safety issue in the country right now. Add up all of the deaths due to obesity-related diseases and it's an obvious low-hanging fruit for improvement. For all the yellow journalism surrounding cars or drugs or strangers or vaccines or guns, they are all drops in the bucket compared to heart disease.

But that said, I don't think the FDA or CDC are doing anything functionally significant towards improving the issue. There is a greater underlying question that still needs to be answered: why has the problem continued to get worse, despite ALL efforts otherwise?

There is a gun against the head of 2/3rds of Americans and nobody is asking how it got there and why it's still there. I'm afraid it's because we keep arguing about the shape and color of the bullets and how you should really be better at dodging bullets if you want to stay alive.

24
ageofwant 1 day ago 0 replies      
And so dies the last of Indonesia's remaining natural forests. Covered by cultivated palm trees.
25
logicallee 1 day ago 2 replies      
wow - that is amazing. An outright ban. No "strict labelling laws," no "strict regulation" of the amounts or kinds of foods that can still have it. A ban.

I'm surprised!

26
anon3_ 1 day ago 0 replies      
I thought this stuff was already banned in the 90's.
27
notNow 1 day ago 0 replies      
"As for frying, palm oil is expected to be a go-to alternative"

That's like jumping out of the frying pan into the fire, pun intended!

28
bamie9l 1 day ago 0 replies      
"As for frying, palm oil is expected to be a go-to..." uhoh
29
rm_-rf_slash 1 day ago 0 replies      
Huh, so apparently processed foods will be harder to make.

What a shame.

Mathematicians Are Hoarding a Type of Japanese Chalk gizmodo.com
241 points by curtis  2 days ago   106 comments top 21
1
hkmurakami 1 day ago 1 reply      
I remember news of the demise of the company hitting Japanese media outlets a couple of months ago. Here are some tidbits from that article that weren't covered by the Gizmodo piece:

- A group of American mathematicians reportedly purchased 1 metric ton of the chalk.

- Hagoromo was a technologically pioneering company, with things like (1) innovations in a greener manufacturing process for chalk, (2) chalk that can write on a wet blackboard surface, and (3) colored chalk that could be discerned by people who are color blind.

- The company is shutting down not because of revenue problems, but because of the ailing health of the President and lack of a successor. Iirc the current president is the 3rd head of the company in its history.

2
alister 1 day ago 6 replies      
All sorts of high-quality stationery are becoming impossible to find. I used to love poking around real stationery stores as a kid. Today's "warehouse" stores are a lot less fun than a real stationer. For people who don't know, a real stationer would have 10x the number of SKUs that Staples and Office Depot carry.

- Elastic bands from almost any store today are now a synthetic beige or grayish-tan material that is much less springy than pure rubber, and it rots leaving a sticky residue in less than a year. A real stationer used to have dozens of shapes and sizes of elastics, not 3 or 4 choices you have now.

- A high quality manual stapler is really hard to find. I haven't found any equal to my Apsco 2002 stapler which is no longer made. Everything I tried jams more often.

- If you want to put an envelope snugly inside a slightly larger envelope -- for example, to enclose a reply envelope -- good luck finding that in any brick-and-mortar store today. You'll have to order it.

Those huge arts and crafts stores like Michaels do have an overlap with what a real stationer used to be, but it's not a superset.

I've noticed some differences by country too: Compared to the US, good quality stationery is very hard to find in Brazil (even in rich neighborhoods)[1]. Generally in brick-and-mortar stores in Western Europe, you find high-quality stationery more easily than in the US warehouse stores. I've never been to Japan, so I'm curious to know what it's like there.

[1] http://brazilsense.com/index.php?title=Items_more_expensive_...

3
MichaelCrawford 1 day ago 4 replies      
I personally cannot fathom why anyone at all uses whiteboards.

When I was at Caltech, we called our Physical Chemistry prof "Wild Bill Goddard" because he wore a cowboy hat and boots to his lectures. His course was largely conceptual, illustrated with balloon-shaped drawings of electron orbitals, drawn with 3-D projection in which the yellow and red were in the plane of the chalkboard while the pale blue projected in and out.

We all complained that we could not see the blue diagrams. "That's OK, you're not supposed to, that's why I use pale blue."

One night Sonja Benson and I snuck into the classroom to put the arm on his blue chalk. We took it down into the steam tunnels, where she doused it with her hairspray, giving it a hard coating so it would not mark the board anymore.

The very instant Wild Bill saw what we had done, he cracked his blue stick in half then continued drawing orbital diagrams we were not meant to see.

4
GuiA 1 day ago 11 replies      
I grew up writing on blackboards, so obviously this is purely my subjective preference, but I also find blackboards much better to think on than whiteboards. One thing I particularly don't like is that I tend to write quite fast; the faster you write, the more likely it is that a marker will leave a thin, washed out, barely legible stroke, whereas chalk is always nice and clear. I also tend to leave things up on my board for a few days, after which a lot of dry erase markers become a pain to erase, whereas chalk always erases nicely.

I have a whiteboard in my office, but if I could get it switched out with a blackboard without bothering my nice office manager who already has way too much work to be bothered by my weird requests, I'd do it in a heartbeat.

5
ot 1 day ago 0 replies      
In case anyone's wondering what the formula on the blackboard is:

https://en.wikipedia.org/wiki/Tupper's_self-referential_form...

6
nwhitehead 1 day ago 1 reply      
In my mind the real tragedy is blackboards. Almost nowhere has good blackboards anymore except for math departments in older universities that stubbornly refuse to replace their old boards. Since the 1970s everything has been replaced with inferior materials.
7
fsk 1 day ago 2 replies      
The reason you have problems with markers is that, when a marker is out of ink, people put it back on the whiteboard instead of throwing it in the trash.
8
nosuchthing 1 day ago 1 reply      
Price history is interesting [1]. I'd guess that news of Hagoromo discontinuing this product went viral in January.

[1] http://camelcamelcamel.com/Hagoromo-Fulltouch-White-Chalk-72...

9
blt 1 day ago 3 replies      
Watch video lectures from MIT, they know what's up. 9 blackboards sliding on vertical rails. The lecturer doesn't have to waste time erasing boards, and the history stays up for a long time in case students fall behind on their notes.
10
NoGravitas 1 day ago 2 replies      
The solution to your whiteboard woes: don't use dry-erase markers. Use washable crayons (as sold for small children). You'll never be surprised by an empty marker again. They take a damp cloth to wipe off, but they wipe off completely, regardless of how long they've been on the board.
11
kqr2 1 day ago 0 replies      
OT, however, this made me think of Brandon Sanderson's book The Rithmatist where chalk and its formulation is an integral part of magic.

http://smile.amazon.com/Rithmatist-Brandon-Sanderson-ebook/d...

12
quasiresearcher 1 day ago 1 reply      
I remember a discussion about this on MathOverlow http://mathoverflow.net/questions/26267/where-to-buy-premium...
13
deerpig 22 hours ago 0 replies      
I have two large blackboards in my own office in Phnom Penh. I couldn't live without them. There is a shop that makes them out of wood. They aren't nearly as nice as a proper ceramic coated steel chalkboard (they still make these in china -- look it up on alibaba) but wood still does the job. The only problem is that it's difficult to find large chalk sticks. Couldn't find them in Cambodia or Thailand, but there is a shop in Vientiane (the capital of Laos) that bought a crate of buckets of multi colored sidewalk chalk years ago, and I'm the only one who buys it. I've now bought at least 10 buckets. The only problem is that there are only two white sticks in each bucket and Vientiane is very far from Phnom Penh.... sigh.
14
jipumarino 1 day ago 0 replies      
I taught with blackboard and chalk at a university and then at a high school for 9 years. I'm allergic and hated the chalk with a burning passion.
15
ryanobjc 1 day ago 1 reply      
Someone pointed out that it's really hard to erase whiteboards, while it's trivial to erase a blackboard. Better for top-secret stuff.
16
beloch 1 day ago 0 replies      
Another evil of whiteboard markers is that they dry out. Leaving them open hastens their demise. This is especially noticeable right at the end of their lives when capping them in between periods of drawing and switching between several dying markers can get you through a talk.

Chalk is not entirely without treachery itself. The common North American breed is thinner and prone to snapping if you press too hard. If you don't bring your own with you, you're likely to have to make do with one of the tiny stubs that have been left behind. It's also pretty common to run into chalk brushes that contain more chalk than your chalk box.

17
tenfingers 1 day ago 1 reply      
Blackboards are better, but the argument about superior chalk is pretty dumb.

I've had/used everything from blackboards to digital whiteboards. I still like blackboards better, but that's only going to last until digital displays improve.

Blackboards last forever. Whiteboards, especially the cheap ones, inevitably start to develop stains that you cannot clean without some solvent.

Blackboards work with essentially any chalk. You have to be careful about whiteboard markers, because many stink to oblivion (I get asthma from them), and many others don't erase properly. Some of the best markers I've tried are the water-based ones from Edding, but they're harder to clean, and you have to clean within a couple of hours in order to be able to clean at all. Chalk, on the other hand, lasts forever and can always be cleaned. If you want a perfect blackboard, just use a damp cloth.

There's this illusion that whiteboards are cleaner. Actually, they're not. Whiteboard "dry" markers work by depositing a fine powder on the surface. It's initially suspended in a liquid, and then sticks to the whiteboard "ideally" only due to electrostatic tension. Dry-marker powder is often toxic, the suspension liquid is often toxic (especially when you breathe it), and the powder sticks everywhere. If you're careful when erasing, a blackboard can be kept very clean. I have one in the kitchen.

The markers are expensive, especially when you want the good, water-based ones. I often have to order those. Chalk is inexpensive; I can buy it anywhere.

Why the hell are we using whiteboards??

I built several whiteboards myself. They suck. The best approach to a whiteboard is buying a piece of glass the size you want, and gluing a piece of white adhesive plastic on the back side. Dry marker on the glass always cleans. You might get stains if you leave the marker for days, but they come off easily, with just water.

Now what about digital whiteboards? It's a love-hate relationship. They're completely clean, which is what I like about them, but there are many downsides too. You need power, which means that you need to switch the screen off. A white/blackboard, on the other hand, shows you stuff all the time.

The screen is large, but the pixels are too large. The DPI, at close distance, is ridiculous. They're also too bright when seen at close distance. There's glare. The "pens" suck. But if you did the same with some e-ink technology, I would switch to digital whiteboards instantly, as it would solve all these problems at once. It looks like a dream application where the price of the display wouldn't matter much (ardesia costs a fortune nowadays anyway).

I had to use, at some point, an Epson projector which had some whiteboard capability on it. The pen had an infrared sensor/reflector on top, which meant you had to hold the pen without casting a shadow between the projector and the pen. Who's the idiot that thought this would work??

18
tel 1 day ago 0 replies      
I don't know the reason, but I have felt nearly intellectually crippled since losing access to the multitude of blackboards available on college campuses. Whiteboards do not even approach the utility of blackboards for me.

People in this thread have suggested fine-tipped markers and washable crayons (I will try them both!), but I truly feel I will simply not be able to recover the efficiency and value of a blackboard any time soon.

19
gsam 1 day ago 1 reply      
While I grew up with mostly whiteboards, there's just something magical about blackboards. Disregarding my disgust for screeching on a blackboard, mathematics on a chalkboard seems so natural. Where I study they've been constantly removing blackboards and even whiteboards. Now we're left with dozens of terrible document cameras, which nobody can be enthusiastic about.
21
facepalm 1 day ago 1 reply      
"there's no way to tell when a marker is running low"

Hm, maybe that's a problem that can be solved and earn the inventor good money?

SpaceX Hyperloop Pod Competition spacex.com
244 points by dtparr  2 days ago   127 comments top 25
1
guynamedloren 2 days ago 3 replies      
Any engineers or mechanically inclined folks around LA interested in assembling a team? Let's chat.

About me: full-stack software engineer with a degree in Systems Engineering from the University of Illinois Urbana-Champaign. I work on classic cars and build shit with my hands when I'm not coding. I'm into efficiency, and I loooove the idea of building a better, faster, cheaper CA high-speed railway.

2
michaelbarton 2 days ago 2 replies      
This immediately reminded me of the early days of steam when, very similarly, a one-mile test track of railway was constructed at Rainhill and people were invited to compete their different locomotives. The winner was Stephenson's Rocket, which amazed the large crowd by travelling at a heady top speed of 30 mph.

An "also-ran" was the amusing Cycloped, the only non-steam entry: a horse on top of a treadmill walking the carriage forward.

3
marbogast40 2 days ago 1 reply      
Wait But Why - Tim Urban's take: http://waitbutwhy.com/2015/06/hyperloop.html
4
lisper 2 days ago 5 replies      
> SpaceX will construct a one-mile test track adjacent to our Hawthorne, California headquarters

I'm not sure which would be the more remarkable achievement, getting the Hyperloop working, or obtaining the real estate to build a mile-long test track in Hawthorne.

5
dtparr 2 days ago 2 replies      
I thought this was an interesting tidbit from the linked rules pdf:

> In addition to hosting the competition, SpaceX will likely build a pod for demonstration purposes only. This team will not be eligible to win.

6
loceng 2 days ago 0 replies      
I'm really happy to see Tesla and SpaceX being run properly as platforms. Clearly Elon Musk has a great understanding of platforms from his online involvement with PayPal; however, there aren't as many obvious examples of it occurring in the physical product world. There are attempts, though they seem aimed more at controlling a user's behaviour and entrenching people in a recurring business model than at actually disrupting an unmanaged, disorganized system.

Platforms that generate a lot of value from APIs succeed by driving developer adoption through things like hackathons - and this effort seems no different. Kudos.

7
imglorp 2 days ago 5 replies      
I wonder what they will do about claustrophobia?

> The capsule itself would need to be small: 4.43 ft (1.35 m) wide and only 3.61 ft (1.10 m) high. No standing room

No windows, pod inside a metal tube, seats reclined with minimal head room. There's no rational reason to panic in that position, but many people will not be comfortable. Video displays might help, but only to a point.

8
Animats 2 days ago 1 reply      
The original Hyperloop system was supposed to be powered by linear induction motors spaced along the track. The pod doesn't have propulsion capability, except maybe an emergency system. So the pod/track interface is pretty much set by the propulsion system.

For trains, linear induction motors have usually been paired with magnetic levitation, as in the Transrapid system. But they don't have to be. Many roller coasters with a linear induction motor launch system have been built; Flight of Fear at King's Dominion, by Premiere Rides, was the first, in 1996. For a 1-mile test track, outsourcing the whole job to Premiere Rides or Intamin would be a good move. Intamin, which mostly makes amusement park rides, also has a transportation division. They build monorail systems. Intamin would just have to combine the car design from their P8 monorail[1] with their linear induction motor launch system.[2] By Intamin standards, a fast 1 mile loop with no hills is easy. ("We can put in a vertical loop for a small extra charge...")

[1] http://www.intaminworldwide.com/transportation/Home/news/Sha...

[2] http://www.intaminworldwide.com/amusement/RollerCoasters/LSM...

9
martinald 2 days ago 3 replies      
The projections of capacity seem way too low. It says it can do 840 passengers/hour.

In the UK, each 11-car train on the West Coast Main Line can take about 600 passengers seated, plus maybe slightly over 100 more standing. So around 700. We have 3 trains per hour between London and Birmingham, which adds up to a rough capacity of around 2,100 per hour.

This doesn't even take into account the multitude of slower trains that go between the two cities.

All these trains are generally congested as hell at peak (and increasingly off peak).

How is 840 passengers/hour enough for two big cities? I would assume making each pod bigger and heavier would require a greater stopping distance between them so the only way I can see to add more capacity is to build parallel hyperloops. At that point you've got the land take and the expense that comes with it.
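For reference, the 840/hour figure falls out of two assumptions in the original Hyperloop alpha proposal: 28 passengers per capsule and an average departure every 2 minutes (both figures quoted here as assumptions from that document):

```javascript
// Hyperloop headline capacity, per the alpha proposal's assumptions.
var passengersPerCapsule = 28;  // alpha proposal figure
var headwayMinutes = 2;         // average departure interval

var perHour = passengersPerCapsule * (60 / headwayMinutes);
console.log(perHour); // 840
```

So more capacity means either bigger capsules or shorter headways, which is exactly the trade-off the comment raises.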

10
zacharypinter 2 days ago 0 replies      
Very cool announcement.

It strikes me that this is a pretty good insurance policy against a poor implementation of Hyperloop being used as an argument against its feasibility. SpaceX/Musk gets to put out a proof of concept track and pod without committing to building the full thing themselves.

11
jcchin41 2 days ago 1 reply      
For those interested in getting a better handle on the vehicle thermodynamics, see [1]. Feel free to play around with the open-source Python model here [2].

[1] https://mdao.grc.nasa.gov/publications/AIAA-2015-1587.pdf

[2] https://github.com/OpenMDAO-Plugins/Hyperloop

[3] http://www.popsci.com/hyped-up-startups-race-hyperloop-life

12
omegant 2 days ago 1 reply      
The 9 engineering points at the end of the text are all very important, and each one of them is extremely difficult.

I would add pod decompression and emergency-braking scenarios. Common oxygen masks don't work above 50,000 ft, and you cannot dissipate the kinetic energy of a pod traveling at 900 km/h just by friction without a brake fire.
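A back-of-the-envelope check of that braking-energy point (the capsule mass here is an assumed round number for illustration, not a figure from the comment):

```javascript
// Kinetic energy a braking system must absorb in an emergency stop.
var massKg = 3000;                // assumed capsule mass (~3 tonnes)
var speedMs = 900 * 1000 / 3600;  // 900 km/h -> 250 m/s

var kineticJ = 0.5 * massKg * speedMs * speedMs;
console.log((kineticJ / 1e6).toFixed(1) + " MJ"); // 93.8 MJ
```

Tens of megajoules dumped in seconds is why friction-only braking from full speed risks a brake fire.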

13
fitzwatermellow 2 days ago 0 replies      
They plan on releasing the full requirements for the final design in August, but the rules document includes some example technical questions. Most calculations, I imagine, can be estimated using college physics and mechanics: drag coefficient, pneumatic pressure, heat flux, etc. But I'm curious if anyone has any references specific to air compressor propulsion systems? I'm thinking it would make a nice WebGL simulation that shows the relationship between speed and heat generated ;)
14
myth_buster 2 days ago 0 replies      
For those who couldn't access the site:

 Webpage screen grab [0]
 Guidelines [1]
[0] http://i.imgur.com/iq3W18x.png

[1] https://drive.google.com/file/d/18IkkbuxMbrzaVKHRnqXqtWimxF5...

15
ljk 2 days ago 2 replies      
> Neither SpaceX nor Elon Musk is affiliated with any Hyperloop companies

What does that mean? Didn't they come up with the idea of the Hyperloop?

16
capkutay 1 day ago 0 replies      
Does anyone know about the relationship between SpaceX and Hyperloop Technologies? It looks like Hyperloop Tech has a lot of ex-SpaceX engineers/execs, but they don't seem to have an active partnership with Musk or SpaceX.
17
robin_reala 2 days ago 0 replies      
Site was erroring when I visited; Google cache is here: https://webcache.googleusercontent.com/search?q=cache:www.sp...
18
mixmastamyk 2 days ago 2 replies      
If it works well, I'd like to see these in LA or other big cities too, not just long distance. Would be great to get on one of these and end up at the beach in Santa Monica at high speed instead of suffering through traffic.
19
revelation 2 days ago 0 replies      
It seems like there's way too little information here to build anything.
20
databound 2 days ago 0 replies      
It's funny that they're building a 1-mile test track while claiming no interest in running with the idea... seems pretty committed at this point.
21
SapphireSun 1 day ago 0 replies      
Hey Boston, want to join a feasibility meeting I'm setting up? Email me. :)
22
moey 2 days ago 1 reply      
"Neither SpaceX nor Elon Musk is affiliated with any Hyperloop companies. While we are not developing a commercial Hyperloop ourselves, we are interested in helping to accelerate development of a functional Hyperloop prototype."

I thought Elon wanted to create the Hyperloop. Here I was thinking I would see it in the next 5-10 years :(

23
jackreichert 2 days ago 0 replies      
I'm finding the center alignment of the text bothersome.

Aside from that, very cool!

24
CodeSheikh 2 days ago 1 reply      
"Neither SpaceX nor Elon Musk is affiliated with any Hyperloop companies" - yet it is hosted on spacex.com. I love the project, but I hate the legalities here. What does that mean? If something bad happens during operation, then these aforementioned entities can't be held responsible.
25
toomuchtodo 2 days ago 1 reply      
> Now if only I had an idea that had this much interest...

Or perhaps, more importantly, the resources to bootstrap the ideas..

> By the way, what kind of business strategy is this called?

Patronage or philanthropy, depending on how you see it. It's okay for DARPA to do grand challenges, but not Musk portfolio companies?

Let's Encrypt Launch Schedule letsencrypt.org
229 points by joshmoz  1 day ago   62 comments top 9
1
diafygi 1 day ago 1 reply      
I'm suuuper excited for this to launch! However, it's worrisome that the ACME protocol (what Let's Encrypt uses) still has a ton of bugs open[1] and they are still changing the protocol often. Just search for "TODO" on the spec markdown[2].

I want this project to proceed, but they should really focus on getting a much more mature and stable spec before launch. This isn't WebRTC, where you can just continuously tack on additional stuff or change the API constantly. It's TLS certs. The certs issued using this API end up telling people it's safe to input their passwords or credit card numbers.

I really hope the ACME spec gets stable before the launch in July.

[1]: https://github.com/letsencrypt/acme-spec/issues

[2]: https://github.com/letsencrypt/acme-spec/blob/master/draft-b...

2
qrmn 1 day ago 0 replies      
I gather they're not launching with ECDSA certificates (and obviously not with EdDSA or whatever comes out of CFRG, because that's still being discussed by the IETF/IRTF), but they're going to add it later. Any idea when?

What's the hold up; HSMs that'll do secp256r1?

Because of the huge performance improvement ECDSA brings over RSA, I know I'm not going to be deploying Let's Encrypt certs until I can get ECDSA ones (as well as RSA ones, presumably).

3
jtchang 1 day ago 1 reply      
I am really excited about this whole initiative. Mostly because encryption would already be standard at this point if not for the hurdles one has to face in deploying it.

What type of help is the Let's Encrypt team still needing?

4
tokenizerrr 1 day ago 0 replies      
Very glad to hear there is a launch schedule; I've been curious about how this project has been progressing. It's a fantastic initiative and I almost can't wait until September 14.
5
masida 1 day ago 4 replies      
Very nice initiative.

But for me, the biggest problem with the adoption of SSL is still that every domain name needs its own unique IPv4 address, and all the problems that come with that, not registering or paying for the SSL certificate.

At work, I usually use virtual hosting for about 100 domains on one IP address. I don't see us buying an IPv4 address per domain and adding them to the NIC configuration one by one. Once we can safely ignore IPv4 and use IPv6 only, it will probably become easier and cheaper.

6
general_failure 1 day ago 2 replies      
Can someone clarify if revocation is free with Let's Encrypt?

Also, who pays for all this infrastructure? Mozilla?

7
worklogin 1 day ago 2 replies      
Do Chrome and Mozilla have Let's Encrypt in their Root stores? I don't see them.
8
jglauche 1 day ago 6 replies      
Damnit, my existing cert expires September 12. Any free alternatives to that?
9
EGreg 1 day ago 3 replies      
Can someone summarize why this is better than, say, StartSSL or AlphaSSL?
How to Become a Great JavaScript Developer ustunozgur.com
238 points by ludwigvan  14 hours ago   142 comments top 40
1
smlacy 8 hours ago 11 replies      
No. Not this. Not even close to this. Let me illustrate:

How to become great at Sports:
- Read books about sports.
- Watch other people play sports.
- Read in-depth analysis of past sports games.

How to become great at Playing Violin:
- Read books about playing violin.
- Read sheet music by the great masters.
- Listen to many concerts.

It's inherently obvious that the above approaches are totally and completely wrong. How do you become good at anything? You practice it. You do it. You have a coach or teacher or mentor who can give you pointers, but lets you make mistakes. You have someone who lets you get things wrong the first time, so that you can see the consequences.

There is a time and place for theory, for reading, for analysis, and this is too part of the learning process, but this is not at all the most important tool for becoming great at something.

Theory, analysis, critique, history, and context all matter when learning any skill, but first, you must build the fundamentals. First, you must do the thing and once you reach a level where you really understand the thing, then and only then can you begin the more introspective task of theory and analysis. Even without these steps, the best way of becoming great is to DO.

2
fridek 12 hours ago 3 replies      
The best thing that happened to me as a JavaScript developer was being exposed to other languages. Especially the not-so-fancy Java and C++, really opened my mind about structuring code and planning for a long-term project.

Knowing JavaScript and JavaScript frameworks is surprisingly useless for the type of work JS devs usually handle. The documentation often consists of TodoMVC-type examples and approaches. Hardly anyone explains how patterns like DDD, SOLID, advanced REST, etc. fit into this. The highly praised Flux is in reality just a pub/sub system like many on the backend (and still it's a huge step forward: it admits JS systems are large and complex).

I'm just looking at a system design graph, preparing for a new job. There are about 30 nodes representing multiple services, databases, load balancers, and processes on the backend. There are two for the frontend: one says "JS" and the other "JSON". This is how most people think about the frontend; to be a great JavaScript developer, just don't be one of them.

3
darby2000 11 hours ago 7 replies      
Just adding my two cents. Wanna be a good JS dev? Go play around with a Lisp, like Clojure, for a month or two. Get comfortable with functional programming. Then come back to JS. So many people come from OOP to JS and have a bad time with it. JS is more like a Lisp with C syntax than it is a traditional OO language. Learning Clojure improved not only my JS abilities but my overall programming maturity, IMO. Additionally, I agree with the author about reading books and libraries; it's always good advice for any language.
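A concrete illustration of that "Lisp with C syntax" claim (a minimal sketch in plain ES5, written for this thread; the helper names are invented):

```javascript
// Functions are first-class values: build new functions from old ones,
// much as you would in a Lisp.
function compose(f, g) {
  return function (x) { return f(g(x)); };
}

// Closures carry private state -- the classic "let over lambda".
function makeCounter() {
  var count = 0;
  return function () { count += 1; return count; };
}

var inc = function (x) { return x + 1; };
var double = function (x) { return x * 2; };
var incThenDouble = compose(double, inc);

var next = makeCounter();
console.log(incThenDouble(3)); // 8
console.log(next(), next());   // 1 2
```

Nothing here needs a framework; it's the core language the comment is pointing at.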
4
mhd 12 hours ago 3 replies      
If you're already a decent enough developer and just need to get to grips with JavaScript, I'd say that most books listed aren't that useful. A decent enough online reference will get you going, and after that you'll have to consider that JS is a pretty large dung heap by now, where you probably won't get a lot of mileage out of inhaling dust from the crusty parts. Unless you're an enterprise developer tasked with maintaining your company's Dojo application.

If you're a new developer coming into JS from a more tabula rasa situation (possibly these days), I'd still say that most of those books are wasted, and you'd probably get better mileage out of SICP or the GoF book than most Ninja/21days/Dummies tomes.

The core of JS is small enough, and the rest is highly dependent on your task, scope and framework. No jQuery book will help you with your intranet extjs app or your state of the art fluxified React SPA. You'll have to wade through code to get there, preferably as much of your own as possible.

And the most important thing: Pick something and stick with it. No mid-project framework/build tool/library changes. Even "obsolete" tech still works better than falling into the Duke Nukem hole. So don't get nervous about still using "grunt" when all the cool kids are using "gapoodle".

5
murbard2 12 hours ago 1 reply      
Missing advice: learn to program well in a few well-designed programming languages. JavaScript was designed in 10 days, and the main reason it still exists today is incredibly strong path dependence. There's nothing wrong with wanting to become a great JavaScript developer, but one needs to become a great developer first, and JavaScript just isn't conducive to that.
6
sailfast 12 hours ago 0 replies      
This is a good collection of resources that I have found are very educational and useful. That said, for me personally it was extremely important to dig into some problems concretely rather than by reading. "OK I've read about how I should do this in the ideal - now go actually set up an Angular app with a full test suite pointing at a REST API." Or perhaps - "I've got a great idea for some code art - let's try it out."

Without the follow-through I found that I could keep up in a conversation but still had doubts and issues when it came to implementation. Doing it in anger, delivering a product, as many times as possible helps a good deal in moving toward greatness.

7
scelerat 1 hour ago 0 replies      
My breakthrough moment with JavaScript was reading Mark Jason Dominus' ''Higher Order Perl'', and realizing everything described there could be applied to JavaScript.

That and just building a lot of things with it.

8
serve_yay 1 hour ago 0 replies      
I don't think there's any other way than working with talented people. It shows you what they're better at than you are, and (maybe) what you're better at than they are. I did all this stuff and I thought I was pretty hot shit, until I joined the team I'm on now.

You have to find out what you don't know that you don't know, and it's really hard to do that on your own.

9
alkonaut 5 hours ago 0 replies      
I'd like to know how to learn to "accept" JS, coming from other languages.

No really.

How do I become a JS developer who doesn't whine all day long about all the little idiosyncrasies of the language, or the tediousness of writing tests for things my compiler should work out? Obviously people enjoy writing JavaScript, and at least some fraction of them must have been the kind of person who whined about the type systems in Java or C# being too weak.

If you were one of those developers, how did you let that go (did you?) when becoming a javascript developer?

10
danbruc 7 hours ago 0 replies      
"Become great in language X" seems the wrong goal to me. An enormous number of things, and at the same time the most important things, that make you a great software developer are completely language-agnostic. And I would argue they are harder to learn through the lens of a single language and software stack. You then still have to become a productive software developer who knows their libraries and tools and their quirks, but this seems secondary to me and easily compensated for by your favorite search engine. Or by example: knowing when to use a set as opposed to a list is more important than knowing in which namespace you can find the implementations for language X.
11
donatj 12 hours ago 0 replies      
I 100% endorse "read libraries". There really is a lot of wisdom buried in them. I read through MooTools back in the day, and I would go as far as to call the experience formative. Read frameworks you don't like and consider WHY they work the way they do.
12
rrss1122 12 hours ago 0 replies      
What prompted me to develop as a JavaScript developer was being put in charge of a major project that was mostly JS code. Up till then, I had done some JS work, using jQuery to add animations and interactions to a UI and some ajax. Doing this JS project exposed me to pretty much everything you can do with JS, and I learned it all pretty quickly so I could start contributing to the project.

I only got this project too because the dev previously in charge of the project left the company and I asked about it. It's another lesson to new developers: never let an opportunity pass you by.

13
humbleMouse 12 hours ago 0 replies      
I think it totally depends on the problem domain you are working with. There are so many different ways to use javascript it is overwhelming. There are also lots of ways to structure it. I feel like a very good understanding of a widely used OOP language such as Java is a good place to start before jumping into the wild west of javascript programming.
14
kriro 10 hours ago 0 replies      
I think that setting up a test environment is one of the bigger hurdles (it seemed more confusing for JS than for most other languages I'd used before). Starting with installing Node and "node myfile.js" is a pretty good way to get going (writing hello world, fizzbuzz, etc.). I struggled quite a bit sifting through all the options after that (e.g. npm + Grunt + Browserify to be able to require stuff, etc.). Maybe I'm imagining things, but the simple "getting started" was quite complex for JS. Or in other words, the first steps are pretty hard imo.

The inclusion of Backbone seems a bit odd. Isn't that also frontend-ish (like the pick-one-of Ember, React, Angular)? I'd rather substitute it for Express.

15
dj_doh 7 hours ago 0 replies      
Read a book or six top to bottom, mix it with video and blog posts focusing on concepts that are deemed complex or critical in that subject.

Build general purpose libraries, plugins, micro-framework or personal projects. Follow-up by validating your work with your community or sample audience. I have been taking the later route.

When it's done. Read some more and build some more. Put it on a repeat loop. Look for some variation as it brings different perspectives.

16
stared 11 hours ago 0 replies      
Maybe it's person-dependent, but I've learned most of my JS from MDN (https://developer.mozilla.org/en-US/docs/Web/JavaScript) _and_ collaborating with others (plus a lot of blog posts, code snippets, and in general reading code by good developers (side note: this is why I hate minification - I habitually read the code on webpages)).

For books, the main problem is that they become obsolete quickly (JS is a rapidly changing technology) and, except for the very basics, are somewhat opinionated and task-dependent.

17
z3t4 11 hours ago 0 replies      
To be good at something, you do not need to know things outside of the area. It might however be helpful if you need to go "outside the box". But to be an expert at the things in the box, you only have to know about the box ...

That said, before I started with Node.JS and understood its "patterns", JavaScript was just a necessary evil to get things to work. But after learning the module patterns of Node.JS, JavaScript started to get fun!! And the more ppl that understand "modules", the more fun it gets! So go learn about modules and even try making your own!

I guess Node.JS is now part of the JavaScript box.
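The "module patterns" mentioned above can be sketched without Node at all: a CommonJS module is essentially the classic IIFE module pattern made official (a minimal sketch; the counter example is invented for illustration):

```javascript
// The module pattern: an immediately-invoked function keeps internals
// private and returns a small public API. CommonJS modules in Node
// formalize the same idea via module.exports.
var counter = (function () {
  var count = 0; // private: invisible outside the closure

  return {
    increment: function () { count += 1; return count; },
    reset: function () { count = 0; }
  };
})();

console.log(counter.increment()); // 1
console.log(counter.increment()); // 2
console.log(counter.count);       // undefined -- state stays private
```

In a Node file you would instead assign the returned object to module.exports and require() it elsewhere.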

18
beat 11 hours ago 0 replies      
"A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects." Robert Heinlein, Time Enough for Love

I have a hard time thinking of "web programmer" as "polymath" and worrying that it's not specialized enough.

19
petercooper 11 hours ago 1 reply      
Fancy linking up http://javascriptweekly.com/ where it's mentioned? :-) Not everyone knows where it is. Thanks!
20
ahallock 8 hours ago 0 replies      
> Reading and understanding underscore will improve your functional programming skills.

No, underscore does it wrong (check https://www.youtube.com/watch?v=m3svKOdZijA). Check out some of the real functional libs, like Ramda and Pointfree Fantasy.
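For readers who haven't watched the linked talk: its core complaint is that underscore takes the data as the first argument, which blocks currying and composition. Here's a hand-rolled sketch of the data-last style that Ramda favors (curry2 and these wrappers are illustrative, not any library's actual API):

```javascript
// A tiny two-argument curry helper.
function curry2(f) {
  return function (a, b) {
    if (arguments.length < 2) {
      return function (b2) { return f(a, b2); };
    }
    return f(a, b);
  };
}

// Data-last wrappers: the function comes first, the list comes last.
var map = curry2(function (fn, list) { return list.map(fn); });
var filter = curry2(function (pred, list) { return list.filter(pred); });

// Partial application now reads naturally and composes well.
var doubleAll = map(function (x) { return x * 2; });
var evens = filter(function (x) { return x % 2 === 0; });

console.log(doubleAll(evens([1, 2, 3, 4]))); // [ 4, 8 ]
```

With data-first signatures (underscore's `_.map(list, fn)`), neither `doubleAll` nor `evens` falls out of partial application this cleanly.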

21
srpablo 11 hours ago 0 replies      
Tastes in books vary wildly between developers: there are many on there that I wouldn't say helped me as much as others, but to each their own!

With that in mind, I'd put a plug in for David Herman's _Effective JavaScript_. It goes beyond simply stating the mechanics of JavaScript into what pitfalls can be had, and uncommon but critical factors to consider when writing your code. The point about UTF-16 code points in strings alone makes it a valuable resource.

 [1]: http://effectivejs.com/

22
adam12 12 hours ago 0 replies      
I think he/she should have suggested that you actually work on some javascript projects that you are passionate about.

Also, it seems like a big jump to go from doing exercises to giving lectures.

23
landmark2 12 hours ago 0 replies      
tl;dr - to become a great javascript developer...learn javascript
24
fapjacks 6 hours ago 0 replies      
This is awful and wrong. Always Be Coding. That's how you become a great JS developer. Or great anything. Practice makes perfect. It's said for a reason.
25
Roboprog 10 hours ago 0 replies      
I mentioned this elsewhere, but I thought it would make a nice "top level" comment: Learn to use (and use) JSDoc in your JS code. Having comments that generate a standardized index of your code, as well as being used by an IDE, greatly helps the comprehension of the code when you come back to it a day or two later.
26
sjcrank 10 hours ago 0 replies      
In addition to these great points, I find it extremely helpful to spend some time reviewing the source of various open source JS frameworks.

You can learn so much from the patterns and techniques others are using that may not have been documented in the other listed resources.

27
javabank 7 hours ago 0 replies      
They straight up stole the Aphex Twin logo: https://twitter.com/ustunozgur
28
rilita 12 hours ago 2 replies      
tl;dr (my own, not the author's in the article itself):

- Read books

- Learn libraries ( author seems to like node.js and recommends libraries associated with that )

- Do exercises

- Learn how classes work in JS (Note: this is amusing to me, since JS does not have classes in the typical sense; they are simulated with prototypes and closures, often via library helpers)

- Learn what ES5, ES6, and ES7 are (There are good things here, but be aware that most of these features are not implemented in most browsers and will require shims and/or transpilers to even function. Be careful, as they may work in your browser but not others. Test!)

- Read JS blogs and watch JS educational videos

- Practice

It's an okay article. If you are clueless how to start learning seriously this should help. Some decent books and websites are mentioned by name.

I think the "every JS developer needs to learn XYZ" is a bit off though. This is one man's perspective.
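On the "JS does not have classes" point above: pre-ES6, a "class" is just a constructor function plus a shared prototype (a minimal sketch; the Point example is invented):

```javascript
// Constructor function: sets per-instance state.
function Point(x, y) {
  this.x = x;
  this.y = y;
}

// Methods live on the prototype and are shared by every instance.
Point.prototype.norm = function () {
  return Math.sqrt(this.x * this.x + this.y * this.y);
};

// "Inheritance" is just a longer prototype chain.
function Point3(x, y, z) {
  Point.call(this, x, y); // borrow the parent constructor
  this.z = z;
}
Point3.prototype = Object.create(Point.prototype);
Point3.prototype.constructor = Point3;

var p = new Point(3, 4);
console.log(p.norm());                             // 5
console.log(new Point3(1, 2, 3) instanceof Point); // true
```

Class libraries (and ES6 `class` syntax) are sugar over exactly this machinery.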

29
synthmeat 10 hours ago 0 replies      
I definitely do not consider myself a great JavaScript developer. Not even mediocre. Still, I've refactored code by some people who now give JavaScript lectures and made it at least an order of magnitude faster, so there's that.

There's one important thing I've already learned though and I'll do you a solid - be a total nazi to your code.[1]

At the start, it's easy to get impression that it's all loosey-goosey-everything-works kinda thing, and it actually is...at small scale! When you get to medium-sized thing, all hell starts to break loose, and I'm not talking just callback hell. You can avoid all that with modest amount of discipline.

I'm aware many people have qualms with JSLint, some even with JSHint. But it doesn't matter what you use as long as you keep consistency. Those two tools help you with that. If you can be disciplined without them, sure, go for it. Just reading through the possible configuration options for JSHint already made me consider many potential pitfalls I wouldn't have even thought of otherwise.

As far as learning goes, I definitely recommend learning as you go. To hell with academics: this is JavaScript, a language made in a few weeks[2]. You can develop an amazingly accurate feel for the language in spite of not knowing the rigorous abstractions. You're not sure what new actually does, and you're on a deadline? Make a note ("figure out new") and move along. When you manage to scrounge a few hours out of your busy week, run through those questions of yours across the plethora of amazing resources online, head over to #Node.js or ##javascript on freenode, ask, and ye shall receive.

In JavaScript, there are about a million ways one can make something work, an immense solution space. Are you sure your attack vector is good enough? Refactor aggressively!

[1] I wanted to go with "anal" instead of "nazi". But you get the idea.

[2] ...by some amazing dude though.

30
lgsilver 11 hours ago 0 replies      
Force yourself to lint your code. Having clean code is like having good handwriting: it makes the content of your projects easier to understand and drastically improves how you think about what you're building.
31
HaseebR7 10 hours ago 3 replies      
I've never learned JS properly, just dangerous half-knowledge from Stack Overflow answers and blogs.

Should I jump into ES6 directly, or learn ES5 and pick up ES6 when it is implemented across all browsers?
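One practical answer to this question: ES6 and ES5 aren't either/or, because ES6 source can be compiled down to ES5 for today's browsers (e.g. with Babel). A rough sketch of the correspondence (the ES5 below is hand-written, not actual transpiler output):

```javascript
// ES6 version (what you would write with a transpiler in the toolchain):
//   const square = (x) => x * x;
//   const total = [1, 2, 3].map(square).reduce((a, b) => a + b, 0);

// Roughly equivalent ES5 that runs in older browsers:
"use strict";
var square = function (x) { return x * x; };
var total = [1, 2, 3]
  .map(square)
  .reduce(function (a, b) { return a + b; }, 0);

console.log(total); // 14
```

So learning ES6 syntax and understanding the ES5 it desugars to go hand in hand.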

32
gauravgupta 9 hours ago 0 replies      
I personally like to follow the best (and frequently updated) tutorials on crowdsourced tutorial websites like Hackr.io for example. Just my 2 cents.
33
interdrift 12 hours ago 0 replies      
The 'great' developer has nothing to do with this description. It's not only about knowing/learning stuff, it's about coming up with smart stuff.
34
moron4hire 12 hours ago 7 replies      
Books are a waste of time. Most of them are garbage to begin with, and most of them are practically obsolete by the time they are published. Few have any lasting staying power.

Besides, nobody ordained the author to be an authority on the subject. He was just a guy who wrote a book. It's zero indication of the quality of the content.

Just write code. Practice, practice, practice.

35
clavalle 12 hours ago 1 reply      
This is all solid advice.

Is there a 'Genius' (née 'Rap.Genius')-type resource that annotates well-known open source code?

36
pmalynin 2 hours ago 0 replies      
1. Stop.

2. Switch to TypeScript

3. ???

4. Profit!!!

37
kelvin0 13 hours ago 2 replies      
Learn Dart :)
38
notNow 3 hours ago 0 replies      
How many books did this guy plug in his piece? I really lost count at 10 or so ...
39
isisanchalee 10 hours ago 0 replies      
I wish this post was for Ruby! I've already done most of these things for JS -_-
40
ExpiredLink 10 hours ago 1 reply      
Can you become a great developer in a crap language?
Ask HN: Where is it OK on the Net to say I'm a developer looking for work?
214 points by hoodoof  1 day ago   69 comments top 19
1
duckspeaker 1 day ago 3 replies      
Whether you're looking for freelance or full-time, there's a site that combines the two monthly HN threads in a convenient format: http://hnhiring.me/

In general I find the whole handwaving "I'm looking for work" approach not very effective. You really need to actively contact companies/potential clients.

With that said, here's a list of resources I resort to when looking for a next thing:

 freelance
   remote
     http://hnhiring.me/
     https://github.com/lukasz-madon/awesome-remote-job/
     http://www.lancelist.com/
     https://gun.io/dash/
     http://www.10xmanagement.com/
     https://theworkmob.com/
     http://workingnotworking.com
     https://authenticjobs.com
     https://www.upwork.com
     http://www.happyfuncorp.com
   on-site
     http://getlambda.com/
 full-time
   remote
     https://weworkremotely.com/
     https://careers.stackoverflow.com/jobs/remote
     https://www.wfh.io/categories/1/jobs
     https://remotecoder.io/
     http://www.workingnomads.co/jobs
   on-site
     https://angel.co/jobs
     https://hired.com/
     https://jobs.github.com/positions
     https://www.themuse.com/
     http://startupjob.me/
     http://www.insidestartups.org/

2
perlgeek 1 day ago 0 replies      
http://careers.stackoverflow.com/ and linkedin come to mind.
3
glogla 1 day ago 0 replies      
Now if only there were something similar for the EU. Getting a work visa in the US is not exactly simple and easy.
4
blfr 1 day ago 4 replies      
Related: if you had a personal website/blog, would it be a good idea to put "I'm looking for a gig now" on it?
5
adamnemecek 1 day ago 3 replies      
there's a monthly thread here 'who wants to be hired'. idk if that's ideal for your situation.

https://news.ycombinator.com/item?id=9639011

6
drincruz 1 day ago 1 reply      
I've had good experience with http://angel.co though you have to be careful since there are recruiting agencies that scour that site as well. In NYC there is also http://interviewjet.com, http://hired.com, and http://underdog.io. These all pretty much have a similar model where you're highlighted alongside a handful of other engineers and sent to a whole lot of companies at the same time.
7
a3n 1 day ago 2 replies      
I've been at my current job for three years. It's a good job. A recruiter cold-called me after seeing me on LinkedIn.

If you have the common worry that your current employer will see "I want a new job!!!!" on your LinkedIn, then say something like "I'm always open to interesting opportunities."

8
xsolid 1 day ago 0 replies      
If you are looking for a steady stream of high quality freelance gigs and do not want to worry about billing, hunting for clients and other headaches you encounter as a freelancer, then you should definitely check out http://toptal.com

The best part of it is that it has a sprawling community of freelancers all over the globe who are happy to help whenever you need it and if by any chance you happen to end up having holidays somewhere, there is a huge chance that there are some Toptalers right around the corner happy to take you out for drinks and give you some insider tips on the location.

Full disclaimer: I work with Toptal at the moment and am pretty happy to be there :)

9
larrydag 1 day ago 0 replies      
For those looking at Data Science and Analytics, I created an app that looks at the DataTau Who's Hiring thread.

https://larrydag.shinyapps.io/dthiring/

10
lazyjones 1 day ago 0 replies      
I wouldn't bother "advertising" unless you're looking at freelancing, it might come across as desperate.

There are some places where recruiters will constantly look for candidates: LinkedIn, XING (not sure if it's worthwhile outside Germany), Github, maybe SO (not sure), possibly FB. Having a polished appearance there will yield plenty of contacts even without indication that you're currently looking for work (assuming hard skills that are in demand somewhere).

11
suttree 1 day ago 0 replies      
More long-term, brand building, augment your resume style:

https://www.somewhere.com

disclaimer, I'm the founder ofc.

12
ganzuul 1 day ago 0 replies      
https://www.bountysource.com/ is an interesting option.
13
snowysocial 1 day ago 2 replies      
I've been seeing a trend on twitter, where people are doing just this. Might be worth just tweeting a few of the 'popular' people in your community asking if they know of any jobs. You never know - they may just give you an RT that grabs someone's attention.

Good luck.

14
sid6376 1 day ago 0 replies      
In case anyone is interested in a full-time role at Booking.com, based in Amsterdam: my company is currently hiring for a variety of roles. I would be happy to talk to you. See email in my profile.
15
harunurhan 1 day ago 0 replies      
In addition to the others, http://remoteok.io/ is good for remote jobs.
16
w1ntermute 1 day ago 0 replies      
Check out Hired.com. One of its nice features is that it'll hide the fact that you've set up a profile from your current employer.
17
romanu_zg 1 day ago 1 reply      
Did you try toptal.com? Toptal offers heaps of projects, and every single client is vetted by developers that work for Toptal before the job is posted. Basically, they guarantee continuous quality work for remote developers. Changed my life!
18
tejay 1 day ago 0 replies      
You're welcome at gun.io any time. :)
19
pknerd 1 day ago 0 replies      
Twitter/Facebook.
Bling the $ of jQuery without the jQuery github.com
213 points by kolodny  2 days ago   134 comments top 23
1
ceronman 2 days ago 6 replies      
Yes, you can re-implement jQuery in 10 lines of code... if you only support 0.1% of the functionality of jQuery.

jQuery was a game changer for web client side development. Thousands of work hours have been spent on it. It had an especially important role in dealing with all the inconsistencies between browsers. If you don't use all the features, nowadays you can just create a custom build with the stuff you want.

If you don't need jQuery, maybe because you only care about modern browsers, or you use another library, or you don't mind a bit of extra boilerplate for DOM manipulation, that's perfectly fine. But these kinds of posts seem like mockery of the developers of a library that has provided extreme value for thousands of developers over many years.

2
addicted44 2 days ago 3 replies      
If you are using $ signs all over your code, they'd better refer to jquery and nothing else.

jQuery has, for better or worse, become such an integral part of web development, the $ namespace is pretty much owned by it at this point.

If you do want to use it for not-jquery, it should be something that is drastically different so someone new trying to patch a bug on prod doesn't run around in circles wondering why their code that should be working is not.

The bling.js project, by repurposing $ into something that looks like it might be jQuery but isn't, is just a bad idea.

3
pluma 2 days ago 2 replies      
This does three things:

1. Alias document.querySelectorAll to $.
2. Alias node.on to node.addEventListener.
3. Make NodeList a sub-type of Array (so you can use forEach etc. on node lists as you would on arrays).

The $ of jQuery also does a few other things, e.g. create HTML elements from text ($('<div class="foo"/>') creates a div element with CSS class "foo") and wrap elements in a node list ($(document.body) creates a node list containing document.body).
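To make those three moves concrete, here is a rough, hypothetical sketch of the same idea - not bling.js's actual source, and using a tiny stand-in for DOM nodes so it runs outside a browser:

```javascript
// Stand-in for a DOM node, just enough to show the pattern.
class FakeNode {
  constructor() { this.handlers = {}; }
  addEventListener(type, fn) {
    (this.handlers[type] = this.handlers[type] || []).push(fn);
  }
}

// 2. Alias .on to addEventListener.
FakeNode.prototype.on = FakeNode.prototype.addEventListener;

// 3. Give "node lists" Array behaviour plus a broadcast .on.
function toList(nodes) {
  const list = Array.from(nodes); // a real Array: forEach, map, etc.
  list.on = (type, fn) => list.forEach(n => n.on(type, fn));
  return list;
}

// 1. $ wraps a selector lookup (document.querySelectorAll in a browser).
const fakeDom = { '.foo': [new FakeNode(), new FakeNode()] };
const $ = selector => toList(fakeDom[selector] || []);

$('.foo').on('click', () => console.log('clicked'));
```

In a real browser the same shape falls out of binding `document.querySelectorAll`, patching `Node.prototype`, and pointing `NodeList.prototype` at `Array.prototype`.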

4
yakshaving_jgt 2 days ago 2 replies      
jQuery is not needed anymore with modern browsers.

Bull. Shit.

This document proves why, and is written by the same guy that is now pushing this "you don't want jQuery!" stuff.

https://docs.google.com/document/d/1LPaPA30bLUB_publLIMF0Rlh...

5
vortico 2 days ago 2 replies      
This is all I personally ever need.

 function $(q) { return document.querySelector(q) }

6
goyote8 2 days ago 6 replies      
Can someone explain the new trend to not use jQuery?
7
shekel 2 days ago 0 replies      
My worry about using this is you'd better be damn sure you never actually use jQuery or libraries that integrate with jQuery if you're going to use it. I think this is a cool demonstration of the difference between some of the "convenience" functions of jQuery many are familiar with and the MEAT of jQuery which is the functionality and cross-browser issue smoothing.
8
eloisant 2 days ago 1 reply      
I don't really know what to think about that.

On one hand it may be a nice way to show to people that what they use in jQuery can be written in a few lines of pure JS.

On the other hand it encourages people who could use pure JS (because they only use a small portion of jQuery) to stick with some non-standard helper syntax.

9
XCSme 2 days ago 1 reply      
Well, the thing with jQuery is that if you use a CDN to get it from, there is a very high chance that the client already has the library cached, so bandwidth is actually not that much of a problem now.
10
Kiro 2 days ago 2 replies      
> [].slice.call( document.querySelectorAll('.foo'), function(){

When do you need to do that?
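For context, an illustration of why that pattern exists (with a plain array-like object standing in for a NodeList, which isn't available outside a browser): `[].slice.call(x)` copies any array-like `x` into a real Array, which is what you historically needed to get `forEach`/`map` on a `querySelectorAll` result.

```javascript
// A NodeList is array-like: indexed entries plus a length, but (in older
// engines) none of the Array methods. Simulated with a plain object here:
const arrayLike = { 0: 'a', 1: 'b', 2: 'c', length: 3 };

// Array.prototype.slice is generic - it works on any array-like.
const real = [].slice.call(arrayLike);

console.log(real.map(s => s.toUpperCase())); // [ 'A', 'B', 'C' ]

// Modern equivalents: Array.from(arrayLike), or [...iterable].
```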

11
nateabele 2 days ago 0 replies      
Here's an idea for a jQuery replacement: a website that allows you to select what browsers/versions you want to support, then it generates you a polyfill.

Then aliases `$` as a selector function.

12
franciscop 1 day ago 0 replies      
13
EugeneOZ 2 days ago 0 replies      
When some part of jQuery can be replaced by native APIs so easily, it only shows how well jQuery was designed.
14
awalGarg 2 days ago 0 replies      
I wrote a similar small dom module[0] which also fits in a gist with docs and tests :P --shameless-self-plug

[0] https://gist.github.com/awalGarg/8a0e18c6fe87456d885f

15
mattdlondon 2 days ago 0 replies      
Shameless self promotion for something I threw together ages ago and may be implemented in a better way these days. https://github.com/matt1/doc.js

So it uses "doc" instead of "$" but feel free to fork :-)

16
etjossem 2 days ago 1 reply      
If you quite literally want "what jQuery uses to select things" and nothing else, you're really looking for Sizzle.

http://sizzlejs.com/

17
inglor 2 days ago 1 reply      
Note that work is being done to give `NodeList`s the `Symbol.iterator` property, so in new browsers it's trivial to use the native DOM with `for(let x of nodeList)` loops.

This will make our lives easier :)
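A rough sketch of the mechanism (again with a plain array-like standing in for a NodeList, since none exists outside a browser):

```javascript
// Any object with indexed entries and a length can become for...of-able
// by borrowing the Array iterator, which is deliberately generic - the
// same trick the NodeList proposal applies natively.
const fakeNodeList = { 0: 'div', 1: 'span', length: 2 };
fakeNodeList[Symbol.iterator] = Array.prototype[Symbol.iterator];

const seen = [];
for (const tag of fakeNodeList) seen.push(tag);
// seen is now ['div', 'span']
```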

18
nickysielicki 2 days ago 2 replies      
I use jquery because I use bootstrap and it requires it for some features I use.

I wish that someone would write a drop-in javascript replacement for jquery in the context of bootstrap.

19
joesb 2 days ago 1 reply      
Nice. So where is my $.text(), $.css(), $.html(), $.val(), $.first(), $.closest(), etc.?
20
omegote 2 days ago 3 replies      
Tries to play it cool by wrecking jQuery? Check.
No semicolons? Check.
Put together by Paul Irish? Check.

Yep, this submission is Hipster 1.0 Certified.

21
fleitz 2 days ago 0 replies      
jQuery the useful parts.
22
yakshaving_jgt 2 days ago 0 replies      
Let's replace non-standard JavaScript syntax-sugar with more non-standard JavaScript syntax-sugar. Except we don't like that the first non-standard thing is so terse, so let's improve it with a more verbose notation. We don't want it to be too verbose though, so let's drop the semicolons.

This is definitely progress. I'm quite convinced that the users of our applications will feel the difference, and they'll love us for it.

Wow.

23
vsaiz 2 days ago 0 replies      
Funny! I had the same idea a few months ago, with 1 extra feature: .off binding to removeEventListener ;)

https://gist.github.com/vectorsize/feda8ac1ebc889c33b6f

not as tested as yours

Becoming a contractor programmer in the UK github.com
208 points by medwezys  1 day ago   165 comments top 30
1
monkeyprojects 1 day ago 4 replies      
As a contractor of many years standing (I started contracting when the internet appeared in 1994) I started reading the article hoping it was correct and finished it with my fingers covering my eyes in fear. Sadly I don't have time to correct many of the misconceptions and total inaccuracies it contains.

It's a shame really, as good articles are hard to find, and I'm sure many Americans would find the differences between the UK and US markets very interesting...

However if you really want to be a contractor https://www.ipse.co.uk/advice/articles/starting-out has a lot of advice on starting out.

http://www.contractoruk.com/first_timers/ also has a lot of advice although as a new starter I would avoid the general part of their forum...

And there is a reason why people use limited companies. You can work as a self employed person but many clients stop when HMRC come knocking while agencies have been legally barred from employing people as self employed since the 1970s...

2
ed_blackburn 17 hours ago 3 replies      
What this does not touch on is: be prepared to be a grease monkey, roll your sleeves up and get your hands dirty, and most importantly learn how to make recommendations but not take it personally if they're ignored for seemingly irrational reasons. Organisational dysfunction is the norm, not the exception.

Broadly speaking, I've seen most of my work fall into these categories:

a) Help, we need someone competent to aid us in a murky project
b) We are a dysfunctional organisation, who require transient developers to put up with their modus operandi.
c) We need your experience and expertise for a gap in our project

I've found (c) is best but (b) pays best, though it can be stressful if you're passionate about quality, engineering practices or process, and (a) is often relatively short term but can garner kudos and create better opportunities.

Most companies don't hire contractors because they're doing swimmingly. Often it's because they have some degree of dysfunction. For example, large institutions in the City regularly operate as a parody of the Mythical Man Month. Expect Waterfall, PMO, silos of BA, Dev, QA; UAT (manual), cookie-cutter templates for everything. Expect most business interaction to be via a PM and scrums to be lengthy, tortuous ordeals. (This is why companies like ThoughtWorks do so well and why I expect some serious disruption in the coming years from startups targeting City companies.)

Expect people to ask you your advice and for you to mentor less experienced developers. Do not expect your advice to be implemented, or rather expect it to be watered down with compromise by non-technical councils.

I really like contracting. I enjoy the flexibility, variety and the challenges. I enjoy the people and skills I learn and now my network has expanded and I have earned a reasonable reputation I enjoy the better projects.

I second the sentiment about joining IPSE and hiring a decent accountant. Don't worry about their portals or how shabby a website may look, pick them based on their competency.

Bite the bullet. Go for it!

3
jackgavigan 1 day ago 4 replies      
> The fixed rate VAT does not let you reclaim VAT, but you pay a lower rate than you charge your clients. So if you're a developer contractor you'll be adding 20% VAT to your client invoices, but only pay 14.5% to HMRC, keeping the remaining 5.5% to the business.

This is not correct. The way the flat rate VAT scheme works is you add VAT to your bill, then pay 14.5% of your "flat rate turnover" to HMRC. Flat rate turnover includes the VAT.

For example, let's say you did £100 worth of work for a client. You invoice them for £100 + 20% VAT = £120 in total. You must then pay 14.5% of £120 to HMRC (i.e. £17.40).
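That arithmetic, as a sketch (using the rates quoted in this thread - 20% VAT, 14.5% flat rate for IT contractors at the time; check current HMRC figures before relying on them):

```javascript
// Flat Rate Scheme: charge full VAT, pay the flat rate on GROSS turnover.
function flatRateVat(netInvoice, vatRate = 0.20, flatRate = 0.145) {
  const gross = netInvoice * (1 + vatRate);       // what the client pays
  const owedToHmrc = gross * flatRate;            // flat rate on the gross
  const kept = netInvoice * vatRate - owedToHmrc; // VAT charged minus VAT paid
  return { gross, owedToHmrc, kept };
}

const r = flatRateVat(100);
// r.gross === 120, r.owedToHmrc ≈ 17.40, r.kept ≈ 2.60 -
// noticeably less than the "5.5% of net" the parent comment corrects.
```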

4
rossriley 1 day ago 3 replies      
In terms of the recruiters section, whilst I appreciate you may not have had a good experience, this is probably the way a lot of people get contracting work, at least until you have a fairly big network of contacts.

I'd be more interested in knowing what agencies do have a good reputation / good developer experience along with the note to beware of the cowboys.

5
Nursie 1 day ago 6 replies      
Why bother with a third party company registration service? You can do it yourself pretty easily through companies house.

Do companiesmadesimple have any value-add?

While I do share your cynicism about recruiters, I have also got most of my contracts through them. There are bad recruiters (vague job descriptions, never call you back) and there are good ones (We need someone here, you fit the bill, can we arrange a time to talk to my client?).

Learning to swallow your distaste and listen to them as if they were worthwhile human beings is a useful skill. Some (few) of them actually are.

(--edit-- good guide in general! I don't just want to harp on the negatives!)

6
gadders 1 day ago 1 reply      
I'd also mention joining IPSE (http://www.ipse.co.uk/) which a is trade organisation for freelancers. They provide various benefits including IR35 insurance that covers investigations.

Freeagent is a good online accounts package as well, but you need to find an accountant that uses it.

7
hunglee2 1 day ago 1 reply      
Good to see recruiters getting some love in the comments here. If there remains a place for 3rd party recruiters, it's in the contract market. A few reasons:

Payments processing - a lot of employers will not want to deal with processing payments for contractors. Indeed, it's often the main reason why they might go for contract vs permanent resource in the first place. I've seen situations where contractor and employer have discovered each other, only for the employer to then ask the contractor to 'go through agency X'. No one likes admin load and we'd all get rid of it if we could.

Reduce assessment load - job opportunities cycle much more frequently on the contract market - typically 3-6 months. This means a lot more time involved in opportunity sourcing / vetting, potentially a hugely time consuming exercise. A good recruiter will be able to filter these opportunities for you, and only get you the most suitable gigs

Reduce downtime - going without agencies entirely means relying on your own market gravity as a developer of renown to secure job opportunities. This is do-able for high profile developers, of course, especially those who live in metropolitan areas and are prominent on the open source / community / events scene. However, if you work on proprietary software, have heavy family obligations and live outside of a big city, you're probably going to find agents very useful indeed.

Salary / Rate negotiations - they are going to take their 15-20%. But they may end up earning you more by negotiating hard with the end employer. Certainly an inexperienced contractor is at risk of being exploited, but that's true whatever type of contract you sign. A good relationship with a trusted agent can really help you make more on your rate, especially if you are not naturally comfortable negotiating.

And I say all this as a maker of a tech hiring platform that doesn't allow 3rd party recruiters on it. They have their place - just a smaller one than they currently occupy.

Great article in all other areas

8
kybernetyk 1 day ago 1 reply      
>A service like www.companiesmadesimple.com (aff.) will make the incorporation process easy. It usually takes up to three business hours.

Yeah, better do it yourself. I'm from Germany and I set up a LTD directly with Companies House myself. It's really (I mean really really) straight forward and they even accept PayPal to pay the 15 GBP fee. It took me ~20 minutes and the company was incorporated the next day.

Those formation companies usually just are an unnecessary middleman.

9
teknologist 15 hours ago 0 replies      
Something called entrepreneurs' relief is also nice for those with ltds or tech startups in the UK. It allows you to save massively on CG tax (10% instead of the normal rate of 18% or 28%) when selling assets or liquidating the business.

https://www.gov.uk/entrepreneurs-relief

10
Wintamute 1 day ago 0 replies      
Uh oh, if there's one thing us Brits like to do it's wrangle over the finer points of tax, employment and contractual law. This thread is going to get long, involved and possibly slightly testy :)
11
celticninja 1 day ago 0 replies      
One thing I would say is that I have had good fortune with recruiters; providing they are decent, they can ensure zero downtime between contracts, and if financial stability is a concern then this should not be overlooked. One thing I have found is that finding a good recruiter and sticking with them has been more useful than calling a handful of agencies and using the shotgun approach.

I would note that none have ever told me the name of a client prior to me accepting to be represented by them, and I have been introduced to a client company who then asked me to work directly for them and to bypass the recruiter (and any fees the client company would be paying to them). I refused as it was a single contract for 6 months and burning bridges with anyone isn't worth it for that sort of duration. Plus the recruiter had got me the interview within 7 days of me getting in touch with them.

YMMV - just my 2 cents.

12
laverick 1 day ago 1 reply      
Finding a good accountant isn't easy, but I would recommend looking for something better than Crunch. They were the default option "just pick Crunch", but I had to do a lot of tax research myself and recommend favorable approaches to their accountants on certain non-standard things. I also felt their first line of support is spread very thin and they put a lot of bookkeeping work on me. Then a lot of services cost extra each month, making their total pricing uncompetitive. In the end I got everything done and they did their best to rectify issues, but it was a big burden on my time and significant amounts of stress.

Maybe I would have had this issue with any accountants as a first time contractor, but I know it could have been better and I would have happily paid more to make it so. Do yourself a favor and pay extra to get an accountant that handles more things on your behalf (bookkeeping, VAT filing, etc) preferably with a simple non-proprietary interface.

I haven't tried these two services but if I go back to a UK accountant they'd be at the top of my list.

http://www.3wisebears.co.uk/ (contractor focused)
http://www.proactive.uk.net (more startup focused, very helpful via phone)

13
martinald 1 day ago 1 reply      
Do not ever be a sole trader. It means you are personally liable for anything that goes wrong.

Say you end up with a dreadful contract and everything goes wrong and the client sues. If you're a ltd company you can dissolve the business and pay them out of whatever assets your company has.

If you're a sole trader you're liable for all the costs. Until you're personally bankrupt. Sure, there may be clauses in the contract etc, but if someone is mean enough they can make your life very difficult and they will exploit the fact you are a sole trader.

14
kifler 1 day ago 4 replies      
This looks incredibly helpful - I wish someone would do one for the US/Canada
15
kaolinite 1 day ago 1 reply      
I recently started working as a contractor (just for a few months to bring some cash in so I can continue to work on my business).

As I have a business already, I was considering doing the contracting under the business to reduce liability. Does anyone know whether this will increase the amount of tax I have to pay? I figured that I would have to pay corporation tax on any money I bring into the company as well as personal tax on any money that I pay myself as a salary. Is that the case?

16
asherkin 1 day ago 0 replies      
> As far as I understand, it is important to have a contract that allows you to:

> [...]

> If the client insists on including the clauses above, be ready to move on.

These lines appear to contradict themselves, am I missing something?

17
tkyjonathan 1 day ago 1 reply      
You guys need to learn some accounting. Don't be afraid of book-keeping and be keen on any rule that helps you save on taxes. Overall, a limited company can save you around 20% on taxes over a salaried worker. That alone makes it difficult to go back to full-time employment.
18
Keats 1 day ago 0 replies      
I read that book http://www.amazon.co.uk/Contractors-Handbook-Expert-Guide-Fr... when I started contracting last year and thought it covered quite a bit.

Another thing I'd recommend is finding an accountant that uses FreeAgent or something similar rather than their homegrown system. Also, If you use an accountant they will do the company registration for you

19
boothead 1 day ago 5 replies      
Good advice, especially the bit about Crunch. First time I contracted I let my accountant talk me into using their horrible spreadsheet. I couldn't make myself use it, it was that bad, and my accounts were a mess. Now my accountant (Nimble Jack) bundles FreeAgent (which when I looked into it was better than Crunch, and has an API) and keeping on top of things is much easier.

One additional thing: If you're married make sure your spouse is a director of the company as well (especially if they're not working). You can be hugely tax efficient in this manner and can extract nearly £80k per year from the company with no personal tax to pay (there will still be corporation tax to pay on profits).

20
new299 1 day ago 3 replies      
It's probably worth noting that you only need to register for VAT if you're expecting to make > £82K in VATable sales per year.

If your contracting is mostly remote and outside the UK, you don't need to register, or charge VAT (as I understand it).

21
kenshiro_o 13 hours ago 0 replies      
Very good guide. I am thinking of becoming a contractor but had no idea you needed so much saved up!

Looks like I am gonna have to scrap my lavish summer holiday plans to put money aside...

22
ticksoft 1 day ago 3 replies      
I notice that business bank accounts are always mentioned in these sorts of lists. To me they seem like an extra overhead for no reason. You essentially have to ask a bank's permission and pay them just so they can accept your payments? So weird.

I could understand it if you had a business with employees and you sold products on a daily basis with loads of transactions, or if you plan to get into debt... but for invoicing someone every month? Personal account seems fine, and I'm sure people have several of them already.

23
flog 21 hours ago 0 replies      
24
zimpenfish 1 day ago 3 replies      
My advice would be to use an umbrella company and let them deal with all the tax hassles - they have an entire staff purely for this. You will take home slightly less (no opportunity for tax "optimisation" but I disagree with that on principle anyway) but then also have no exposure to HMRC coming after you 6 years later saying "Where's our £10,000?" (has happened to several people I know.)

(edit for spelling)

25
stefek99 1 day ago 1 reply      
I would do all the formalities after landing the first contract.

It takes like 15 minutes online to establish a limited company.

And when you buy insurance - how do you know which one? My current contract requires me to have £10 million employer's liability insurance (I don't hire anyone) but I have to have it anyway.

I guess I'll add some comments in this thread... (if time allows)

26
antouank 14 hours ago 0 replies      
What do you think is the percentage of contract dev jobs? (vs permanent)
27
collyw 13 hours ago 0 replies      
Anyone know if there is much of a contractor market in Spain?
28
thruflo 1 day ago 0 replies      
Tax.

You invoice £4k. It gets paid into your bank. Congratulations. You just earned £3k.

29
M8 1 day ago 2 replies      
The salary ceiling is very low in the UK.
30
ForHackernews 1 day ago 2 replies      
I've heard that in the UK, it's much more advantageous to work as a contractor than a full-time employee. Is there any truth to that? What advice would HN offer to a US-based developer looking to move to the UK?
       cached 18 June 2015 02:11:04 GMT