hacker news with inline top comments - 25 Mar 2016 - Best
1
NPM and Left-Pad: Have We Forgotten How to Program? haneycodes.net
1687 points by df07  1 day ago   839 comments top 175
1
runin2k1 1 day ago 25 replies      
Holy moly-- is-positive-integer/index.js:

 var passAll = require('101/pass-all')
 var isPositive = require('is-positive')
 var isInteger = require('is-integer')
 module.exports = passAll(isPositive, isInteger)
I retract my previous statements that Javascript programmers are going down the same enterprise-y mess that Java programmers went down a decade ago.

They've already taken it to an entirely different level of insanity.
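
For scale, an inline, dependency-free equivalent is a few lines. A minimal sketch, assuming "positive integer" means a finite whole number greater than zero:

 function isPositiveInteger(n) {
   // reject non-numbers (and NaN/Infinity), then check integrality and sign
   return typeof n === 'number' && isFinite(n) && Math.floor(n) === n && n > 0;
 }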

2
atjoslin 1 day ago 13 replies      
Counter-argument:

A good micro-module removes complexity. It has one simple purpose, is tested, and you can read the code yourself in less than 30 seconds to know what's happening.

Take left-pad, for example. Super simple function, 1 minute to write, right? Yes.

But check out this PR that fixes an edge case: https://github.com/azer/left-pad/pull/1

The fact of the matter is: every line of code I write myself is a commitment: more to keep in mind, more to test, more to worry about.

If I can read left-pad's code in 30 seconds, know it's more likely to handle edge cases, and not have to write it myself, I'm happy.
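
To illustrate the kind of edge case such PRs catch (illustrative only, not the actual fix in that PR): a quick hand-rolled version breaks as soon as the string is already wider than the target width.

 function naiveLeftPad(str, len) {
   return ' '.repeat(len - str.length) + str;
 }
 naiveLeftPad('hello', 3); // throws RangeError: negative repeat count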

The fault in this left-pad drama is not "people using micro-modules". The fault is in npm itself: all of this drama happened only because npm is mutable. We should focus on fixing that.

3
nly 1 day ago 5 replies      
Nobody has forgotten. These people never knew to begin with.

NPM/JS has subsumed the class of programmer who would have previously felt at home inside PHP's batteries-included ecosystem. Before that, a similar set of devs would have felt at home with Visual Basic. Seriously, go visit the comments section on archived copies of the PHP documentation. You'll find code of a similar nature. If PHP had had a module system 10+ years ago you would have seen this phenomenon then. Instead it was copy and paste.

This isn't elitism, it's just the way it is. The cost of a low barrier to entry into a software ecosystem is taking in those who don't yet have software engineering experience.

Nobody should be surprised that NPM, which I believe has more packages than any other platform, is 90% garbage. There are only so many problems to solve and so few who can solve them well, in any language. Put 100 programmers in a room, each with 10 years experience, and you'll be lucky to find 1 who has written a good library. Writing libraries is really hard.

4
thkim 7 minutes ago 0 replies      
The core issue here is that there is no standard library included with Javascript. This happened because Javascript had no authoritative implementation when it first began. The next ECMA standard should require some batteries to be packed in, to avoid this micro-module hell.
5
Wintamute 1 day ago 1 reply      
Going down the "lots of tiny modules" route is about a few things:

a) No standard lib in JS

b) JS is delivered over the internet to web pages in a time-sensitive manner ... so we don't want to bundle huge "do everything" libs. Sometimes it's convenient to just grab a tiny module that does one thing well. No other platform has the same constraint

c) Npm makes it really easy to publish/consume modules

d) And because of c) the community is going "all in" with the approach. It's a sort of experiment. I think that's cool ... if the benefits can be reaped, while the pitfalls understood and avoided then JS development will be in an interesting and unique place. Problems like today can help because they highlight the issues, and the community can optimise to avoid them.

Everyone likes to bash the JS community around, we know that. And this sort of snafu gives a good opportunity. But there are many JS developers working happily every day with their lots of tiny modules and being hugely productive. These are diverse people from varied technical backgrounds getting stuff done. We're investigating an approach and seeing how far we can take it.

We don't use tiny modules because we're lazy or can't program, we use them because we're interested in a grand experiment of distributing coding effort across the community.

I can't necessarily defend some of the micro modules being cited as ridiculous in this thread, but you can't judge an entire approach by the most extreme examples.

6
NathanKP 1 day ago 8 replies      
I don't see anything wrong with using a pre-made left pad function. Why waste time and lines of code implementing something so trivial when there is already a solution available?

However, I agree it is ridiculous to have a dedicated module for that one function. For most nontrivial projects I just include lodash, which contains tons and tons of handy utility functions that save time and provide efficient, fast implementations of solutions for common tasks.

Lodash includes `padStart` by the way (https://lodash.com/docs#padStart).
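
For instance (usage per the lodash docs):

 var _ = require('lodash');
 _.padStart('5', 3, '0');  // => '005'
 _.padStart('abc', 6);     // => '   abc'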

7
adambard 1 day ago 4 replies      
I think it speaks to just how lacking the baseline Javascript standard library is. The libraries that come with node help, but all of this stuff seems like it should be built-in, or at least available in some sort of prelude-like standard addon library. The lack of either leads to all these (apparently ephemeral) dependencies for really simple functions like these.

That said, I work with Java, Clojure and Python mostly so I may be more used to having a huge standard library to lean on than is typical.

8
mastazi 1 day ago 5 replies      
Usually, dependency hell doesn't bite you, until it does. Try to rebuild that thousand-dependency app three years from now and you'll see ;-)

I recently had to rebuild a large RoR app from circa 2011 and it took me longer to solve dependency issues than to familiarise myself with the code base.

Excessive dependencies are a huge anti-pattern and, in our respective developer communities, we should try to circulate the idea that, while it's silly to reinvent the wheel, it's even worse to add unnecessary dependencies.

9
larkinrichards 1 day ago 0 replies      
I wanted to write this post after the left-pad debacle but I've been beaten to it.

I think we got to this state because everyone was optimizing js code for load time-- include only what you need, use closure compiler when it matters, etc. For front end development, this makes perfect sense.

Somewhere along the line, front end developers forgot about closure compiler, decided lodash was too big, and decided to do manual tree shaking by breaking code into modules. The close contact between nodejs and front end javascript resulted in this silly idea migrating out of front-end land and into back-end land.

Long time developers easily recognize the stupidity of this, but since they don't typically work in nodejs projects they weren't around to prevent it from happening.

New developers: listen to your elders. Don't get all defensive about how this promised land of function-as-a-module is hyper-efficient and the be-all end-all of programming efficiency. It's not. Often, you already know you're handling a string, you don't need to vary the padding character, and you know how many characters to pad. Write a for loop; it's easy.
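
For instance, a sketch of the for-loop version for exactly that case (a known string, space padding, known width):

 function leftPad(str, len) {
   var out = String(str);
   // prepend one space per iteration until the target length is reached
   for (var i = out.length; i < len; i++) {
     out = ' ' + out;
   }
   return out;
 }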

Note that this is exactly the sort of question I ask in coding interviews: I expect a candidate to demonstrate their ability to solve a simple problem in a simple manner; I'm not going to ask for a binary search. Separately, I'll ask a candidate to break down a bigger problem into smaller problems. In my experience, a good programmer is someone who finds simple solutions to complex problems.

Note: rails is similarly pushing back against developers that have too many dependencies:

https://www.mikeperham.com/2016/02/09/kill-your-dependencies...

10
darawk 1 day ago 4 replies      
Everything in this article is categorically wrong and antithetical to every principle of good programming ever articulated. The only problem here, as others have already noted, is that NPM allows people to delete published packages.

Small modules are not evidence of a problem, and they certainly aren't evidence of an inability to implement these things on the part of the people depending on them. Why would I implement left-pad myself when there is already a well-tested implementation that I can install? Building up an ecosystem of tiny abstractions, bit by bit, iteratively and evolutionarily, is how we get robust, well-designed complex systems. We don't get there by everyone reinventing the left-pad function to sate some misplaced appetite for self-reliance.

The author seems to make some arbitrary distinction between things that are 'large enough' to be packaged and 'pure functions' which are 'too small' to be their own modules, and I just couldn't disagree more. Tiny, pure functions are ideal modules. They facilitate the greatest degree of re-use, most clearly articulate what they ought to be used for, and stateless things are, in general, more composable than stateful things. There is no better unit of re-use than a tiny, pure function.

11
panic 1 day ago 2 replies      
> Functions are too small to make into a package and dependency. Pure functions don't have cohesion; they are random snippets of code and nothing more. Who really wants a cosine dependency? We'd all really like a trigonometry dependency instead which encompasses many tricky functions that we don't want to have to write ourselves.

This is a pretty weak argument. What is "cohesion" and why do we care that modules have it? Joe Armstrong, one of the creators of Erlang, has argued the opposite (http://erlang.org/pipermail/erlang-questions/2011-May/058768): that lots of small, individual-function modules are better than a "misc" module that grows endlessly and may overlap with other people's "misc" modules.

Calling a function instead of writing the code yourself doesn't mean you've forgotten how to program! The real problem here is the cost and risks associated with dependencies in general (both of which are actually lower for single-function modules), and the broken package removal policies of npm.

12
haddr 1 day ago 1 reply      
While in general I agree with the article, I must admit that I also strongly DISAGREE with the overall message. Especially with this: "Finally, stringing APIs together and calling it programming doesn't make it programming."

Stringing APIs together is what programming actually is. This is building software: for instance, when I use the .toString() method I can easily forget how it is implemented, focus on other high-level things, and not care about dependencies, as long as everything works fine.

Let's admit that the main problem here is with broken npm, rather than with the packages themselves. If someone has already written the "leftpad" function, it is so that I don't have to write it again, saving me probably 15-40 minutes of programming and checking corner cases.

Also please note that javascript can be really tricky down in the details. So if there's anything that can help, it's better that it exists, rather than not.

13
pjlegato 1 day ago 4 replies      
Yes, or more accurately a large new generation of coders is entering the workforce who know how to code only in a superficial sense and think this is a good thing.

Programming, and especially startup programming, is being taken over by people who are primarily technicians rather than engineers. They want to assemble prefab components in standardized ways rather than invent new things. They are plumbers who know how to install from a menu of standard components, rather than civil engineers designing purpose-built one-off aqueducts.

It is the inverse of the "not invented here syndrome." The technician-programmer is trained to minimize time spent thinking about or working on a problem, and to minimize the amount of in-house code that exists. The goal is to seek quick fix solutions in the form of copy/paste from StackOverflow, libraries, and external dependencies to the greatest extent possible.

In house coding should, they believe, ideally be limited to duct-taping together prebuilt 3rd party libraries and services. Those who want to reinvent the wheel are pompous showboating wankers (they believe); creating your own code when you don't absolutely have to is a self-indulgent waste of time for impractical people who just like to show off their hotshot skills and don't care about getting things done. Move fast and break things and all that.

This began with stuff like PHP but really got going with Rails, which preached convention over configuration as a religion, and supplied a standardized framework into which you could easily fit any generic CRUD app that shuttles data between HTML forms and a database (but is painful if you want to deviate from that template in any way.) Note that Rails doesn't use foreign keys and treats the relational database as little more than a glorified persistent hash table.

This set the stage for Node.js (why bother learning more than 1 programming language?) and NoSQL (why bother learning how database schemas work?)

14
peferron 1 day ago 0 replies      
The funniest thing about this entire debacle is the thousands of self-assured programmers coming out to show the JS/NPM world how it's done, only to have their short, simple, no-nonsense functions fail miserably on some edge cases they didn't think about.

This discussion about the "isarray" package is probably my favorite: https://www.reddit.com/r/programming/comments/4bjss2/an_11_l...

15
mooreds 1 day ago 2 replies      
Relevant:

'"The Excel development team will never accept it," he said. "You know their motto? 'Find the dependencies -- and eliminate them.' They'll never go for something with so many dependencies."

In-ter-est-ing. I hadn't known that. I guess that explained why Excel had its own C compiler.'

http://www.joelonsoftware.com/articles/fog0000000007.html

16
pkrumins 1 day ago 0 replies      
Yes, we have.

The entire Javascript ecosystem is a huge catastrophe. It will collapse any day now. It's complex, fragmented and no one really likes it. There are a dozen different tools to get started. No one even understands how to get started easily. There are no fundamental tools. Everything is changing every week. You can't just build a product and then rebuild it even a month later. Nothing works anymore a month later - your dependencies have changed their APIs, your tools have different flags that do different things, there are new data models that you never needed and shouldn't even care about.

Developers are under high stress. Devops engineers are under even higher stress because they get to see what developers don't.

It's a huge mess and my advice to "prefer core-language solutions to small abstractions to small helper libraries to general libraries to frameworks" (http://bit.ly/1UlQzcH) has never been more relevant than today.

Software should be developed using the least amount of complexity, dependencies, and effort, using fundamental tools that have been and will be around for the next 20 years. Cut those dependencies, you don't need them. They're here today and won't be here tomorrow.

17
pfooti 1 day ago 0 replies      
So, I suppose you could do something like this instead.

 function leftPad(str, width, pad = ' ') {
   const actualWidth = Math.max(str.length, width);
   return `${pad[0].repeat(actualWidth - str.length)}${str}`;
 }
And that would do a leftPad pretty well, and be reasonably robust to stuff like the required width being less than the string width, the padding character being multiple characters long, and so forth. It doesn't do any type-checking of course.

It also doesn't work on older browsers - both string.repeat and template strings are new. You could fake it with string addition, but addition behaves oddly in the case your arguments are numerical, whereas template strings handle that. There's also a trick where you can say (new Array(desiredLength + 1)).join(' ') to make a string that is the appropriate length, but you've got OBOEs to worry about if you're not paying attention (Array.join puts the character between the elements, so you need an n+1 array for an n-length string). Also, at least on some browsers, Array.join is pretty cruddy, and you really ought to construct the string with an old-fashioned for loop.
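
A sketch of that legacy approach, assuming a single-character pad:

 function leftPadLegacy(str, width, pad) {
   pad = pad || ' ';
   var deficit = Math.max(width - str.length, 0);
   // Array.join puts the separator *between* elements, so n copies of the
   // pad character require an array of length n + 1 (the OBOE mentioned above)
   return new Array(deficit + 1).join(pad) + str;
 }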

Javascript has all kinds of weird corner cases and lots of browser compatibility problems. The fact that someone's written a decent implementation of something that should have been standard in the String object means I don't have to worry about it.

Of course, I do have to worry about stuff like losing access to left-pad when someone throws an npm tantrum, or dealing with future build issues if npm becomes untrustworthy. A cryptographically sound package manager seems like a reasonable want, especially after this week's issues.

But if your take-away from this whole problem is "meh, javascript devs are lazy", you're missing the point.

18
terryf 1 day ago 1 reply      
So, apparently some guys managed to build a system where it is very easy to re-use small parts of other people's code, and now the author is complaining that "too much code re-use is happening"?

I'm fairly old, so I remember the complaints a decade or two ago that people had where "We can compose hardware from IC's and you don't have to know what's going on inside and it's all standard and just works! Why can we not do that with software?!?! (of course that ended up with things like CORBA and DCOM, which was all wrong)"

aaaand here we are in a situation where code re-use is actually happening on a wide scale and now you're complaining about that?

28k lines in an empty project? Ha, how many lines of code does the preprocessor generate for #include <stdio.h>? I haven't actually measured, but I bet it isn't that far off from 28k lines.

19
tschellenbach 1 day ago 2 replies      
In a way this shows what a great job NPM did at making it easy to publish packages. It's so easy that people decide to package up extremely easy functions.

As a python developer I would never publish a small package, simply due to the overhead of setting up a PIP package.

20
haberman 1 day ago 1 reply      
> In my opinion, if you cannot write a left-pad, is-positive-integer, or isArray function in 5 minutes flat (including the time you spend Googling), then you dont actually know how to code.

Spoken like someone who writes functions in 5 minutes that I find bugs in later.

Just because a problem is simple to describe informally doesn't mean it is simple to implement without bugs.

21
jgrahamc 1 day ago 0 replies      
What concerns me here is that so many packages took on a dependency for a simple left padding string function, rather than taking 2 minutes to write such a basic function themselves.

It takes more than two minutes. That little module has a test suite; if you're including it, then you have some assurance it does what it says it does. If you write it yourself, you've got to worry about whether it works or not.

22
xupybd 1 day ago 0 replies      
Have we forgotten how to program? No, we have changed the way we program. We leverage the work of others to solve ever more complex problems. When we do this we get more than just the simple functionality we require. We get all the testing that comes from a package being used by thousands of projects. We get updates when people find a better faster way. We get to lower the number of lines of code we have to maintain.

Yes, these are simple tasks but do we gain anything doing these simple tasks ourselves? I find there is a finite amount of focus I have when programming, and I'd rather spend that solving the bigger problem.

This reminds me of a discussion I overheard between two professors when I was an undergrad. Prof 1: "This new language makes it so much easier to x, y and z." Prof 2: "Yes, but what's the point? I can do all these things in Java." Prof 1: "I could do all these things in NAND gates, but I'll get more work done if I have this tool."

23
qewrffewqwfqew 1 day ago 1 reply      
The elephant in the room here is Javascript. No other scripting language has been so bloody awful in the past that a five-line module that saves you typing `Array.prototype.forEach.call`, or something that papers over the language's awful idea of equality comparison with three functions, has been "useful" or "more than five minutes coding without references".

Granted, these modules don't do such useful things, but that's the environment their creators were immersed in.

24
spo81rty 1 day ago 0 replies      
As a .NET developer I avoid adding packages and references at all costs due to all of these same reasons. Usually for something simple like this left-pad example, I just copy the code and put it into my project in some sort of class of helper functions or extension methods.

Seems like a lot of these basic Javascript functions need to be built into javascript/node itself or consolidated down to a single package of common core functions that extend Javascript. Like padding, is array, etc. As others mentioned, these are fundamental things in other languages.

25
xdissent 1 day ago 2 replies      
In the case of left-pad, 2538464 of its 2550569 downloads last month are attributed to dependents of the line-numbers package (https://www.npmjs.com/package/line-numbers). So it would appear that relatively few people directly rely on left-pad, which highlights the importance of vetting the dependencies of dependencies.
26
nick32m 1 day ago 1 reply      
What's the problem with writing a function with a few lines of code and exporting it as a module?

I think it's totally fine. Like other people said, it's the mindset we borrow from Unix: do one thing and do it well. The function would be well tested, and could be reusable.

I don't understand why so many people just require lodash into their project (when they start the project) while they only use one or a minimal set of the functions. I mean, lodash is a very great library with clean and well-tested code, but it's also quite bulky, like a big utility lib; most of the time I only need one or two of the functions, so I would just go to npm and find a module that does just that thing.

27
buckbova 1 day ago 2 replies      
Nearly everyone has had the "Don't reinvent the wheel" nonsense drilled into them, for better or worse.

I use lodash in almost every javascript project I start, big or small because it makes my life easier.

I'd rather use the lodash isArray than roll my own.

https://lodash.com/docs#isArray

28
supermatt 1 day ago 0 replies      
There are lots of useful titbits often left out of a language's corelib.

Back in the day we would have built up various pieces of library code, which we would have utilised rather than continually guessing at best practices. For example, the isArray method cited may be trivial, but it is also non-obvious. We'd probably have something like that in our library of useful snippets.

Sometimes we may have shared the code on forums and the like, and people would copy and paste the code, sometimes into their own libraries. We would locate them by browsing known locations or inefficiently querying search engines.

Now we utilise a central resource and simple tools to have a global library of code. Instead of searching far and wide, we query a handful of tools that effectively do the copying and pasting for us.

How that can be considered a bad thing is beyond me. It's not a question of knowing how to code, it's a question of using your time effectively.

Granted, there is the problem of so-called modules being removed and dependencies breaking. This can be alleviated by vendoring your modules, a simple task with most dependency management tools.

Personally I think that published modules should persist indefinitely based on the license the code utilises, although I'm not clear on the actual legalities of the recent npm issue (although if it's due to a trademark complaint, I don't see how it would ever be enforceable for completely unrelated code in any slightly sane country).

29
newobj 1 day ago 1 reply      
Have we forgotten how to program?

Maybe we've forgotten how to /just/ program. Everyone bangs the drum so hard of "let github be your resume." Incentivizing putting every brain fart you ever had out into the universe instead of just keeping it to yourself.

Just a thought.

30
joeandaverde 1 day ago 1 reply      
I completely agree with the author.

The loudest people in the Node community have been evangelizing this practice for as long as I can remember. This shouldn't come as a surprise.

The argument, "If I didn't write it I don't have to think about it", is ludicrous. I just have to point at the left-pad incident to disprove the premise of this argument.

The analogy of building things with a bunch of npm lego blocks is laughable. Those responsible for advocating the use of trivial functions by acquiring module dependencies are leading the masses astray.

"But, If I find that there's a bug in a module I can AUTOMATICALLY fix it everywhere!"

No.

You still need to assess how the change to that module impacts any code that depends on it. Merely updating a module with a "minor" bug fix can lead to other bugs in code that RELIED on the behavior as it was originally written.

It's simple, write your own trivial functions. Test them. Maintain them.

P.S.

Another module that can easily be inlined into every code base you own. (3 million downloads this week).

https://www.npmjs.com/package/escape-string-regexp
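
For reference, the entire body of that module is roughly the following (a sketch from memory, not the verbatim source):

 module.exports = function (str) {
   if (typeof str !== 'string') {
     throw new TypeError('Expected a string');
   }
   // prefix every regex metacharacter with a backslash
   return str.replace(/[|\\{}()[\]^$+*?.]/g, '\\$&');
 };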

31
lmm 1 day ago 1 reply      
If your package manager is so cumbersome that it makes a 14-line package not worth it, get a better package manager.

We haven't forgotten how to program. We've got better at it.

32
adamwong246 1 day ago 0 replies      
"There are no small parts, only small actors". - Constantin Stanislavski

If there's a flaw to this debacle, it's that packages can be un-published. That is some grade A+ BS.

But no, there is no such thing as a package too small. Coding is hard. Collaboration should be default-on, not default-off.

33
gardano 1 day ago 0 replies      
When I was prevented from upgrading Xcode/Swift for a project last year because of podfile dependencies, that cemented in my mind that every time I include a dependency, I'm putting myself at risk.

If I can spend half a day writing the code myself, I will -- because I know it will prevent headaches down the road. Yes, yes, I know any code I write adds the probability of bugs down the road. But at least the roadblock would be of my own doing, and not dependent on if/when the package maintainer upgrades their stuff.

34
nnq 1 day ago 1 reply      
...maybe it's time a committee of really smart people sat down and sifted through all the most-used modules below N lines of code or so, and just wrote an opensource JS-stdlib, hopefully merging in the top 30% most used methods of Lodash too? Node/NPM is a great example of why too much democracy and decentralization is bad. Just gather some experts and have them centrally plan a "standard library", then impose it as an "industry standard", have a recommended "no fragmentation" policy like "no forking" and "use it all or not at all", the hell with your web app's need for "performance"... even a few hundred Ks of code will not hurt anyone nowadays ff sake...

I even consider PHP a "more sane" language because you at least have most of the useful utility functions in a global namespace and everyone uses them. Of course, the real ideal on this is Python's solution: a nice set of standard libraries that you know are baked in, but most of them you still import explicitly - hence it's pretty easy to write even large Python applications that have a small and comprehensible number of dependencies!

(And more generally: our striving for "more efficiency" in programming is stupid imho! I'd always take a less efficient solution, even a "less safe/tested" one, if it's more "understandable" and "explainable"; sometimes, paradoxically, making things a bit more monolithic and centrally planned makes them orders of magnitude easier to reason about for our tiny ape brains...)

35
chukye 1 day ago 0 replies      
Man, I have to quote: `In my opinion, if you cannot write a left-pad, is-positive-integer, or isArray function in 5 minutes flat (including the time you spend Googling), then you don't actually know how to code.`

You would be surprised how many developers these days are 'afraid' to write such functions, or how lazy they are: they find some module, just add it to a project, then push it to some Google list; the project gets a lot of followers, and the next day the project has changed about 90%. I saw this happen over and over again in this ecosystem. This is insane, dude.

A lesson I learned is: you _need_ to read every module's source code before adding it to any project; the NPM ecosystem has so many "shits" out there. You cannot trust any npm module. Recently I tried to trust a module with more than 5k stars, but I found such an ugly bug in it that I felt my soul die, and I swear I heard the angels cry. That's not how open source is supposed to be.

These days, it seems that people don't care about being 'bug free' as long as it halfway works.

36
sauere 1 day ago 1 reply      
> There's a package called isArray that has 880,000 downloads a day, and 18 million downloads in February of 2016. It has 72 dependent NPM packages. Here's its entire 1 line of code: return toString.call(arr) == '[object Array]';

How anyone can deal with JavaScript for more than 5 minutes is absolutely beyond me

37
tomohawk 1 day ago 1 reply      
Rob Pike: "A little copying is better than a little dependency"
38
asragab 1 day ago 0 replies      
This was probably grotesquely naive of me, but I literally had no idea how Jenga-tower-like the javascript ecosystem was. Eye opener!
39
ZeWaren 4 hours ago 0 replies      
Regardless of whether micro-modules are good or bad, I think that if you are the owner/manager of a project, you should be able, given its full flat list of dependencies, to explain why each one of them is useful for you.

Every project I've seen that uses npm always required 100s or 1000s of dependencies.

If building or running your project requires something and you can't explain why, I think there's a problem.

40
noiv 1 day ago 1 reply      
The Python community has a proper response: https://pypi.python.org/pypi/left-pad/ /s
41
mr_justin 1 day ago 0 replies      
It's not that anyone has forgotten, it's that a lot of people never learned how to in the first place. Every programmer community is riddled with these problems but the NPM world seems to be the worst. The ruby gem "american_date" annoys me to no end. It's just a highly-specific implementation of Time#strptime. Gah
42
imh 1 day ago 0 replies      
Unzipped, the source code for GNU coreutils is 30MB (zipped 4MB). This is a great example of a collection of single purpose functions you should never rewrite yourself. There's only one dependency if you want to use them because they're packaged together. With normal desktop code, 30MB doesn't really matter and you can link only what you need. Can you do that with the usual Javascript package managers/bundlers, or would you need to send the whole 30MB package to the client to use one function from it?
43
Alex3917 1 day ago 0 replies      
So any developer could make this in five minutes, but for some reason they can't verify whether or not it works? That doesn't make sense.

In reality it could take an hour to get this working properly, but it does take only a couple minutes to verify that the solution here is correct. There are certainly good reasons for not adding extra dependencies to your project, but trading a known amount of time to check that an existing project does what you want for an unknown amount of time to redo it yourself is probably not a great bet.

44
sebak 1 day ago 0 replies      
The main reason there is a fetish for these micropackages is a fetish for github stars. The formula seems to be: Overly generic + Simple + Javascript = Lots of stars.

That being said, there is something to be said for using these micropackages. Left-padding a string is easy, but you might just have forgotten about that one edge case where in browser X and language Y you have to do things differently. It's not really the case here, but things that seem simple at first often turn out to be hard because of some edge cases. One might hope these edge cases are handled if you use a library.

45
nikolay 1 day ago 1 reply      
We have. I spent some time optimizing [0] a String.repeat function over at Stackoverflow and I was surprised that many developers today don't know what they are doing, including core team members [1]. Specifically,

 function repeatString(str, len) {
   return Array.apply(null, { length: len + 1 }).join(str).slice(0, len)
 }
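
(The trick: Array.apply(null, { length: len + 1 }) spreads len + 1 undefined arguments into the Array constructor, so the join yields str repeated len times, and slice then trims the result to exactly len characters.)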
[0]: http://stackoverflow.com/questions/202605/repeat-string-java...

[1]: http://stackoverflow.com/questions/202605/repeat-string-java...

46
mtalantikite 1 day ago 0 replies      
One of the things I've enjoyed the most while programming in Go for the past couple years is the standard library and how much the Go community emphasizes keeping external dependencies to a minimum. For most projects I find myself using few if any external packages.

Now of course there are times you'll want to reach for a library, say for something like an http router, and up until recently the dependency management side of Go has been lacking. But when a pattern or library arises that many find useful the core team is open to pulling that in to the standard library if a strong enough case is made, for example in the context package (https://github.com/golang/go/issues/14660).

47
kerkeslager 1 day ago 0 replies      
A lot of the discussion here really isn't talking about the problem at hand.

From the perspective of Babelify users, a major bug was introduced into software they depended on. I don't know how much money in developer time was lost due to this but it would almost certainly be in the thousands of dollars.

And it could have been a lot worse. It could have been something more complicated than left-pad. The author could have introduced a vulnerability or outright malicious code, or been hacked and done the same, and millions of people would have downloaded it and run it.

Arguably, small modules are good if you control them. Maybe they are more composable, maybe they enable better testing, maybe they encourage code reuse. I am not going to argue for or against small modules.

But none of the positives of small modules matter if an unknown developer who you have no reason to trust can change or unpublish the module out from under you. It's irresponsible as developers to risk our employers' and clients' businesses in this way for a function we could write in five minutes.

48
namuol 1 day ago 0 replies      
Formula for a top HN article:

1. Make an observation about a popular thing.
2. Blindly extrapolate.
3. Make one or more broad, controversial statements.
4. (Optional) Nuance.

49
robodale 1 day ago 0 replies      
I'm content and happy not knowing what the fuck this is all about. Pad left? Are you kidding me? If your tower of Babel fell because of your reliance on some script kiddie's toy project, I am happy and content knowing you get what you deserve. Law of leaky abstractions, motherfucker.
50
overgard 1 day ago 0 replies      
Here's the funny thing that gets forgotten: in a lot of commercial software, 3rd party dependencies need to go through legal to get properly vetted and attributed and so on. This also usually requires an engineer (to be able to answer things like if it's dynamically linked or not, etc.).

As staid and corporate as it might sound initially, it's a very smart thing to do. One screw-up with licenses could be catastrophic. Are you all really checking that carefully?

I can't even imagine how any sort of proper legal checks could be done with a trillion micro libraries.

51
spion 1 day ago 1 reply      
It's interesting how everyone used this as a chance to attack the small modules approach. This approach definitely has downsides, but the problem caused by leftPad being unpublished wasn't one of them.

If jdalton declared a jihad on arrays tomorrow and decided to pull all array related functions from lodash, we would have the exact same problem.

If kriskowal decided that Q must mirror built in Promise and published a version that does this tomorrow, we would again have the exact same problem.

There is only one connection between this problem and the small module approach. As the size of a module decreases, the number of dependencies increases, and so does the number of authors that produced your dependencies. With the number of authors increasing, the chances that some author decides to go rogue or protest for some reason also significantly increase.

Therefore, it's irresponsible to use this approach with a package manager that allows an old, established module with many dependents to be unpublished so easily by the original author.

52
fredbot 1 day ago 1 reply      
I call this kind of attitude the "Tea Party" of JavaScript development. The reason why we currently have JavaScript tooling fatigue is exactly because Tea Party developers insist on writing everything themselves instead of trying to build a better abstraction. The lesson here isn't fewer dependencies: it's managing dependencies. NPM should not allow someone to arbitrarily remove modules that others may be depending on. It's like building a bridge and then deciding to remove it after a whole city has come to depend on it.
53
rycfan 1 day ago 1 reply      
If I engage in as much hyperbole as the author, where does "write it yourself" stop? If I'm working on a team of two, should we each write our own left-pad? How about a team of three? Four? Five? Fifty? At a certain point, it makes sense for that to be written once for the project. We spent 30 years in software engineering trying to figure out how to get code re-use, and now that it's common and widespread, we want to go back to NIH?
54
88e282102ae2e5b 1 day ago 1 reply      
To me, this looks like a symptom not of bad programmers but of a terrible standard library.
55
partycoder 1 day ago 0 replies      
node can be a good idea. But people don't take JavaScript programming seriously. Most libraries objectively suck.

Google, authors of V8 and the #1 subject matter experts on V8, have published a coding standard. Does anyone in the node community use it? No. Everyone loves "standard", a lousy standard that lets everyone put a badge on their github page while still having a code base full of vomit.

JSDoc. A great solution for documentation. You would expect major libraries to adopt it, or something similar. But again, no. Major libraries such as busboy do not use it. The documentation resembles a napkin.

Then everything else: input validation, error handling, consistency... etc. Take "request" for instance, one of the most widely used libraries. The state machine it implements is inconsistent. You abort a request and get a timeout; you can abort a request without starting it and get an exception. Issues that will drive you insane while debugging.

Express, one of the most widely used web frameworks on node.js. Do this on a route: setTimeout(function(){ throw new Error(); });. Great, now you have broken out of the error handling context. Great job.
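
A sketch of the failure mode, assuming an Express 4-style app: an error thrown synchronously in a route is caught by Express and routed to error-handling middleware, but one thrown from a timer callback escapes Express entirely and takes the process down. The fix is to hand it to next() yourself:

 var express = require('express');
 var app = express();

 app.get('/boom', function (req, res, next) {
   setTimeout(function () {
     // throw new Error('async'); // would bypass Express and crash the process
     next(new Error('async'));    // routes the error to error-handling middleware
   }, 0);
 });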

Node libraries suck all across the board. It's the PHP of the 21st century. There are exceptions, like: lodash, bluebird, and others.

56
jeffdavis 1 day ago 0 replies      
Joe Armstrong had an interesting related comment here:

http://erlang.org/pipermail/erlang-questions/2011-May/058768...

Maybe the unit of modularity should be a single function and we can do away with modules?

57
meric 1 day ago 2 replies      
NPM modules can be used on browsers. On browsers, space is a premium.

Why would you want to install a 500kb dependency that has only one function you need, when you can install a 10kb dependency that has it?

Would you want each of your five 20kb dependencies to re-implement the same 5kb function, increasing the code you must send to the client by 20%, or would it be more optimal for each of those dependency to use the same 5kb function?

The author rants about the practices of developers from a different programming environment, without experience in it and without figuring out how things came to be. If he had made an effort to think from the perspective of Node.JS developers, he'd have addressed the previous two points.

This is like going to a friend's house and complaining that everything is put in the wrong place. It would have been wise to immerse yourself in Node.JS conventions and observe for a while before commenting.

EDIT: Reply to scrollaway:

I've also understated the problem.

Let's look at the problem in the current Node.js environment: it's not uncommon for a web app to have 20 dependencies, each of those has 10, and each of those 10 has 5. That's 20 times 10 times 5 = 1000 dependencies in total.

Let's say you were to remove a 10-line library function that's "standard library-like", used by 15% of those dependencies, and have each of the dependencies that uses it re-implement it.

15% times 1000 times 10 lines is 1500 lines of code.

So if you're going to troll a solid argument by nitpicking, do it properly and get the details right.

58
cel1ne 1 day ago 0 replies      
I know the discussion revolves around the number of dependencies, but I want to add a comment about semver, which has a part in this mess:

In my opinion it will not be done right in 80% of cases.

Every breaking change requires bumping the major version, but developers hesitate to go from 1.0.0 to 6.0.0 in a month.

The way out is staying in the 0.x range, therefore abandoning semver altogether.

A nice write-up about how packages are not following semver in java-land:

http://avandeursen.com/2014/10/09/semantic-versioning-in-mav...

59
omaranto 1 day ago 0 replies      
This culture of tiny one-function modules sounds like Joe Armstrong's proposal about the "Key-Value database of all functions".

http://erlang.org/pipermail/erlang-questions/2011-May/058768...

60
duncanawoods 1 day ago 0 replies      
The problem with shared hyper-modularisation is that it assumes the name of a function is unambiguous, with only one valid implementation. If that were true, it should be encouraged; but given it isn't, the practice will be crushed by ambiguity and unintended consequences.

My app might well have an is-positive-integer function but it will include a range of context dependent choices about e.g. floating point, infinities, zero, null, "9", "9 ", "09", boxed numbers, string representations exceeding js max int, etc. etc.
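
Two defensible but incompatible readings, as a sketch:

 // strict: only actual number values qualify
 function isPositiveIntegerStrict(x) {
   return typeof x === 'number' && isFinite(x) && Math.floor(x) === x && x > 0;
 }

 // loose: also accepts numeric strings such as "9" or "09"
 function isPositiveIntegerLoose(x) {
   return /^[0-9]+$/.test(String(x)) && Number(x) > 0;
 }

 isPositiveIntegerStrict('9'); // false
 isPositiveIntegerLoose('9');  // true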

61
joshstrange 1 day ago 0 replies      
I can't take this author seriously at all. One of his most egregious cases is the is-positive-integer library, which until today had around 30 downloads in the last month.... No one was really using this, and furthermore, of course you can find bad/iffy code on NPM for the same reason you can find bad/iffy code on github: ANYONE can publish code. I could write a similar module for any other language, publish it to their repo, then scream LOOK! python is stupid and python devs are stupid.

I firmly believe that building on what has already been done allows for much safer code written at a quicker pace. Why should we constantly repeat ourselves? Also, by using npm modules we can abstract logic and prevent someone on the team from going in and modifying it for their own use. It is a documented, constant function that we can use knowing exactly what it does. Is it a better world where everyone just copies code out of others' repos and then has to include the licence/docs/tests along with it? It's much easier to just pull in the module, which contains everything and makes it trivial to see where a function came from.

People are blowing this whole thing way out of proportion just because it makes a good headline "11 lines of code broke node"... You can all try to shame people who build on what's come before and chant "Not invented here" but I'll opt to build on what is proven to work instead of rewriting everything. At the end of the day that's what ships products.

62
sebringj 1 day ago 2 replies      
This is a non-issue and taking focus away from the real issue. The issue is the security hole that NPM opens up when a namespace can be grabbed up by anyone if the original developer pulls out.
63
kevin_thibedeau 1 day ago 1 reply      
The insanity is that JavaScript doesn't have a standard string library. More a case of forgetting how to design a programming language than how to program.
64
rimunroe 1 day ago 0 replies      
I'm not really sure whether or not I should to add my voice to the din, but I feel like this whole thing is more a problem with npm and what it allows vs. what it encourages (and the rather paltry standard libraries in Node & browsers), rather than a problem with developers feeling entitled to not have to write their sorters and padding functions.

npm actively encourages structuring projects as many tiny individual modules and dealing with the resultant dependency trees and deduplication. Both of these things (along with the ease of publication) combine to encourage people to share their packages.

They make it incredibly easy to consume code from other people, but at the same time provide a similarly low-barrier mechanism to retroactively change published code. That combination seems like a way more deserving topic of criticism than the unending refrain of "developers these days are so lazy".

65
dc2 1 day ago 0 replies      
For a less sensational review of this situation, NPM published a post-mortem:

http://blog.npmjs.org/post/141577284765/kik-left-pad-and-npm

66
UK-AL 1 day ago 1 reply      
I find NPM packaging ridiculous. A while ago I used NPM on Windows, where the folder hierarchy became so deep it broke Windows file handling. I could not delete the modules folder; I had to install an npm package which allowed me to delete it. I think this is fixed in newer versions by flattening the hierarchy, but still.
67
memracom 1 day ago 0 replies      
Agreed. Most development groups should be building a local collection of utilities that contains all of these snippets and, most importantly, some documentation of what they do and some unit tests to demonstrate that they are correct.

No need to have global dependencies on small snippets that really should be in a core library anyway. C has libc, Java and C# have the creator's (Oracle or Microsoft) standard set of libraries, Python has the "batteries included" stuff in all the different distros. And so on. All of these snippets rightly belong elsewhere, not in packages.

And even if you did get them added to the right libraries, I guarantee you that you will not get rid of the need for a collection of small, and somewhat random, in-house functions, classes and libraries.

68
HarrietJones 1 day ago 0 replies      
This isn't forgetting how to program, it's a deliberate choice as to how library code needs to be organised. We can disagree about the choices made, but let's not assume other people aren't able to code as well as we think we can code.

That being said, I think this is a perfect example of where a good concept (small, tightly scoped modules) is applied dogmatically at the cost of the codebase. It's the node.js equivalent of AbstractRequestFactoryFactoryFactory stuff you see in Java, and the Mock Messes you see in Ruby.

69
j-diaz 1 day ago 1 reply      
Another explanation for the flourishing of these one function modules may be the fact that some people feel a kind of high/achievement from being able to say they have a published module out there. A sort of bragging rights if you will.
70
stevewilhelm 1 day ago 0 replies      
> What concerns me here is that so many packages took on a dependency for a simple left padding string function

Clearly to mitigate such a tightly coupled dependency, Left-Pad should be a micro-service. :-\

71
blainesch 1 day ago 0 replies      
I think we missed the point here. The fact that it's 11 lines is meaningless, this could have been babel itself.
72
AdamN 22 hours ago 0 replies      
One should think of these requires as being like .h files and the underlying code as something like a .c file. They're public definitions with potentially changing underlying code.

It's good to have small packages. Don't forget that the underlying ECMAScript is changing, so the implementations of these 'libraries' are (or will be) different over time from what they used to be. If somebody finds a faster way to do the method, then it will be done.

Finally, anybody who has used js in the real world understands how many corner cases there are and how difficult it is to make durable methods (i.e. how to know if an array is empty - which requires like 4 different conditions).
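
One plausible reading of those four conditions, as a sketch (the exact set depends on which inputs you must tolerate):

 function isEmptyArray(x) {
   return x != null &&                                         // not null or undefined
     typeof x === 'object' &&                                  // not a primitive
     Object.prototype.toString.call(x) === '[object Array]' && // a real array, even cross-frame
     x.length === 0;                                           // and actually empty
 }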

73
kf5jak 1 day ago 0 replies      
I wouldn't even think of looking for a package that does something as simple as the ones mentioned. If I need to pad a string, my first thought would be to create a new function, not look for a package...
74
gdulli 1 day ago 0 replies      
There's so much wrong here, it can't even all be seen at once. The part we can perceive is only a projection from a higher dimension into our three-dimensional space.
75
smokeyj 1 day ago 1 reply      
What's with these kids and their "calculators". Back in my day we used a slide rule and we liked it!

But seriously this is stupid. Programming shouldn't be the goal. Just because you can write a function doesn't mean you should. Every line of code you write is overhead that must be tested and maintained. I guarantee that if the author chose to hand roll code instead of using packages he'd have a lot more bugs. But he wouldn't know that until he hit some mundane edge case scenario in production.

76
erikpukinskis 1 day ago 2 replies      
I think this is the fundamental thing people don't understand about NPM and JavaScript, and the web in general:

Nothing is included. And that's a feature. The web is not trying to be the kitchen sink. That's iOS. They provide high level APIs for everything. And as a result, the platform is architecturally only as vibrant as Apple can make it.

Now maybe you're happy with Apple, maybe you love the iOS APIs. But if you don't, you're stuck. There's not a rich bed of alternative view layers that you can draw from to build your own vision of how software should work.

Node and the web browser strive to be lowest common denominators. They provide just the very basics: a document format, a very simple programming language, and an http server. The rest is up to you.

That's pretty scary, and so the JavaScript world has dabbled in frameworks. In the end all-inclusive frameworks are antithetical to the spirit I'm talking about, so things trend towards small modules that do one thing pretty well. People go overboard sometimes. I would argue left-pad should just be a copy-pasted snippet rather than module. But that's not a sickness in the community, it's just developers feeling out how far to go.

If you like every application to look the same, and you don't mind being chained to enormous and restrictive standard libraries and frameworks, then you will hate JavaScript. If you like writing software from scratch, forming opinions about all of the different parts of the system, and allowing each application to be built out of different parts, out of the right parts, then you should give JavaScript a closer look.

77
grillorafael 1 day ago 0 replies      
I agree that some packages might be too much but I don't think `left-pad` is one of them.

I wrote my own left-pad for a project I'm working on now, and I had to revisit it a few times for tiny problems and lack of time to write tests. I would definitely have used the `left-pad` module if I had known it existed at the time.

78
raz32dust 20 hours ago 0 replies      
The author brings up an excellent point, but I disagree with the solution. We should of course reuse existing, well-tested code if it is available, even for simple things like left-padding. The real issue here is that there is a module for left-pad alone. If it were something like a StringUtils module with a bunch of commonly used string functionality, it would have been great.

What is it about the node community that triggered this misunderstanding of package management and code reuse?

79
tobltobs 1 day ago 0 replies      
Need more of this WTF stuff? Have a look at the docker registry/repository/hub and those pearls of wisdom of the new DevOps guild.
80
gsmethells 1 day ago 0 replies      
The fact that anyone has to even think about code size is the real problem.

Yes, downloading a giant JS lib for one function is insane, hence the ton of tiny dependencies.

However, it is equally insane that the basic SDK features found in every other language have yet to be pre-implemented by the JS engine in the web browser itself. Some basic code ought to already be on the localhost the moment your app arrives.

81
dreta 1 day ago 0 replies      
After reading this, I don't think I'm capable of ever complaining about OOP again. Whenever you think you've seen it all, web developers always manage to come up with something worse. The only thing more depressing than this is the comment section below the article.
82
rsp1984 1 day ago 0 replies      
Dependencies are one side of the problem. Unavailability of binaries and dogmatic use of dynamic linkage are the other side.

When I installed a simple lines-of-code counting tool through Macports the other day I accidentally opened the door to dependency hell as gigabytes of not even remotely related stuff started to build [1].

Without a doubt something is going very wrong with Free Software and package managers. On the other hand, never look a gift horse in the mouth so I may not even be the right guy to complain here.

[1] http://pastebin.com/cAZgbaFN

83
gitaarik 17 hours ago 1 reply      
Why was it removed anyway? I agree that the ability to unpublish something is the real problem, but I wonder why the author actually unpublished it. I wonder if the author knew about all the projects that depend(ed) on it. Maybe he/she actually did it as an evil experiment, though a very interesting and eye-opening experiment. Does anyone know?
84
markbnj 1 day ago 0 replies      
This is literally one of the funniest things I've heard about in months. Look for my new python library isDict, coming soon.
85
grumblestumble 1 day ago 0 replies      
Couple of things:

* If you're in favor of the micro-module approach, you shouldn't be relying directly on NPM, and should have something like Sinopia in place. After all, external code isn't the only thing you're vendoring, right?

* Micro modules are fine - but your application code should depend on a privately published utils module whose entry point is a prebuilt distribution of all your external micro-modules exposed through a facade. Your utils module deps are all installed as dev dependencies to avoid the Fractal Nightmare.

* Yay, now you have your own 'standard library' which still manages to leverage the NPM philosophy of distributed code. And if some twit decides to throw a tantrum, it will only impact future builds of your custom std lib - and you'll know about it at build time.
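
The facade itself can be tiny. A sketch, with a hypothetical my-utils entry point:

 // my-utils/index.js: the one place application code imports helpers from
 module.exports = {
   leftPad: require('left-pad'),
   isArray: require('isarray'),
   escapeRegExp: require('escape-string-regexp')
 };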

86
hughw 20 hours ago 0 replies      
Of course I would never create a dependency on a small module like left-pad. I would simply copy the function wholesale into the body of my code!
87
vu3rdd 1 day ago 0 replies      
I am posting the article "No Silver Bullet" again, in the wake of the npm fiasco. I think it is essential reading for every programmer, every year!

https://news.ycombinator.com/item?id=11350728

88
zalzal 1 day ago 0 replies      
Separate from the discussion of whether super-small modules and hundreds or thousands of deps are a good idea is the point of operational stability.

Putting on your devops hat, whatever your dependencies, from a reliability and reproducibility point of view, you should control your reliance on unexpected decisions of npm or third-party developers. A lot of the panic with npm issues comes from people blindly using the npm registry and then seeing breakages, with no safety net. I hate to say "I told you so" but this is an issue we worried about a lot when considering Node productionization last year: https://medium.com/@ojoshe/fast-reproducible-node-builds-c02...

89
svs 1 day ago 0 replies      
The problem is not of small modules. The problem is lack of dependability. If the language patrons stand behind a set of modules and guarantee continuity and availability, it really doesn't matter what is in them and the world can continue regardless of how insane the module or the whims of any one author. This is not about the technical merits of having or not having a stdlib. The module in question could have been anything.

Making this about is-positive-integer misses the point that this is a social/political problem not a technical one. A language ecosystem must address concerns of business continuity as first class concerns.

90
gladimdim 1 day ago 0 replies      
On a DigitalOcean instance I cannot even use browserify+minify+babel6, because the npm process is killed by the host (it consumes > 512Mb of RAM). So I have to manually run browserify + babel, then minify. Still it produces 500kb of bundle.js :D
91
nv-vn 1 day ago 1 reply      
Here's a proposal (that I'm sure others have come up with in the past) -- why not create one big, community-backed "batteries included"-type module that would implement all the small, commonly used functions? This could combine all these ridiculously small libraries and greatly reduce the number of necessary dependencies for a package. Extending the standard library should be just that: standardized. If the entire community focused on one project like that, they could just as easily write the same code (but with smaller package.jsons, fewer require()s, and less time spent learning new libraries/searching for the right libraries). In fact, it would be great if something like that could be packaged as a standard node module so you'd get the same sort of quality assurance as you get with official projects.
92
lucb1e 1 day ago 0 replies      
Yeah look at all these modern languages that do so much for you, like .empty() on an array -- have we forgotten how to do simple comparisons?! You could just take a second to consider the properties of an empty array, namely it contains no items (.count() == 0).

My point being, if something is a completely reusable and basic feature, a dependency is totally justified. I remember a few years ago when I and all the devs I knew (which weren't many, I was maybe 17) had our own libraries to include in all the personal projects we made. They contained features we had to look up once and from then on just automated and imported, stuff like password hashing or input checking. This went out of style as single-programmer programs went out of style, but the little useful features are still there.

93
joantune 1 day ago 0 replies      
One can argue that modules are good, but depending blindly on newer versions like that was bad dependency management.

I say this because I strongly believe that reinventing the wheel is unnecessary and can bring more problems than not.

There are many examples, and I could come up with a made up one, but here's a very real bug that I debugged from another programmer not so long ago:

So, he came up with a JNI wrapper for SPSS's C library, applied it correctly, and was haunted for months by an unsolvable bug. The problem? He miswrote the final copy of the file, and sometimes some bytes were copied twice.

He tried to solve this problem for a long time (and eventually lived with it because SPSS was still resilient to this)

Is this a concrete example of a ridiculously short 'module'? Yes, but my logic still holds IMO.

94
morgante 1 day ago 0 replies      
This shows the beauty and success of npm in making it very easy and cheap to publish small modules.

It's not that we can't write a left pad function ourselves. It's that we might easily miss an edge case or make a mistake in doing so.

The author seems to be hung up on a preconceived idea of what a package "should" be without actually offering a compelling argument for why a single short function can't be a module.

Yes, every dependency you introduce is a liability. But so is every line of code you write. I'd much rather take the risk on a shared library which can be audited and battle tested by the entire community.

If a function is so easy to write, it's trivial to check out the module's code, verify it does the right thing, and then lock the dependency.

95
blablabla123 1 day ago 0 replies      
I guess this is the way one is supposed to use node, preventing one from writing non-DRY code. (Golang takes the exact opposite approach, with its own drawbacks of course.) However, when using React, I kind of trust that the maintainers don't include packages that require 12-star projects, and if they do, that they fork this stuff themselves.

BTW, isn't that a Facebook project, so aren't they supposed to use a CI? ;P

96
doctorstupid 1 day ago 0 replies      
Smart people created software that lowered the barriers of entry to making software. It was inevitable that not-so-smart people would eventually be writing software that others would build upon.
97
Artoemius 1 day ago 0 replies      
It's not that we have forgotten how to program. It's that everybody and their dog is now a programmer.

Professional racers don't need an automatic transmission, but it's quite helpful for an unprofessional driver.

98
smitherfield 1 day ago 0 replies      
Without addressing the wisdom or lack thereof of including dependencies for small functions, perhaps the problem of disappearing/changing small dependencies could be solved with an option along the lines of

 npm install <small-dependency> <destination> --save-inline
Which would just copy the dependency verbatim to <destination>. Maybe have a "<dependency> is 100kb. Are you sure you wish to copy the entire source to <destination> instead of using a 'require' reference? y/n" prompt for the inevitable silly types who'd do it with Angular.
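
No such flag exists, but the copy step itself is small. A sketch of what `--save-inline` might do under the hood for a single-file module (the helper name and the `./vendor` layout are made up for illustration):

 // hypothetical "save-inline": copy a single-file module out of
 // node_modules into the project so builds stop depending on the registry.
 var fs = require('fs');
 var path = require('path');

 function saveInline(dep, destDir) {
   var src = require.resolve(dep);             // e.g. node_modules/left-pad/index.js
   var dest = path.join(destDir, dep + '.js'); // assumes destDir already exists
   fs.writeFileSync(dest, fs.readFileSync(src));
   return dest;                                // then: require('./vendor/left-pad.js')
 }

 saveInline('left-pad', './vendor');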

99
nostrademons 1 day ago 1 reply      
Somehow, someone always forgets that engineering is about trade-offs, and so every few years we get an indignant series of articles about how stupid and ignorant today's programmers are and how we should all go back to the same processes that they called stupid and ignorant 4-5 years ago.

Relying on reimplementation, copy-paste, npm shrinkwrap, or various other ways of importing third-party code into your repository results in the following advantages:

1. You know exactly what goes into your product, and can audit everything for security, performance, or coding standards.

2. You often end up importing less, as third-party modules may have functionality that you don't need but other clients do.

3. You can modify the resulting code to add showstopper functionality, even if upstream doesn't want to.

4. You aren't subject to the whims of someone removing your dependency from the Internet or replacing it with a version that does something you don't want.

Relying on lots of little libraries installed via package manager gives you the following advantages:

1. You can easily install & try out modules that other people have written, letting you test out new features on users more quickly.

2. You can share code with other modules that have the same dependencies, often reducing the overall size of your system. This is important when there's a cost (eg. download size) to your total bundle.

3. You have less code for your engineers to read & maintain.

4. You can easily track licensing & contact information for your dependencies.

5. You automatically get any new features released by your upstream dependencies.

6. You automatically get security updates and performance enhancements released by your upstream dependencies.

The last is nothing to scoff at: imagine if the headline, instead of 'left-pad breaks the Internet!', had been a security vulnerability in left-pad which literally broke the Internet. Imagine how hard that would be to fix if everyone had copy/pasted the code or re-implemented it. This is not an academic scenario either: remember "Nearly all binary searches and mergesorts are broken", published by the guy who wrote the broken binary search implementation in the Java standard libraries?

http://googleresearch.blogspot.com/2006/06/extra-extra-read-...

Always copying your dependencies into your source tree is not the answer to this, no more than always relying on npm modules was the answer to updating your dependencies. They both have pluses and minuses, and if you really want to be a good programmer, you need to weigh both of them. For my projects, I tend to use whatever libraries I need when building them out (via npm, if possible), and then periodically audit the dependencies to make sure I'm still using them and they wouldn't be better off incorporated directly into the project. I wish more products did this, but I don't control what other programmers do.

100
Animats 1 day ago 0 replies      
It beats the alternative - pulling in some huge package with lots of little functions, all of which end up in the output. At least you're not loading huge amounts of unreachable code into the running system.

In languages with linkers, the better linkers would discard all unreachable functions. Then came DLLs. Use one function in a DLL/.so, and the whole thing has to be loaded. Go and Rust are usually statically linked, reflecting the fact that pulling in big DLLs was usually a lose. You rarely have two different programs on the same machine using the same DLLs, except for some standard low-level ones such as the C library.

101
jmount 22 hours ago 0 replies      
For more fun, my commentary on leftpad code: http://www.win-vector.com/blog/2016/03/more-on-npm-leftpad/
102
pklausler 22 hours ago 0 replies      
Insight from this story: If it is important that a thing be done correctly, it should not be made so easy to do that it will end up being done by people who shouldn't be allowed to do it.

I may have just invented a rule for choosing programming language and systems there.

103
smegel 1 day ago 1 reply      
> What concerns me here is that so many packages took on a dependency for a simple left padding string function, rather than taking 2 minutes to write such a basic function themselves.

Wait -- code reuse is bad now??

104
andrewingram 1 day ago 0 replies      
This is from a client-side web dev perspective:

I'm hoping that proper support for ES6 modules to enable tree-shaking bundle builds (in the short term), and HTTP2 with client support for modules (in the long term), will allow us to head towards a world where we converge around a handful of large utility libraries.

In theory, tiny dependencies were supposed to allow us to only include code we actually needed in our bundled code. But the reality is that everyone uses different tiny dependencies for solving the same problem. So you end up with enormous bundles made up of different solutions to the same tiny problems.

105
romualdr 1 day ago 0 replies      
The author missed the point about modularity in Javascript.

Small packages done right, well tested = maintainable, reusable, stable code.

The problem does NOT come from packages. The problem comes from un-publishing public packages and a centralized repository server.

I have a Java project with a lot of dependencies. Does that mean it's bad? No, but if the Maven repos close tomorrow, my project will not build either.

106
StreamBright 1 day ago 0 replies      
So funny; just a few weeks back I had an argument with somebody about writing simple functions vs. importing libs when you need less than 5% of the functionality. I am more convinced than ever that you are better off having the fewest external dependencies possible. Of course I would not want to rewrite a 2M+ LOC library with very complex code, but left-pad is not one of those use cases.
107
ycmbntrthrwaway 1 day ago 3 replies      
What would happen to all those NPM projects if GitHub were destroyed? I don't think it will close anytime soon, but let's say a meteor shower hits a GitHub data center or something along those lines.
108
progrocks9 1 day ago 0 replies      
I just removed the tilde and caret from all my dependencies (in my package.json) and maybe that's the way to go. Seal a local version of your packages and don't update unless it is completely needed. But I'm still worried about the fragility of the package environment.
109
alistproducer2 1 day ago 0 replies      
On the one hand, I love how the JS community is constantly coming up with new [versions of old] things. Even though most of it is creative waste, it's still creative, and out of that churn we get some pretty awesome apps and tools.

On the other hand, there are a lot of bad practices disguised as "simple" and "efficient." Using NPM for one-line functions is a great example of this.

110
spotman 1 day ago 0 replies      
Every single dependency you import should make you nervous. You should look at it like hiring someone. You are letting go of some responsibility. You are letting go of some control. The end result might be better than you could do in-house, but it might not. Either way, you are hoping that it's better, and giving up control.

Save these for the things you can not do in house, like NaCL. Don't write that yourself.

But string padding, sorry. Any developer worth their salt would laugh at adding a dependency for this. It's irresponsible, and comes across as amateur hour.

This is a simple case of optimizing for responsibility. The inexperienced programmer does not know how to do this, because they have not spent enough time being responsible, and having to deal with the fallout of managing responsibility poorly.

An experienced programmer carefully manages responsibility. A simple function, that is easy to understand, easy to test, and easy to reason about, is something that makes more sense to either pull all the way into your codebase, or write yourself.

Years of doing this means that managing dependencies should never be just slap it into your packaging system and start using the function. If you are on the line for making the wheels turn for a large scale platform that is directly connected to a monetary transaction of some nature, you will quickly find yourself preferring to remain responsible for everything that you possibly can control. There is certainly enough that you can't control to keep you on your toes.

111
city41 1 day ago 0 replies      
I'd just like to point out that React does not have a dependency on left-pad. React has no dependencies at all[0]. devDependencies have no impact on third party consumers.

[0] https://github.com/facebook/react/blob/master/package.json

112
salehd 1 day ago 0 replies      
Well, it has always been like this. In the early 2000s most developers I knew would simply copy-paste code from all over the Internet.

Nowadays you use NPM search instead of Google search

The fact is that lazy programmers are lazy. The methods change but the principle remains the same. In the 90s people typed in code from magazines and books.

113
jammycakes 1 day ago 1 reply      
Here's a quick rule of thumb. If it's the kind of function you would ask a candidate to write at the start of a job interview, you shouldn't be importing a separate module to do it.
114
j-diaz 1 day ago 1 reply      
Maybe some people just want to claim they have a published module. Making them feel some sort of achievement or glory.
115
collinmanderson 17 hours ago 0 replies      
I'm generally not a big fan of IPFS, but IPFS seems like the perfect solution to this problem.
116
dustingetz 1 day ago 0 replies      
If the code is open source, what difference does it make if the code is in my module or someone else's?
117
polynomial 1 day ago 0 replies      
What actually happens when you try to update a required package, but it is gone from upstream? Is there no way to keep the existing package you already have?
118
TickleSteve 1 day ago 0 replies      
"Small modules are easy to reason about"

No... "appropriately sized modules are easy to reason about"

In this case... "Appropriate" has gone out of the window!

119
niklabh 19 hours ago 0 replies      
npm had the solution one year ago (namespacing): https://docs.npmjs.com/getting-started/scoped-packages. If only developers would embrace the "change".
120
z3t4 1 day ago 0 replies      

 function foo(arr) {
   var str = "";
   var leftpad = require("leftpad");
   for (var i = 0; i < arr.length; i++)
     str += leftpad(arr[i]);
   return str;
 }

121
ajuc 1 day ago 0 replies      
What's wrong with reusing small fragments of code?

The usual complaints about many dependencies are mostly invalid here (it's not bloat if you only depend on single functions you actually use).

122
sordina 1 day ago 0 replies      
Check out the work on Morte for a more reasoned approach to taking micro-modularization to its natural (or insane) conclusion.
123
digitalpacman 1 day ago 0 replies      
.... isn't this a problem with node, and not developers? Wouldn't you say this is a symptom of a systemic problem of the framework that it is lacking common features that everyone needs?
124
IvanK_net 1 day ago 0 replies      
When I am developing some large project for a long time, sometimes I find that I have reimplemented the same function in several places (a few years between implementations).
125
losvedir 1 day ago 0 replies      
There's nuance to the discussion that both sides are missing. People argue forcefully whether these small modules are good or bad, but I'm not seeing much evidence that they understand the other side.

First: why small modules are bad. Lots of dependencies complicate your build, and you end up with the dreaded diamond dependency issue. Failures in a dependency become more likely to affect you. It gets you in the habit of using prebuilt modules even if maybe it's not quite what you need and it would have been better to write yourself. With `npm` specifically, we've seen how its mutability can break the build, though that's about `npm` and not the idea necessarily.

I think most software developers' gut responses are that something is wrong and crazy in the npm ecosystem.

That said, there are benefits that this blog post and others aren't mentioning, related to the javascript situation specifically.

The first one is that javascript is a surprisingly difficult language to get right. Sure, the final solution is only a few lines, but which lines are hard. You have to navigate the minefield that is V8 in nodejs, V8 in Chrome, SpiderMonkey, Chakra, etc. I've had code work in Chrome before but blow up in IE, and it's really hard to track down and test.

The comments in the blog post are illustrative:

One line of code package:

 return toString.call(arr) == '[object Array]';
Crazy right? And my first stab probably wouldn't have been to implement it that way. Why not:

 (testvar.constructor === Array)
that a commenter suggested, which should be faster? Well another commenter said:

 The constructor comparison will fail if the array comes from a different context (window).
I've run into issues before with cross-browser compatibility stuff, and it's frustrating and hard to test. If there's some de facto standard package that implements it for you, hopefully the community can iron out edge cases.
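
To make the trade-off concrete, here is a minimal sketch of the cross-realm-safe check being discussed, with the native ES5 method as the fast path:

 // Array.isArray (ES5) handles arrays from other windows/iframes natively;
 // the [[Class]] string check is the classic fallback for older engines.
 var isArray = Array.isArray || function (arr) {
   return Object.prototype.toString.call(arr) === '[object Array]';
 };

 isArray([1, 2, 3]);     // true, even for an array from another frame
 isArray({ length: 3 }); // false: array-likes don't count
 // arr.constructor === Array fails cross-realm because each window
 // has its own Array constructor.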

The other thing that people don't bring up, is that there's not much JS standard library, and in the browser context you have to send all your code to the front end.

So maybe you write these 11 lines yourself, and then another package writes these 11 lines, and another... it adds up. But if everyone uses the same package, the code only gets sent once and they all share it.

Lastly, people talk about how `sin` should be a part of a "trigonometry" package and not by itself. Well, again you're faced with sending a bunch of unnecessary code to the frontend. With webpack2 and tree shaking, or e.g. Google's Closure compiler, it can strip out dead code and so this issue will go away in the future, but we're not quite there yet. So package authors still bundle all these things separately.

So pros and cons.

126
benedictchen 1 day ago 0 replies      
I find it interesting that blaming code reuse is a valid thing, but blaming a lack of test coverage and CI build testing is not.

The problem is the lack of a test culture.

127
grav 1 day ago 0 replies      
Isn't the problem that there is a lack of a good standard library in Javascript?
128
Nr47 1 day ago 0 replies      
when you go out for food or order in, do you ask yourself "have I forgotten how to cook?"

Sure, in some ways NPM has packages that don't deserve the title of a package, but isn't the convenience of not having to reinvent every piece of code worth it?

129
qaq 1 day ago 0 replies      
It's up to you how to go about it. On the server side you can, for example, go with Express or Hapi (minimal external dependencies).
130
bliti 1 day ago 1 reply      
It suddenly feels like the 90s all over again.
131
iamleppert 1 day ago 0 replies      
I couldn't agree more. I've been using the new ES6 style module syntax for a few days now because a co-worker forced me to, so he would use my library.

I'm not convinced it's worth it compared to the simplicity of commonjs and module.exports. You have to pull in babel, which has over 40k files, to do all this.

Why are people destroying the beautiful simplicity that is javascript? Can those people please go back to java?

132
z3t4 1 day ago 1 reply      
This is why one guy can now compete with, say, Google or Microsoft: because that guy uses code written and managed by more engineers than Google and Microsoft have combined. Instead of paying hundreds of dollars to said companies, you can just npm install "what you need".
133
lintiwen 1 day ago 2 replies      
I see two kinds of programmers here.

Good programmers understand really well the risks of making your system depend on something you don't control; they know that keeping system complexity low is like a good investment that makes your life easier later (low maintenance costs).

Bad programmers stack up technical debt, such as including unnecessary dependencies, until the system no longer works.

134
robodale 1 day ago 0 replies      
I'm content and happy not knowing what the fuck this is all about. Pad left? Are you kidding me? If your tower of Babel fell because of your reliance on some script kiddie's toy project, I am happy and content knowing you get what you deserve. Law of leaky abstractions, motherfucker. Spolsky...do you read it?
135
fieryeagle 1 day ago 0 replies      
<rant>The problem here is JS developers have baked in the notion of having NPM as the alternative to Google + StackOverflow + own thoughts. It's really a no-brainer (literally) to just slap in another package rather than bother thinking about what a piece of code does, its edge cases and pitfalls. Move fast and break things, right?

Sure, there was some argument about the Unix philosophy: a small module doing one thing and doing it very well. But did anyone bother considering the quality of most NPM packages? Quality is not proven by a passing Travis CI build or extensive community testing and feedback. Not at all. Look at the packages on apt-get. They are modular and robust. They do what they were supposed to do.

Now take a long hard look at the state of NPM. What do we have? People clamoring for reusability and whatnot. Most of them don't even know what they're talking about, just reciting the latest hip statement from the Internet. Being mature about development means accountability for what you do, not pushing shit around that you don't even have knowledge of. As a self-proclaimed polyglot, I love JavaScript as a language but not the ecosystem. It's like watching a dog chasing its tail:

- Endless loops of discussion that help stroke the egos but not improve anything else.

- Craps for resume/repository padding, not for actual developers to use.

- Bandwagon mentality that just pushes the latest fad along, and the herd towards the cliff.

The notion that JS developers are kids playing grown-up has been reinforced by this NPM incident. If we want to discard that notion, we need to be more mature developers. It's that simple. Here's what I think we could do:

- Have a clear idea of what dependencies you need. Browser, IDE, terminal etc. are dependencies. Basic type checking is not.

- Be better craftsmen. Roll and maintain your own toolboxes. Only share a working hammer, not a broken nail or a wood chip.

- Note that for each package you publish, thousands more hours will be spent on learning, adapting, using and reporting mistakes. Collectively, the current community wastes so much time finding the right things to use. Often we learn much more by playing with code, even posting on StackOverflow. That's hands-on; `npm i` is not.

- Own the code better. The idea that teams like the Babel and React devs, with all their brilliant developers, choose to put their eggs in a private corp's whims is just scary. You can't hope to build robust software while playing Jenga.

136
spajus 1 day ago 0 replies      
That's what you get when you let JavaScript into the server side.
137
MattHeard 1 day ago 0 replies      
> Even if correct, is it the most optimal solution possible?

"most optimal"?

138
return0 1 day ago 0 replies      
Packaging is the new programming.
139
justin_vanw 1 day ago 0 replies      
Forgotten? Most people who develop on Node.js never knew...
140
sorpaas 1 day ago 0 replies      
IMHO, this is all due to lack of a standard library in the Javascript world.
141
zongitsrinzler 1 day ago 0 replies      
It's not about programming, it's about fast progression.
142
thrillgore 1 day ago 1 reply      
I started out really thinking Kik was wrong to sue this guy but like with all things, the longer this goes on the less sympathetic I grow.

Write your own goddamn utility classes, people. Or learn how to suspend package releases, include them in your projects, and smoke test your releases.

143
memracom 1 day ago 0 replies      
Seems that we need a tool to crawl all the repos that a developer owns and report lines of code in each package that they wrote. If there are lots of small packages, then this is not the kind of person you want to hire, except maybe to do PR.

Real developers do not do stuff like this.

144
innocentoldguy 1 day ago 0 replies      
This problem speaks volumes about the myriad shortcomings of the JavaScript standard library, in my opinion.
145
spullara 1 day ago 0 replies      
It seems to me that the JavaScript VMs should get together and start including a standard library. That would also give the benefit that those functions would be highly optimized. They can keep it small at first and focus on number and string manipulation.
146
vdnkh 1 day ago 0 replies      
I don't have an issue with modules or the Unix philosophy, I have an issue with using NPM for all these tiny modules. Hint: you can make your own modules, store them within the project, and add them to your package.json to be required anywhere.
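
For instance, npm's local path ("file:") specifier lets package.json point at a module that lives inside the repo; the names below are hypothetical:

 // package.json "dependencies" entry, shown as a JS literal:
 var dependencies = {
   "my-utils": "file:./lib/my-utils"  // hypothetical in-repo package
 };

 // after `npm install`, any file in the project can then do:
 // var utils = require('my-utils');
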
147
danielrhodes 1 day ago 0 replies      
It does seem pretty insane, but how many of these are polyfills?
148
democracy 1 day ago 0 replies      
Re-usability is a good concept but can be overused easily.
149
ankurdhama 1 day ago 0 replies      
What's next? A hello-world package, and Node tutorials about the hello-world program telling you to include this package as a dependency and call the exported function.
150
plugnburn 1 day ago 0 replies      
So sad yet so true.

We haven't. React developers probably have.

151
po1nter 1 day ago 1 reply      
Everyone keeps mentioning the lack of a standard library for JavaScript as an excuse for this shit show. IMO this is just a futile attempt to mask incompetence.
152
forgotmypassw 1 day ago 0 replies      
JavaScript was a mistake.
153
tonetheman 1 day ago 0 replies      
All the comments here have answered the question quite well. Yes we have forgotten how to program and we want to argue about it.
154
fiatjaf 1 day ago 0 replies      
No saner alternative presented.
155
dschiptsov 1 day ago 0 replies      
I remember how I have been downvoted to oblivion for comparing JavaScript madness with Java EE "packer's" paradise years ago.

The first essay of The Programmer's Stone is as relevant as it has ever been.

Actually, it is a common pattern. When some activity becomes popular due to a very low barrier to entry, it ends up in this kind of mess. It seems like nowadays everyone is either a programmer or a scientist and researcher.

This is the quality of their software and research.

There has been a reason why good schools taught principles (algorithms and data structures), not particulars (objects and classes). But since MIT and Berkeley dropped Scheme-based courses in favor of "pragmatic" Python-based courses (thank god not JavaScript), we are heading for a disaster. The Java madness taught us nothing.

History is full of examples where an assault by mediocrity ruined whole branches of philosophy, arts and crafts. Instead we have fast food, mass media, social media and now this mass coding, which combines the worst of mass and social.

Just try to compare things like Smalltalk, Plan 9, R4RS Scheme, or the Zeta Lisp of Symbolics with this stuff.

156
plugnburn 18 hours ago 0 replies      
By the way, ES6 syntax (works in modern foxes and chromes):

 leftpad = (str, len, pd = ' ') =>
   Array(len > str.length ? 1 + len - str.length : 0).join(pd) + str

WTF are you talking about? Making this into a module?!

157
plugnburn 18 hours ago 0 replies      
Have to put a little clarity over my "so sad yet so true".

When the developers of such a serious library as React start to depend on a third-party one-function module made by some Dick-from-a-mountain (a Russian idiom for a random person who did nothing significant but tries to show off by all possible means), that means React's developers are even more to blame than that Dick-from-a-mountain himself.

If you make any really popular piece of software, you absolutely must have a failover plan. No excuses for not having it.

But what's even sadder is that this issue has spawned a new wave of pseudo-elitist attacks on the entire JS dev community. Calm down guys, things like this could have happened in any language that has a widely-used centralized package system (Perl's CPAN, Python's pip, Ruby's gems etc).

Let me repeat that again: languages don't make software bad, people do. Just don't let such Dicks-from-a-mountain rule over your own modules with elementary stuff like leftpad, and you'll be safe.

158
Shivetya 1 day ago 0 replies      
Hell, the Java programmers I work with seem to never use each other's simple functions and instead recreate the wheel every single time.

As for the issue with such a short piece of code being reused by many: why score on number of lines? Whether it works and is a useful function is more important to me. I am not familiar with bundling within the usage the article covers, but we tend to bundle like functions together and the compiler drops unused ones.

159
serge2k 1 day ago 0 replies      
> if you cannot write a left-pad, is-positive-integer, or isArray function in 5 minutes flat (including the time you spend Googling), then you dont actually know how to code.

I'd probably get 2 of those wrong in some weird way, but I blame javascript. I mean without Google of course.

160
venomsnake 1 day ago 1 reply      

 module.exports = leftpad;

 function leftpad (str, len, ch) {
   str = String(str);
   var i = -1;
   if (!ch && ch !== 0) ch = ' ';
   len = len - str.length;
   while (++i < len) {
     str = ch + str;
   }
   return str;
 }

Isn't that the least efficient way to implement that function? Prepending to a string has always been a very expensive operation.

Calculating the needed length, building the padding with repeat, and just concatenating 2 strings would be faster.
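
A sketch of that suggested alternative, assuming `String.prototype.repeat` (ES6) is available:

 // build the padding once instead of prepending in a loop:
 function leftpad(str, len, ch) {
   str = String(str);
   ch = (!ch && ch !== 0) ? ' ' : String(ch);
   var missing = len - str.length;
   return missing > 0 ? ch.repeat(missing) + str : str;
 }

 leftpad('foo', 5); // '  foo'
 leftpad(1, 2, 0);  // '01'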

161
dschiptsov 1 day ago 1 reply      
This is, finally, the "Fractal Of Bad Design" moment for JavaScript.
162
lifeisstillgood 1 day ago 1 reply      
Surely there is a need to standardise on a set of well-maintained "batteries included" packages.
163
ocdtrekkie 1 day ago 2 replies      
In my current programming project, my goal has been to do as much in-app as possible. Does that mean I'm more likely to have bugs in my own code? Yes. But I've learned a ton doing it, and I know that my code doesn't have a giant monster of bloat hidden behind some random dependency somewhere. And yeah, that means when I wanted to handle email, I learned a heck of a lot about how programs handle email. Did it take more time? Yup. Education is time well spent.

I've got two dependencies besides the basic framework my project is built on: A scheduling engine, and a database interface. I eventually hope to factor out both.

164
frozenport 1 day ago 1 reply      
I don't see anybody criticizing 'std::min()'. Perhaps what we really need is a 'std' for js?
165
facepalm 1 day ago 1 reply      
What I don't get about left-pad: shouldn't they have used Array.join for better performance?
166
shitgoose 23 hours ago 0 replies      
here is one of the comments on the original post:

"Immutability at the centralized authority level and more decentralization of package distribution is the solution, not 'write more functions yourself'."

what the fuck does that mean?? they just don't give up, do they... Fucking retards.

167
dang 14 hours ago 1 reply      
This comment as well as https://news.ycombinator.com/item?id=11354704 break the HN guidelines. Please don't post uncivil comments regardless of how wrong or ignorant someone seems.

We detached this subthread from https://news.ycombinator.com/item?id=11351657 and marked it off-topic.

168
CodeOtter 1 day ago 0 replies      
npm install goldmansachs
169
jsprogrammer 1 day ago 0 replies      
This post is too dismissive and confuses some topics.

"Packages", "modules", "functions", and other words can mean different things within different contexts. Well-known and tested functions are useful. Putting them in a "package" is just a consequence of how node/npm work. There should certainly be a much, much better implementation of code sharing, but sharing and using well-known, singular functions should be exactly what we are going for.

170
tn13 1 day ago 0 replies      
I don't think it is bad at all. For a lot of projects, saving even a minute matters.
171
c3t0 1 day ago 0 replies      
Functions Are Not Packages

Cracked me up :D

172
irascible 1 day ago 0 replies      
Ahh job security :D
174
anonymousguy 1 day ago 0 replies      
This is acceptable because web developers expect a certain level of coddling. Many developers are quick to defend this insanity because they simply cannot see their own level of entitlement.
175
tiglionabbit 1 day ago 1 reply      
This is just more evidence that the unit of code is a function, not a module.

Eventually, given a pure enough language, every "package" could contain only a single function, and every function in a project could be published as an independently reusable unit.

2
I've Just Liberated My Modules medium.com
1543 points by chejazi  2 days ago   789 comments top 117
1
callmevlad 2 days ago 17 replies      
The fact that this is possible with NPM seems really dangerous. The author unpublished (erm, "liberated") over 250 NPM modules, making those global names (e.g. "map", "alert", "iframe", "subscription", etc) available for anyone to register and replace with any code they wish.

These libs are now baked into various package.json configuration files (some with tens of thousands of installs per month; "left-pad" has 2.5M/month), meaning a malicious actor could publish a new patch version bump (for every major and minor version combination) of these libs and ship whatever they want in future npm builds. Because most package.json configs use the "^1.0.1" caret convention (and npm --save defaults to this mode), the vast majority of future installs could grab the malicious version.

@seldo Is there a plan to address this? If I'm understanding this right, it seems pretty scary :|

[1] https://medium.com/@azerbike/i-ve-just-liberated-my-modules-...
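
To make the attack surface concrete, here is how caret ranges resolve, demonstrated with the `semver` package (the same library npm itself uses for version matching):

 var semver = require('semver');

 // "^1.0.1" accepts any later 1.x.y, including a malicious patch bump:
 semver.satisfies('1.0.2', '^1.0.1'); // true: patch bumps match
 semver.satisfies('1.9.0', '^1.0.1'); // true: minor bumps match
 semver.satisfies('2.0.0', '^1.0.1'); // false: major bumps do not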

2
nordsieck 2 days ago 6 replies      
One interesting thing to me is that it is pretty clear that the kik lawyers dramatically over-enforced their trademark.

For those who don't know, the purpose of trademarks is to prevent customer confusion; essentially we don't want people to be able to sell cheap knock-offs of someone else's thing without the general public being able to easily distinguish between them. In practical terms, trademarks are "scoped" by their "goods and services" declarations.

For example, Apple the device manufacturer[1] and Apple the record label[2] could both be trademarked because they had non-overlapping goods and services declarations... until iTunes started selling music[3].

If you look at kik's trademark application[4], you can clearly see that the trademark is limited to chat/media consumer applications, which makes this a pretty obvious over-enforcement.

[1] http://apple.com

[2] http://applerecords.com

[3] https://en.wikipedia.org/wiki/Apple_Corps_v_Apple_Computer

[4] https://trademarks.justia.com/858/93/kik-85893307.html

3
larkinrichards 2 days ago 12 replies      
I applaud this action, and while I'd like to point the finger at NPM, there's no other real method to fix historical package versions that depend on this.

It is worth pointing to the silly state of NPM packages: Who decided that an external dependency was necessary for a module that is 17 lines of code?

 module.exports = leftpad;

 function leftpad (str, len, ch) {
   str = String(str);
   var i = -1;
   if (!ch && ch !== 0) ch = ' ';
   len = len - str.length;
   while (++i < len) {
     str = ch + str;
   }
   return str;
 }

Developers: fewer dependencies is better, especially when they're so simple!

You know what's also awesome? The caret semver specifier[1]. You could install a new, broken version of a dependency doing that-- especially when other packages using peerDependencies rely on specific versions and you've used a caret semver specifier.

[1] https://github.com/lydell/line-numbers/pull/3/files
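
One defence, for what it's worth, is pinning exact versions so that no new release can arrive silently. A sketch, shown as a JS literal (the second package's version is hypothetical, for illustration):

 // package.json "dependencies" with exact pins instead of "^" ranges:
 var dependencies = {
   "left-pad": "0.0.3",     // only ever installs 0.0.3
   "line-numbers": "1.0.2"  // hypothetical pinned version
 };
 // combined with `npm shrinkwrap`, transitive deps get locked too.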

4
smsm42 2 days ago 4 replies      
Reading some of the comments reminds me old tale about a young man, that every morning on his way to work passed by a beggar and gave him a coin (that was back when coins actually had some value). One morning though the beggar notices the coin is smaller than usual, and he asks:

- Why you gave me a different coin today?

and the young man says:

- I got married and now I'm starting a family. I need more money, so I cannot give you as much anymore.

And the beggar cries out:

- People, look at this putz, he got married, and now I have to feed his family?!

I think the fact that we get so many awesome things for free is unbelievably lucky. I mean, not only we work in the one of the more generously paid jobs, we also get a lot of the tools we need for free! How cool is that? But some people think that if they are given those awesome things for free, they must deserve it and whoever gives them owes them forever. That's not the case. Yes, it is annoying to find somebody who contributed before does not want to do it anymore. It is mildly inconvenient and it can be improved. But let's not lose the perspective - the author does not owe us or npm continued support. It is sad he does not want to do it anymore, but that's what open source is about - people can take it over, and it happened within a single day. Such resilience is something to be proud of, not something to complain about.

5
camwest 2 days ago 4 replies      
FYI I'm the one who republished left-pad after it was unpublished.

I think of it as similar to letting a domain name expire. The original author removed the code, and I forked it and published a new version under the same package name.

The main issue was there were so many hard coded dependencies to 0.0.3 so I asked npm support if they could allow me to re-publish that version and they complied since I was now the maintainer of that package.

6
praxulus 2 days ago 2 replies      
This is a surprisingly effective protest action. It got the attention of an incredible number of people very quickly, and the damage is mostly limited to wasting the time of a bunch of build cops.

I don't have much of an opinion on his actual reasons for protesting, but I do think it was a pretty cool protest.

7
felixrieseberg 2 days ago 6 replies      
Azer has contributed awesome modules to the community, but such a move _obviously_ messes with a bunch of people who trusted not npm, but Azer. Npm works fine. There might be issues with it, but the reason builds are failing right now is that he decided to unpublish all of them - in a move that feels very kneejerky, despite him claiming it's the opposite.

If this had been actually in the interest of the community (because he thinks that npm isn't acting in our interest), he'd give people a fair warning. I could have lived with a "Hey, this was my experience, it sucked, I'll unpublish things in 30 days. Please update your dependencies." We know how to deprecate things gracefully.

8
jimjimjim 2 days ago 2 replies      
I am obviously an old, fossilized, ancient developer. This situation seems like insanity.

Not the unpublishing part; the part where the thing that you require to sell/publish/do your job isn't under your control or isn't stored within your organization.

Am I wrong in thinking that you should just have a local copy of all of your source code dependencies? Would it really take that much longer?

9
chvid 2 days ago 3 replies      
In case anyone is wondering what was in the now broken dependency - here is the source code in full:

 module.exports = leftpad;

 function leftpad (str, len, ch) {
   str = String(str);
   var i = -1;
   if (!ch && ch !== 0) ch = ' ';
   len = len - str.length;
   while (++i < len) {
     str = ch + str;
   }
   return str;
 }

https://github.com/azer/left-pad/blob/master/index.js

10
cammsaul 2 days ago 3 replies      
Update: NPM takes "unprecedented action [...] given the severity and widespread nature of the breakage" and un-un-publishes left-pad

https://twitter.com/seldo/status/712414400808755200

11
jerf 2 days ago 5 replies      
This is why you should vendor it. What is "it"? All of it, whatever it may be. You should be able to build your systems without an internet connection to the outside world.

I say this with no reference to particulars of your language or runtime or environment or anything else. This is merely a specific example of something that could happen to a lot of people, in a lot of languages. It's just a basic rule of professional software development.

12
drinchev 2 days ago 1 reply      
Sadly there is a user @nj48, who already published empty modules and took the names [1].

Is this a joke or something coordinated with the community?

[1] https://www.npmjs.com/~nj48

EDIT : The hijacked modules look suspicious. http://www.drinchev.com/blog/alert-npm-modules-hijacked/

13
tdicola 2 days ago 1 reply      
I've never felt good any time I have to use node modules and see this gigantic stream of dependencies come flying down. It's even more painful when you need to assemble license information for your software and crawl through _every single dependency and all of their dependencies_ to find their licenses, etc. to check they are OK to use in your software. Just look at the View License info in the Atom text editor some time for a truly insane wall of text (over 12,000 lines!!). IMHO the entire node / NPM system is seriously flawed with so many tiny dependencies for trivial stuff.
14
jwiley 2 days ago 2 replies      
I think that unfortunately this was a foregone conclusion. Copyright law, like most other laws in our society, favors corporate interests.

I support his stand on principle, however. Azer is a talented developer and has an impressive life story, and has certainly contributed more to society than a social network well known for invading children's privacy.

https://medium.com/@azerbike/i-owe-my-career-to-an-iraqi-imm...

https://en.wikipedia.org/wiki/Kik_Messenger#Controversies

15
x0ner 2 days ago 3 replies      
Not sure I follow this completely...

You start a project with the same name as a company which owns the registered brand, and you are surprised when some 3rd party complies with legal suggestions to make an adjustment?

Seems kind of silly to expect that NPM would want to fight for your project name when you didn't seem to do your own due diligence when picking it. Also, it's a bit backwards to go remove all your modules as well, thereby breaking builds.

16
joeandaverde 2 days ago 1 reply      
Here's a highly downloaded 11 line module with lots of dependents.

https://www.npmjs.com/package/escape-string-regexp

I stopped searching at 1.

I've certainly benefitted from the vast ecosystem of npm. I greatly appreciate the work that goes into making this ecosystem what it is. However, I think we need to be a bit more critical when it comes to acquiring dependencies. Especially authors of very prominent packages.

Fun fact: one of my projects (a web api) depends on over 700 unique name/version modules.

Fellow programmers. This is embarrassing.

17
nchelluri 2 days ago 0 replies      
Wow, very interesting post for me. Earlier today, at work, we ran into an issue where `npm install` was failing because the `shuffle-array` module wasn't found. Investigation showed that the cause was that it was unpublished today. We found that this was a required dependency of the `match` module, and this was in our dependency list in `package.json`.

We investigated and found out that it had been erroneously committed; it's actually a memory game and has absolutely no place in our webservice project. :) (Mistakes happen; dependency audits can be worthwhile!)

Now, some hours later, I found your post on HackerNews and was really shocked to see, hey, this is exactly why it was unpublished. Quite a chain of events. Never thought I'd figure out why the modules were unpublished, but now I get it! Thanks for the explanation.

[crossposted from the medium article]

18
aioprisan 2 days ago 2 replies      
If NPM wants to stay relevant and a serious contender, they need clearer policies for IP issues. In this case, the companies weren't even in the same space. Republishing the package of someone who has chosen to unpublish and leave your platform is akin to Facebook resurrecting a profile because it had a lot of friends and the social-circle ripple effects on feed quality for other users would be too high, so they chose to reactivate the account AGAINST the author's wishes. WHAT?!? We need an open source NPM alternative, yesterday.
19
KajMagnus 1 day ago 0 replies      
Does this mean that I can no longer safely run `npm update`, or ask anyone to download my Node.js project and tell them to run `npm install`? Because the npm repo has in effect been compromised and is unsafe to use, until further notice?

That's what I'm assuming right now anyway. I'm not going to upgrade any Node.js dependencies or run `npm update` or tell anyone to run `npm install`.

If you look at the list of liberated libraries ( https://gist.githubusercontent.com/azer/db27417ee84b5f34a6ea... ) it's "impossible" for me to know which ones of all these libs I use indirectly via some other libraries, and ...

...Elsewhere in this discussion: (https://news.ycombinator.com/item?id=11343297)

> > Is there a plan to address this?

> Too late. Every package name on the list has been claimed already by a randomer with unknnown intentions.

Sounds dangerous to me. ... And I wish there was some way to get notified, when this issue has been fixed somehow.

20
zwetan 2 days ago 1 reply      
funny thing, but assuming that kik is related to kik.com

if you look here http://dev.kik.com/build/, they promote their own server eg. "Our open source web server Zerver can help serve your cache manifest properly, as well as doing other speed boosting stuff like automatic style inlining."

this Zerver is on github and build with npm

https://github.com/jairajs89/zerver/blob/master/package.json

I did not run the build but I'm pretty sure that now their server is not building anymore as it depends on babel

call that irony ;) ?

21
overgard 2 days ago 6 replies      
I think it's amusing to see this from the perspective of the company. Some guy uses your trademark without your permission so you tell him to knock it off. He refuses, so you go around him, and so he protests... by fucking over all of his users. In a dispute that doesn't involve them. And people are celebrating this.
22
adamkittelson 2 days ago 0 replies      
About a year ago I tried to unpublish a version of a library I'd pushed to Elixir's hex.pm package manager but the API rejected it. Turns out they only allow you to revert publishing for an hour after you push.

It was a little inconvenient at the time but in light of this I can very clearly see the wisdom of that decision.

23
chejazi 2 days ago 0 replies      
This broke a number of builds that depended on the (previously) published modules; here's a GitHub issue showcasing that: https://github.com/azer/left-pad/issues/4
24
al2o3cr 2 days ago 1 reply      
"eventually create a truly free alternative for NPM."

Which will either comply with copyright laws, or get blasted off the 'netz and break everyone's build...

The rules are messed up, but dramatic gestures and abstract hopes that "free software will save us" aren't going to fix them.

25
dham 2 days ago 0 replies      
What if Kik uses Node and they broke their own builds inadvertently by enforcing their trademark. 0_0
26
dham 2 days ago 0 replies      
Small modules they say. Small standard lib is ok they say. Just going to point out that in a lot of other languages, string padding is just built into the standard lib.
27
kikcomms 1 day ago 1 reply      
Hi everyone, please read this explanation from Kik's head of messenger about how this played out: https://medium.com/@mproberts/a-discussion-about-the-breakin...

We're sorry for our part in creating the impression that this was anything more than a polite request to use the Kik package name for an upcoming open source project.

28
mschuster91 2 days ago 3 replies      
brouhaha, this is why you should not put node_modules into .gitignore (same for PHP's composer.lock and vendor/ folder).

To be honest, I have waited for something like this to happen so that people finally wake up and realize how deeply and truly compromised the JS ecosystem really is. 11 SLOC not available any more and all over the internet builds are breaking etc.?!

And please, why isn't essential stuff like this in the JS standard string library?

29
jonathankoren 2 days ago 1 reply      
My god! It's full of attack vectors! https://github.com/substack/provinces/issues/20
30
vulpes 2 days ago 1 reply      
Here's [1] a list of all modules that were liberated. Some serious land-grab opportunities there

[1]: https://gist.github.com/azer/db27417ee84b5f34a6ea

31
tobltobs 2 days ago 0 replies      
Even if those trademarks did cover a tool like kik, it is completely braindead to grant trademarks on three-letter words and enforce them against software package names.

What are we supposed to type for package names in 10 years. 'abshwjais_kik', or will it be hipster to use unicode like in ''.

32
fiatjaf 2 days ago 3 replies      
Why isn't GitHub the source of all node packages? npm supports it very nicely.

I mean: why don't people write `npm install user/repo --save` instead of `npm install package --save` every time already?

33
cyphar 2 days ago 2 replies      
Seems odd that a patent lawyer is being involved in a trademark dispute. Also, given the fact that he didn't make any money off it, I severely doubt that it would ever go to court.
34
larrik 1 day ago 0 replies      
I don't think Kik is the bad guy here. This npm module was rather new (<6 months?), while Kik Messenger has been around for years, and is VERY popular with the young crowd. They are both software. It would be like the author naming his module 'imessage' or 'spotify', except this is with a company that isn't as visible to the HN crowd.

I personally think him not knowing Kik existed was odd, and not googling the name at all even odder. Even still, I think Kik's response and npm's response were perfectly valid.

Looking at the voting of the comments here makes me sad for what has become of the HN community.

35
lerpa 2 days ago 0 replies      
Good for him, if that platform isn't working go somewhere else.

The major problem here is relying on a central authority like NPM in the first place.

36
seldo 2 days ago 2 replies      
The package author decided to unpublish the package. A new author has now stepped in and re-published the package (yay open source!) and deps are fixed.
37
lukegt 2 days ago 1 reply      
I like how npm even encourages you to create packages in place of the "liberated" ones when you try to visit their now missing pages:

https://www.npmjs.com/package/abril-fatface

 abril-fatface will be yours. Oh yes, abril-fatface will be yours.

 mkdir abril-fatface
 cd abril-fatface
 npm init
 # work your magic
 npm publish

38
julie1 2 days ago 1 reply      
The number of coders complaining about an author exercising the basics of intellectual property rights is too high.

1) All coders should understand authors' rights, be the code free or closed;

2) There is no excuse for someone whose value is based on creativity to ignore how IP works (the good and the bad parts), because our comfortable incomes come from the protection these rights give to our work;

3) If your code is broken over 11 sloc, maybe you depend too much on others' work and have little value yourselves.

Benevolent persons sharing their code in their free time owe you nothing.

Repay authors whose code you use.

Buy closed source software, and at least respect the free software authors. It costs you nothing already.

39
sklivvz1971 2 days ago 0 replies      
The problem here is that NPM is a private company in an institutional role.

You will always have some very common dependencies which, if brought down or altered, could compromise a lot of projects.

The problem is that npm has to act like an institution, not like a private company.

40
tobltobs 2 days ago 0 replies      
My congrats and respect for his decision. Through actions like this companies might understand that the current trademark (and patent) law is only benefiting lawyers.

And wouldn't it be wonderful if, as a result of this, the builds for Kik's Pedo API were broken?

41
_it_me 2 days ago 0 replies      
Lol that satire flipped bit site even caught it http://www.theflippedbit.io/2016/03/23/developer-outraged-as...
42
nikolay 2 days ago 1 reply      
IED [0] + IPFS [1] + GPG looks like a dream come true.

Note: IED could be much faster than NPM installer due to parallel downloads, which would work great with the slower IPFS.

[0]: http://gugel.io/ied/

[1]: https://ipfs.io/

43
jlarocco 2 days ago 1 reply      
Assuming it's kik.com that complained, the complaint to take down the kik NPM module seems legitimate. They've clearly been around a lot longer, are known by more people, and are in an overlapping market.

It seems like a lot of people would expect a kik module in NPM to be related to the company in some way, and it wasn't.

44
swang 2 days ago 3 replies      
Was that lawyer overreaching? I don't know. But for this guy to expect npm to use their resources to defend him (a fight they might even lose!) and to get mad at them is... a bit presumptuous? GitHub isn't open source either, so is he going to get mad when the lawyers send them an email about kik?
45
datashovel 2 days ago 1 reply      
The open source community needs to aggregate a list of lawyers who will consult on these sorts of things (related to the community at large) pro bono. This way all parties on the open source side can feel a little less pushed around and bullied and a little more protected.

The best part would be to learn that the claim was not valid in the first place. At the very least, having representation would provide for some wiggle room where you can have days if not weeks to resolve the issue, instead of feeling you have to take immediate action.

46
hellbanner 2 days ago 1 reply      
Can we talk about how trademarks own namespaces? If I have a little "kik" soccer tournament that no one knows about, then it's fine. As soon as the namespace collides with the HUGE, vastly connected internet, it's a "problem".

We're going to run out of proper nouns, folks.

47
fold_left 2 days ago 0 replies      
I've been warning of the potential for issues like this for quite a while and would be really grateful for people's feedback on this approach to try and insulate your projects from them https://github.com/JamieMason/shrinkpack.

It's not completely there yet, but I think there's something worth exploring further in this idea.

48
sjclemmy 2 days ago 0 replies      
I am a heavily invested user of JavaScript and the surrounding ecosystem, and the security aspects of the npm package system have been in the back of my mind for a while. As I don't consider myself an 'expert' in all things npm and package management, I've deferred to the general consensus, which didn't seem to mind too much about the security problems npm exhibits (this reminds me of the sub-prime crisis).

I think an event like this is a really positive thing, as it promotes discussion about something that is exceedingly important. All it takes to exploit this vulnerability is a bit of time and effort; it looks really easy to inject malicious code into any number of 'de-published' packages. I hope that some kind of namespacing and/or locking of npm packages results from this, and that the javascript ecosystem continues to mature and develop in the right direction. Npm inc have an opportunity here to do the right thing. If they don't then there's going to be a mutiny and a 'better' alternative will supersede npm. Bower anyone? ;)

50
st3v3r 2 days ago 1 reply      
Wow, first they steal a package from the original author, then they do this. Why would anyone want to publish to NPM after this?
51
diffraction 2 days ago 0 replies      
Kik has lawyers all over the world... because it is the platform of choice for pedophiles and sexual predators. There are many billable hours spent responding to DOJ/state's attorney subpoenas. (http://www.trentonian.com/general-news/20140728/pedophile-on...)(http://woodtv.com/2015/02/02/sexual-predator-warns-parents-a...)
52
cdubzzz 2 days ago 1 reply      
Does this series of tweets [0] seem rather odd to anyone? He seems to be calling people soulless and pondering his own "Power and Responsibility".

[0] https://twitter.com/izs/status/712510512974716931

53
zachrose 2 days ago 2 replies      
So what keeps Kik from going after Github?

https://github.com/starters/kik

54
sbuttgereit 2 days ago 2 replies      
I've been reading the comments regarding 1) the practical effect of breaking builds and 2) the security issue of package names being reusable on npm once they are unpublished (versioning aside for a moment).

I wonder what other, similar package distribution platforms are vulnerable to this sort of thing? I'm not speaking from knowledge of the procedures of any of those I'm about to mention, but I have depended, and still do depend, on some of them. Thinking about this issue and some of those other tools that pull long chains of dependent packages does give me pause. Especially the replacement of some dependency with less-than-friendly code... breakage can be managed, but silent invaders...

Do Perl & CPAN, Rust & crates.io, or Ruby & RubyGems.org suffer these same issues, and it just hasn't been a problem yet? Do they have means of avoiding this? Again, I haven't studied the question... but I think I may :-)

55
wtbob 2 days ago 1 reply      
The problem here is that there is a single petname space (pet namespace? pet name space?) administered by one organisation but used by everyone.

With a different system, the author could have a key $foo, and call his package ($foo kik), and that wouldn't interfere with (us-trademark-office kik).

56
octref 2 days ago 3 replies      
Why don't people just use lodash?

https://lodash.com/docs#padStart

It's well-tested, well-maintained, performant, has good documentation, and offers custom builds to leave out the functions you don't need.
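
For reference, the left-pad use case looks like this with lodash (padStart landed in lodash 4.x; return values shown in comments):

 var _ = require('lodash');

 // _.padStart(string, targetLength, padChars)
 _.padStart('5', 3, '0');  // => '005'
 _.padStart('abc', 6);     // => '   abc' (pads with spaces by default)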

57
taumeson 2 days ago 1 reply      
Wow, this is an amazing outcome here.

Why is "unpublishing" something that can happen in npm? What's the point? I can see the downside, what's the upside?

58
Top5a 1 day ago 0 replies      
All legality, copyright law, etc. aside, how did this even create a problem?

Even on small projects, basic build engineering dictates that you know exactly which package versions you are building against. Furthermore, all packages should be locally cache-isolated on your build server (or local box, if you do not have a build server). Building against the most "up-to-date" versions of remote dependencies puts you completely at risk in situations such as this, and leaves you at the mercy of malicious updates to those dependencies.

What sane (pun intended) person would ever build against the most recent version of all packages (including small ones such as this) pulled from a remote server? Also, for larger operations (i.e. more than several employees), how could QA possibly function when builds use "the most recent version of all packages"?

All the entities suffering because of this should immediately fire their build engineers, because this is not only a reliability concern but, more critically, a vulnerability concern.

59
TimJRobinson 2 days ago 1 reply      
Quick script to test whether your project is using any of the modules he unpublished:

 for module in $(curl -s https://gist.githubusercontent.com/azer/db27417ee84b5f34a6ea/raw/50ab7ef26dbde2d4ea52318a3590af78b2a21162/gistfile1.txt); do grep "\"$module\"" package.json; done
If any names appear, you should replace them or pin that specific version (remove the ~ or ^ before it). If nothing appears, you're probably good.

60
erikb 2 days ago 0 replies      
I don't see the issue here. If the name is already lawfully taken (and Kik is a clothing store chain as well as a chat app, so it's even taken twice), why fight it or be angry about it? Just take another name.

That said, the decisions by NPM are also hard to follow. Why allow someone else to take over ownership of a package? Why allow anyone to take down published versions of an open source package? If you publish open source stuff on my site, I have every right to keep it there at that version and share it with others. That's pretty much what FOSS is about, right?

61
forrestthewoods 2 days ago 0 replies      
Why yes, depending on a third-party, external package manager is a huge risk. I have always believed that open source projects should be fully inclusive of any and all dependencies. This event has not changed that opinion.
62
jahewson 2 days ago 1 reply      
This is a really bad decision on npm's part. Kik's lawyer has pulled a fast one on them. Kik has no right to enforce the Kik trademark beyond the limited set of goods and services listed in the trademark application [1]. Kik is a registered word mark for mobile messaging software only. That's why the trademark database contains many entries for just the word Kik; other companies own the use of that word for other goods and services.

I'm really surprised that npm didn't push back against this. It's not like npm isn't full of trademarks:

https://www.npmjs.com/package/pepsi

https://www.npmjs.com/package/coke

https://www.npmjs.com/package/kfc

https://www.npmjs.com/package/virgin

https://www.npmjs.com/package/sprint

https://www.npmjs.com/package/nba

https://www.npmjs.com/package/nfl

https://www.npmjs.com/package/google

https://www.npmjs.com/package/yahoo

https://www.npmjs.com/package/skype

https://www.npmjs.com/package/word

https://www.npmjs.com/package/excel

https://www.npmjs.com/package/unix

https://www.npmjs.com/package/windows

https://www.npmjs.com/package/osx

[1] http://tmsearch.uspto.gov/bin/showfield?f=doc&state=4804:lir...

63
cammsaul 2 days ago 1 reply      
There's a PR open to remove "unpublish" from NPM here:

https://github.com/npm/npm/pull/12017

64
pluma 2 days ago 0 replies      
By complying with kik's request, npm has set a precedent for library authors that basically means: when in doubt, you will lose your package name, even if you dispute the trademark.

This means npm apparently wants everyone to handle trademark disputes like Jade did: https://github.com/pugjs/pug/issues/2184

65
rzimmerman 2 days ago 0 replies      
npm really shouldn't let authors unpublish. And it should definitely be impossible to overwrite a published package version (it is, but only as of the past year or so).

When you install express, you install 40 dependencies. Each of these has separate maintainer(s), and coordination is optional. If we're going to let this dependency mess grow organically, npm needs to be strict about what gets published, and we need to be really careful about depending on anything but a strongly pinned version.
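
To make the "strongly pinned" point concrete: with the usual caret ranges, every fresh install is allowed to resolve to a release newer than the one you tested. The semver package (the same range logic npm itself uses) shows the difference; a small sketch, with illustrative version numbers:

 var semver = require('semver');

 // A caret range accepts any compatible newer release...
 console.log(semver.satisfies('4.14.2', '^4.13.3')); // true

 // ...so tomorrow's 4.14.2 can silently replace today's 4.13.3.
 // An exact pin accepts only the version you actually tested:
 console.log(semver.satisfies('4.14.2', '4.13.3')); // false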

66
Trisell 1 day ago 0 replies      
This whole episode leads me to believe that if you're running a production app, you need to host your own internal npm registry and update it from the global one. That way, when something like this happens, you can carry on without many issues, like the breaking builds being reported on GitHub.
67
dc2 2 days ago 2 replies      
> This is not a knee-jerk action.

The only thing knee-jerk and honestly irresponsible is not warning anyone first, especially knowing how much his modules were depended upon.

Otherwise, there's nothing wrong with this.

68
wangderland 1 day ago 0 replies      
If your project uses any of the packages on the list and broke because of this, here's a solution: https://medium.com/@viktorw/how-to-fix-npm-issues-in-your-pr...
69
miiiichael 1 day ago 0 replies      
I'm adding this bash script to the conversation. https://gist.github.com/mbranch/f77e62d91f46972dcc32

It reports any unpublished modules referenced by the package.json files found in subdirectories.
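
The same idea fits in a few lines of Node. A minimal sketch (the file names are hypothetical; unpublished.txt would hold one module name per line, e.g. from the gist above):

 var fs = require('fs');

 var unpublished = fs.readFileSync('unpublished.txt', 'utf8')
   .split('\n').filter(Boolean);

 var pkg = JSON.parse(fs.readFileSync('package.json', 'utf8'));
 var deps = Object.assign({}, pkg.dependencies, pkg.devDependencies);

 // Flag any dependency whose name appears on the unpublished list.
 var hits = Object.keys(deps).filter(function (name) {
   return unpublished.indexOf(name) !== -1;
 });

 console.log(hits.length
   ? 'Affected dependencies: ' + hits.join(', ')
   : 'No unpublished modules found');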

70
albertfdp 2 days ago 0 replies      
To check whether I was affected by a potentially malicious actor taking ownership of one of the liberated modules and adding malicious code to it, I created a small script:

https://github.com/albertfdp/did-azer-break-my-stuff

71
repn001 2 days ago 0 replies      
Not a Package Manager (NPM)
72
zakame 2 days ago 0 replies      
Sounds like something that would be unlikely to happen on other repos like CPAN/CRAN/CTAN.

Perhaps the JS community at large would be better off with a similar system? I remember that long ago there was a JSAN effort: http://www.openjsan.org/

73
ilaksh 2 days ago 0 replies      
This type of thing is one of the reasons I've suggested before that a module registry could, and should, be a distributed peer-to-peer system.
74
mstade 2 days ago 0 replies      
The ability to "unpublish" a package is fundamentally strange, because it enables situations like this.

It's also strange that people put so much trust and faith in a private company to host and distribute packages largely for free, and then rail against them when they do stuff like this with infrastructure they own. NPM is not some free and open space; it's a private company with private interests. You should expect them to do whatever they need to protect those interests, which may or may not coincide with the public interest.

I hope this resolves in more people getting involved with projects like IPFS and Nix, that may ultimately provide some recourse to the issues of centralized package management.

76
alongtheflow 2 days ago 0 replies      
Aftermath of the left-pad situation. Some say it has started to break major projects like react-native.

https://github.com/azer/left-pad/issues/4#issuecomment-20006...

77
cat-dev-null 2 days ago 0 replies      
NPM is a for-profit company, so they're a SPoF for lawyers and governments seeking to control others.

The other issue is the lack of distributed package/artifact replication, which makes it possible to take down an entire ecosystem by unplugging a few servers.

78
tytho 1 day ago 0 replies      
Perhaps someone has already suggested this, but what if npm had some sort of "unpublish block" when other modules depend on yours? Or maybe some sort of notification to the dependent package owners. This doesn't solve the issue of unpublishing free packages others depend on, nor someone taking over a name and adding malicious code, but it would encourage more responsible behavior when removing a highly-depended-upon package.
79
stblack 2 days ago 0 replies      
I read the whole damn thread and nobody, nobody links to the assholes that deserve to be kik-ed.
80
Coxa 1 day ago 0 replies      
Check your project for these liberated modules using (yay) this module https://www.npmjs.com/package/affected
81
flurdy 1 day ago 0 replies      
Can NPM not add a "notice period" to their TOS and features? With a grace period for errors: e.g., to remove a package that has been published for more than a week, you must first give notice, say two months, with a suspension before actual removal.

There could be avenues for expedited removal/suspension, i.e. security and legal, which would have removed kik quicker but not left-pad.

Whether people would be aware of the notices or ignore them is another issue.

82
st3v3r 2 days ago 1 reply      
Hopefully NPM will remember this in the future, the next time they try something like this.
83
grapehut 2 days ago 0 replies      
My biggest issue with npm is the lack of verifiable builds. Even if I read the code on GitHub, I have absolutely no idea whether that's exactly what the person uploaded to npm. I could very well have malicious code and not know it.
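
One low-tech check is to hash the published tarball and compare it against one packed from the tagged source yourself; the registry records a sha1 "shasum" for every tarball. A mismatch only proves the bytes differ (which can be as innocent as tarball metadata), so treat it as a starting point for an audit, not proof of tampering. A minimal sketch with hypothetical file names:

 var crypto = require('crypto');
 var fs = require('fs');

 function sha1(path) {
   // The same kind of digest npm records for a package's tarball.
   return crypto.createHash('sha1')
     .update(fs.readFileSync(path))
     .digest('hex');
 }

 console.log(sha1('from-registry.tgz') === sha1('from-source.tgz')
   ? 'tarballs match'
   : 'tarballs differ; audit before trusting');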
84
gedrap 2 days ago 0 replies      
Thanks to this, I hope people will reconsider the all-too-common deployment approach where you pull stuff from npm (or whatever external package manager/repository) at build time, and if the pull fails, the build fails.

This is fine for small projects. There are tons of applications where availability is less important than development speed.

However, not being aware of the risks and tradeoffs you're making is just plain insanity.

85
kelvin0 1 day ago 0 replies      
The only Kik I knew was the cola: https://p2.liveauctioneers.com/1164/26545/9944497_1_l.jpg

So this lawyer, he's from which company? Because there seem to be quite a lot of Kiks around these days (Kik Messenger?)

86
justaaron 2 days ago 1 reply      
oh geez.... welcome to trademark law

Google.

(why is this getting frontpage HN coverage?)

a trademark is a globally enforceable right (Madrid Agreement), and one has an obligation to protect one's mark from "dilution" by others in the same category:

i.e. if you are selling "apple" garden shovels, you needn't worry about crossing into "apple" computer land, but I guarantee you that they already registered that mark for "home electronics" etc.

Most countries require formal registration of the trademark (registrations are searchable in online databases), and most go on a "first filing" basis. But several, including the USA, go by "first usage" and require you to prove your use of the mark in public...

It's a long shot, but you can always look up whether that company has, in fact, registered the mark, and in which country/territory they are claiming usage rights.

(For example, they can't be a local computer shop named "apple computers" that has only sold to locals since 1854 and suddenly sells computers on the global market, as there is already a global entity with that name registered.)

87
superninja 2 days ago 0 replies      
"This situation made me realize that NPM is someones private land where corporate is more powerful than the people, and I do open source because, Power To The People."

This is true of all package distribution systems. There's always a small elite of admins who regard the system as their territory (and usually have no respect for the authors).

People contributing should be well aware of this.

89
antouank 1 day ago 0 replies      
> npm took the name away because they reasoned that more people would think that `kik` the pkg would refer to kik the app. full stop.

https://twitter.com/ag_dubs/status/712669386511949824

90
joepie91_ 2 days ago 0 replies      
For everybody discussing decentralization of NPM in the comment threads down below, please read the following thread: https://github.com/nodejs/NG/issues/29

Much of the thinking work has already been done.

91
ecthiender 2 days ago 0 replies      
Well done, OP! I stand in solidarity with you. I think this is a good way of showing our resistance to corporate power: by boycotting them.
92
kofejnik 2 days ago 2 replies      
this is why your dependencies should be checked into git
93
Confusion 2 days ago 0 replies      
Meta: please don't upvote for agreement if facts are asserted that you cannot corroborate. And please carefully consider whether you only believe or actually know something is true. A lot of patent falsehoods are being asserted and upvoted in this thread.
94
dclowd9901 2 days ago 1 reply      
While I don't disagree with OP's angst, fuck them for choosing pride over working products. It's irresponsible and shows a complete lack of maturity. I'll make sure never to consume their modules in the future. God forbid they have a bad day and decide to insert malicious code into their modules.
95
spriggan3 2 days ago 0 replies      
It's high time people publishing packages on NPM audit their dependencies. I bet 80% of them are unnecessary.
96
yeukhon 2 days ago 1 reply      
If I call my module pizza, are they going to send me an email about naming it pizza? Let's think about that. If I were a company that owned kik as a trademark, I'd offer some money to buy the name off before trying to act like a tough guy. At least be soft first if your goal is to get the kik module out of there.
97
trumbitta2 1 day ago 0 replies      
Trying to help with damage control: https://news.ycombinator.com/item?id=11346633
98
Wintamute 2 days ago 0 replies      
I'm confused. Weren't scoped packages added to avoid exactly this sort of thing? Kik should just have used "@kik/kik", and the original package author should have been left alone.
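
Scoped names give every publisher a namespace of their own, so two parties can use the same short name without colliding. A quick illustration (the scopes shown here are hypothetical):

 // Unscoped: one global "kik", first come, first served.
 var azersKik = require('kik');

 // Scoped: each owner publishes under their own prefix,
 // so the names never collide.
 var kikMessenger = require('@kik/kik');
 var kikCli = require('@azer/kik');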
99
hartator 2 days ago 1 reply      
Atom.io is not impacted. I think it's a good thing that apm runs on its own network.
100
jordanlev 2 days ago 0 replies      
How does serving his modules from a different corporate-controlled repository (github now instead of npm) serve his purpose of "liberating" the code from potential corporate meddling?
101
tehwalrus 2 days ago 0 replies      
Sounds like NPM should move to pulling the code from specific GitHub tags or something? Although I suppose GitHub is also "private land"...
102
guhcampos 2 days ago 1 reply      
"This is not a knee-jerk action"

Yes, it is. The fact you did not know about a company branded "Kik" does not make you excempt from the law. A law which, surprisingly enough, is being used in a reasonable situation here. Your package and their segment are closely enough related in context that people could assume they are actually related, giving you the power to essentially break their business if you do bad stuff.

In this case it's not really a trolling coming from them. You don't just brand your new beer brew "Coca-Cola" - there's no reasonable argument to do that besides being a troll.

P.S.: holy crap npm is so broken I'm glad I'm on the backend side of life.

103
gambiting 2 days ago 1 reply      
I have read the post and I still have no idea what NPM is.
104
tobltobs 2 days ago 1 reply      
What would Stallman say?
105
howareroark 2 days ago 1 reply      
If I can buy a domain from ICANN for 10 bucks and then sell it to a company for a million... Why can't this guy reserve the right to sell this to that company for a million?
106
stevebmark 2 days ago 0 replies      
This seems like a fairly childish response. I'm not pro-copyright, especially in software, but "someone took my made up name" seems like a dumb reason to unpublish the rest of your work.

> "NPM is someones private land"

No shit npm is a privately owned company? That was true before you took these actions, and it's still true after.

> "Power To The People"

This is what I don't get. All of the modules that were unpublished seem unpopular/not widely used, so I don't know what impact this will have. But how does screwing over users of open source software equate to power to the people?

107
galistoca 2 days ago 1 reply      
Reading the article, I thought it was some massively popular framework. But when I visited the library's GitHub page, it seems to have only 8 stars. Am I missing something?
108
bbcbasic 2 days ago 2 replies      
> This is not a knee-jerk action

Seems like it. Why break everyone's builds? You could just keep the modules there and declare that you will only keep them updated elsewhere.

109
Chyzwar 2 days ago 0 replies      
We should just troll them and create packages with kik in the name, like:

kik-looser, iKik, only-kik, true-kik, true-kik2, real-kik

110
EGreg 2 days ago 0 replies      
I'd like to liberate your modules

The new pickup line

111
devishard 2 days ago 0 replies      
Yet another example of the JavaScript ecosystem being pretty much garbage.

To be clear, I'm not attacking the author here. He released left-pad at version 0.0.3; no responsible developer should be using that in production code.

112
chvid 2 days ago 2 replies      
I really hope this guy just didn't know what he was doing and what effect it would have.

Otherwise it is totally irresponsible to mess up a big project like Babel just because you control a few lines of trivial code.

113
studentrunnr 1 day ago 0 replies      
npm will improve after this, and that is a net good to come out of it.
114
jlg23 2 days ago 3 replies      
Seriously? "When I started coding Kik, didnt know there is a company with same name. And I didnt want to let a company force me to change the name of it. After I refused them, they reached NPMs support emphasizing their lawyer power in every single e-mail CCing me. I was hoping that NPM would protect me, because I always believed that NPM is a nice organization."

a) Ignorance is no excuse.

b) Expecting others to fight for one is lame. Either have the balls and fight or STFU.

"Summary; NPM is no longer a place that Ill share my open source work at, so, Ive just unpublished all my modules. This is not a knee-jerk action."

Wrong, that is the prototype of a knee-jerk action.

Last but not least, whining about it in public in the hope that "something will happen" is pathetic.

What I'd suggest (though it is too late now): rename the module to comply with the legal claims, and put up a new module under the old name that throws errors describing the reason when called, so developers see it and shame lands on the threatening company/lawyers.

115
turtlekiosk 1 day ago 0 replies      
can i kik it?

RIP Phife Dawg

116
dang 2 days ago 2 replies      
Name-calling and slurs, both of which you've done here, break the HN guidelines. If you have a substantive point, please make it without stooping to these; if you don't, please don't post at all.

We detached this subthread from https://news.ycombinator.com/item?id=11340768 and marked it off-topic.

117
zongitsrinzler 2 days ago 2 replies      
Extremely dick move on the part of the developer. Why would you remove modules that other people are using in production?

Did you really think a small team like NPM would go head to head with a company that has full-time lawyers? And for what?

3
That awkward moment when Apple mocked good hardware and poor people techinasia.com
1063 points by nkurz  2 days ago   564 comments top 110
1
rockshassa 2 days ago 23 replies      
I just can't bring myself to feel the author's anger, in any capacity. He wants to position this as a jab against those who build their own PCs, but that is utterly irrelevant. What percentage of those 600 million five-year-old PCs do you really think are being thoughtfully maintained by modders? Does the author realize that most people do not want the responsibility of maintaining their own hardware? Or that they don't have the knowledge to do so?

Allow me to paint a more realistic picture: many of those PCs are junky, dusty boxes, running some outdated version of windows, filled with bloatware and riddled with security issues. Inside them are a bunch of spinning platters just waiting to fail. And when they do eventually fail (due to wear, or a virus, et al), someone's Grandmother is going to be shit out of luck, with no way to get at her email, saved photos, or anything else.

A properly configured iPad, leveraging iCloud for device backups, photo backups, email credentials etc, solves all of these problems. And they'll even configure the iPad for you in the store, so grandma doesn't need to know how to do any of it. Do YOU want to be the poor sap attempting data recovery on a failed disk, then realizing that even if you do recover grandma's data, you've still got to go buy a replacement drive, find a copy of windows that grandma knows how to recognize, and install everything exactly as it was before you got there? I've been that guy before, in both a personal and professional capacity. You will eventually fail, memories will be lost, tears will be shed.

We must not gloss over the fact that the iOS ecosystem does solve some very real pain points for real people.

2
vbezhenar 2 days ago 9 replies      
I can't imagine how I would replace my laptop with an iPad. Some tasks are definitely doable: web browsing, mail, listening to music, Skype (though chatting on an iPad is terrible because you have to switch around all the time, losing your focus; maybe split-screen apps would help, but I can't try them, because my iPad has about as much RAM as a 15-year-old PC).

Generally speaking, for a power user every activity on the iPad is strictly worse. I can't easily download a ZIP, unzip it, open a text file, edit it, and send it via Mail. I probably could with the right apps, but it would take many more clicks or taps.

What I can't even imagine doing on an iPad: using IntelliJ IDEA, using Xcode, using Google Chrome to debug and develop web apps, using image editors like Sketch and Pixelmator (I know I can get some kind of image editing, but I don't think I could do what I do on a PC).

Now, things I could theoretically do but probably can't because of the walled garden: using a terminal to embrace full Unix power, downloading files with BitTorrent, using Bitcoin. Probably possible with jailbreaking, I'm not sure. Also, I'm not sure whether I could download some huge 20GB file and watch it in another app without duplicating it (does iOS copy a file when I open it with another app, or just hardlink it?).

And, of course, a keyboard is necessary. A mouse would be useful too, but the iPad doesn't support mice, AFAIK.

So probably the only users who can easily migrate from PC to iPad are very casual ones, who use their devices to browse the web, chat, and play simple games. There could be some professionals who work on iPads, it's theoretically possible, but I can't imagine any.

3
Udo 2 days ago 5 replies      
I'm an Apple-only user at the moment, both mobile and desktop.

When Apple asserts that a desktop computer should be replaced by a locked-down handheld device with very limited capabilities, the odd thing, it seems to me, is that they don't realize these devices do very different things and fulfill very different needs.

I don't worry about the demise of the desktop because I'm nostalgic, I worry about the loss of power and productivity incurred by users with desktop-illiteracy. There are many applications for which a handheld device, especially one with the limitations of iOS, is just not suitable - in much the same way a full desktop/laptop computer is not suitable for things mobile devices excel at.

That the hardware is locked down, outdated, and supremely expensive are additional criteria making the disconnect worse, but these are not the crux of the problem in my opinion. I see two outcomes from this, neither one is appealing: either Apple is misjudging the needs of their users, to the point where trendsetters like programmers will be switching away from the platform. Or, they succeed in their vision and breed several generations of technologically illiterate information workers fumbling their way through life with nothing but extremely limited mobile devices as their only productivity tool.

4
waspleg 2 days ago 18 replies      
I completely agree with the author of the article. Apple is the real-world incarnation of the economic premise of Huxley's "Brave New World". This was just the mask dropping for a second to pander to the faithful.

I'd like to add, as someone who works at a K12 public high school, that I've seen the reality of the article played out. My building is 100% free lunch, most students are extremely poor, and yet a sizable number have new iPhones. Why? Because they don't want to look poor or be thought of as such.

In American society poverty is associated with failure on many levels. We have our caste system as much as India, only ours is economic, and enforced ruthlessly with endless class warfare - largely in one direction.

5
gurkendoktor 2 days ago 1 reply      
That slide was really bad. It's not about being easily offended (I have no reason to be), but it pokes a hole in the first half of the event, where Apple tried to present itself as a green and caring company.

It's such an obvious mistake that I wonder why nobody at Apple pointed it out during rehearsals. The line about old PCs being designed "before social media" was dumb, too. This is a "pro" tablet, right? Why does it matter whether it has a Facebook app?

This is the same Phil Schiller that is now in charge of the App Store, and as an iOS developer, I find the carelessness a little worrying.

6
gopz 2 days ago 1 reply      
Apple says this shit every time it has a major press event. Saying it mocks poor people is digging around for something to be offended about. Is the author really offended every time they see an ad for a newer car model when current and prior models work just fine? Saying it mocks good hardware isn't out of touch. Seriously, what did they think Apple was going to say:

> Buy a new iPad pro.

Schiller turns around, goes back to the slides, stops midway, turns to the crowd and says "But I still use an old PC, because if it ain't broke, don't fix it!", winks, and continues the presentation.

Come on, this article is grasping at straws.

7
Spooky23 2 days ago 0 replies      
IMO, this is a typical Apple troll article.

"<vendor> mocks poor people" is the dumbest possible analysis of this data. I work for an organization with something like $1.5B in IT spend. Our 40th percentile PC is 8 years old. The 80th percentile PC is 4. Our desired refresh for a desktop PC is 40 months.

Why? The post 2008 recession killed discretionary spending. Then Microsoft failed utterly to deliver a compelling desktop strategy from 2008 to the present day. They finally got their shit together with Windows 10, but their fantasy world where the universe is transitioning from Windows 7->8->10 makes that more friction-prone than it need be.

Consumers are in the same boat. People skipped upgrades because of the friction involved in the transition, which is why Microsoft is dragging you to upgrade kicking and screaming.

Personally, I use my elderly in-laws as a proxy for non-technical consumers. They are technophobes -- a retired fireman and nurse, respectively... not rich, not poor. When I met my wife in 2000, they were still leasing a telephone from AT&T. They made the PC-to-Mac transition in 2009 and were actually able to use their computer without worrying about the typical PC woes (AV, updates, etc.). That Mac is aging, and it was starting to be time for it to go.

With the iPad Pro, my father-in-law ran out to the Apple Store by himself, got the stuff he needed and got everything going on his own. Long story short -- he loves it. It does everything that they need, and it's a more convenient form factor than the laptop. He hasn't touched the computer since, other than to sync music from the Mac to the iPad.

8
ekianjo 2 days ago 3 replies      
The superior attitude of Apple execs is nothing new. Right from the start of the company, that's a culture Jobs instilled: looking down on all of its competitors.

What's rather sad to me is the laughing audience, who left their brains at the door and laugh and applaud when they are told to. Just like in 1984's iconic Apple ad, by the way. The loop is complete.

9
jMyles 2 days ago 3 replies      
In addition to the point the author makes about the benefits of modularity, this strikes me as environmentally tone-deaf as well. Do we just expect devices to enter the waste stream every 5 years now? Can't we do better?
10
mrbill 2 days ago 4 replies      
I just bought a "new to me" laptop.

Refurb ThinkPad T420s from 2011. I added 16GB of RAM, two Intel SSDs, an Ultrabay battery, and an 802.11ac wifi card. Grand total: less than $325.

This will be my primary portable for at least 2-3 years, and it's already four years old.

Just because I can afford Apple doesn't mean I can justify the 2x price premium, or that "old" hardware isn't capable.

11
ysavir 2 days ago 3 replies      
Did you know there are over 600 million Honda Civics over 5 years old still on the roads? Hah! Clearly, their owners should replace them with Porsche 911s instead.
12
ams6110 2 days ago 0 replies      
My daily computer at work is a 6 year old Dell Optiplex. 4GB RAM and an SSD. Perfectly good for what I do.

My laptop is a 1st gen Macbook Air, inherited from original owner. Also perfectly good for what I do.

My car is from 2004. Perfectly reliable and meets all my needs.

There is a HUGE amount of retail activity that is carried out just because people want the newest and latest version of things. And I have no problem with that, but it's absolutely not necessary if you don't want to participate.

13
WhitneyLand 2 days ago 1 reply      
I'm usually pretty sensitive to this kind of thing, but this is an overblown reaction that reads too much into what was said.

There are enough real people and situations to judge for disregarding the condition of poverty; we don't need to contrive any.

14
macspoofing 2 days ago 1 reply      
Wow. Is this how far the author had to dig to find something to complain about? Yes, companies will frequently give you some marketing spin to get you to buy a new version of a product. And yes, you should use your head to figure out if you a) need it, b) can afford it. And no, it's not an insult to those that decide not to buy the new product.
15
tylercubell 2 days ago 0 replies      
Yes, it was a misinformed statement on the part of Phil Schiller, but I find it annoying that the author cherry-picks one line from an hour long keynote to write an overly dramatic holier-than-thou diatribe. If the author wants to make the case that Apple is elitist or out-of-touch, then he'll need to gather more evidence rather than rely on a few personal anecdotes and pretentious quips.
16
Kristine1975 2 days ago 0 replies      
Since Apple sells hardware, of course they will find it sad if people don't buy new hardware all the time. But it's nice to see them being (unintentionally?) honest about it for once amidst all the marketing.
17
agumonkey 2 days ago 0 replies      
I didn't read Schiller's comment as a jab at users. More as the usual attack on the MS/clone industry.

My thoughts at that moment: "modern digital life" is a sad joke. 4K video won't change your life; even 1080p won't. If your hardware isn't absurd, pop Linux on it, add an SSD if needed, and enjoy the $80 bliss. All from a guy trying to sell hour-based color shifting... come on.

18
donatj 2 days ago 0 replies      
This cuts particularly close to home for me. I work for an educational company with developers that actively mock schools with low resolution screens and poor JavaScript performance, not to mention Android and Fire tablets, while they sit on these brand new MacBook Pros with SSDs and Cinema Displays doing their testing. The schools don't have these low power machines because they're dumb or not tech savvy. The vast majority have them because they are poor. We are trying to help the poor and you are completely missing the point. If you can deliver the exact same content in a way that doesn't require a high performance machine that is the ideal. The more we can do server side the more we are giving instead of taking.
19
peterashford 6 hours ago 0 replies      
I'm typing this on my home PC, which is over 5 years old. I've replaced the gfx card, CPU and memory, and it plays Battlefield 4 pretty well. I also use it for programming (I work in game dev). Why would I replace this very serviceable PC with an Apple toy that will be unsupported as soon as its manufacturer thinks they can get away with it, in an attempt to get everyone onto their must-upgrade-to-the-latest-thing merry-go-round? I appreciate quality hardware, and fashion-in-tech a la Apple makes me puke.
20
frogpelt 2 days ago 0 replies      
This is so silly. Apple is a company that makes money when you buy their products.

If there are 600 million people using something besides one of their products, of course they are going to say those people should use their products instead.

Get off your high horse.

21
studentrob 2 days ago 1 reply      
Schiller is marketing his company's product. So what?

What he said was along the same lines of rhetoric as all the "I'm a Mac, I'm a PC" commercials.

There's nothing to take personally about what an executive says about the products you choose to use in your home. That's your choice.

22
post_break 2 days ago 0 replies      
Teases people with 5 year old PCs, still ships Cinema Displays and Macbook Pros that are from 2011.
23
tombert 2 days ago 0 replies      
It kind of feels like the author was just looking for reasons to be offended. This comment didn't really seem that offensive at all.
24
zekevermillion 2 days ago 1 reply      
My home PC is over eight years old and still functions adequately. I enjoy using it and maintaining it much more than I would fumbling around with a new Apple iPad. It cost about $2800 new, and I've spent about $300 on upgrades since then, so the total cost of ownership is about $390/year. I don't think this cost is much better than buying new iThings every couple of years. But I get a much more powerful device, and it requires me to learn a few basic things about the computer to keep it up. So for me, as I suspect for a lot of people, the decision to maintain a PC is not really about cost as much as it is about fun.
25
SEJeff 2 days ago 0 replies      
Actually, a lot of homeless people are turning to smartphones to help with their day-to-day lives.

http://www.business2community.com/tech-gadgets/the-us-homele...

http://www.nytimes.com/2015/04/15/upshot/fighting-homelessne...

26
mobiuscog 1 day ago 0 replies      
The only hardware in my house of that age that needs to be replaced (and that includes both PCs and Macs) is...

... my original iPad. Because Apple left it to die: even after the last update, it can't run Apple's own store or browser without crashing, and I can't get the versions of software that previously worked, because they're no longer supported or offered now that Apple forced developers to move up to all of the new APIs.

Yes, I can see exactly why I should spend yet more money on something that will intentionally be made obsolete in the future.

27
mberning 2 days ago 2 replies      
I have no idea how the two points in the article are the only takeaway the author had. On point one, Apple has never been the cheapest at anything. Although they have been offering more low-end options of late, they are still not a "budget" manufacturer and likely never will be.

On point two, it is absolutely possible that a 5-year-old machine still runs fine. Depending on the machine! A 5-year-old MacBook Pro is still pretty capable. A 5-year-old Acer laptop that you got in a Black Friday sale... not so much. I think this is the level of user they would like to see switch, and it's not that far-fetched.

28
erokar 2 days ago 0 replies      
The comment appears even more heinous when Tim Cook goes on to brag about recycling and Apple's environmental responsibility. Please. Apple is one of the most bourgeois and reactionary companies today, primarily pushing products meant for consumption, not for creating. At least be honest about it.
29
w8rbt 2 days ago 0 replies      
I use 7 year old PCs because they run Linux just fine.

Also, I think it's bad for the environment and humanity in general to buy new igadgets every year. So, I use a 4 year old refurbished dumb phone that I bought for $9.99. It works great (just like my 7 year old Linux PC).

30
Mikeb85 2 days ago 0 replies      
> This is an amazing statistic, he says with a serious look before revealing that there are more than 600 million PCs in use that are over five years old. This is really sad.

Maybe people are still using old PCs because they still work and are fully functional? I'm using a 4 year old ThinkPad and guess what? It's not only still fully functional, it's still quick, snappy, the screen is still bright and looks nice (I did splurge for the upgrade at the time), and it's been super low maintenance (been running Linux on it the whole time). It could use a new battery (capacity has gone down over time), but hey, it's a removable battery so I can do that.

I imagine I'll still be getting a few more years of use from it, there certainly are no signs that I should upgrade at the moment. I mean, if I were to be a little greedy, I'd buy a new laptop with a wicked video card and give this one to my wife (after she made me throw out a fully functional 6 year old desktop we didn't use very often), but is that necessary, not really.

It's pretty plain that Apple simply plans for its products to be replaced quicker than 5 years, to make more sales, and they're speaking to the faithful.

31
scandox 1 day ago 0 replies      
8 years ago I bought a top-of-the-range Sony Vaio. I used it until last month. Replaced the hard disk once. Replaced the memory once. Glued the charger together once.

I bought a new mid-range Dell this month... and meh. Basically, those eight years have made no difference to me running Arch and/or Debian.

Imagine trying to say that about the preceding 8 years (i.e. 2000 to 2008).

32
OSButler 1 day ago 0 replies      
In my case they mocked themselves, as I'm working on an old MacBook model. I'd love to upgrade, but everything still works. The things I like about the new MacBooks barely affect my work, and if I spec one out to my preferred options, it ends up with an astronomical price tag. It just seems like they are advancing in some areas (screen, touchpad, ...) but are stuck in the past in several others (memory, HDD, ...).

I'll most likely get a new MacBook once my current one finally decides to join its ancestors. That said, I'm actually impressed with its longevity, as none of my other PCs (desktops & laptops) ever lasted this long. So I actually see running on old hardware as an impressive feat (unless you're a gamer).

33
teekert 2 days ago 0 replies      
"Ending is better than mending"

For the uninitiated: http://www.enotes.com/homework-help/ending-better-than-mendi...

34
rythie 1 day ago 0 replies      
I'm not convinced the new iPad Pro will even be usable for 5 years.

I gave up on the iPad 1 and iPad mini because they didn't have enough RAM to load many websites. The new iPad Pro has only 2GB of RAM, while even 5-year-old PCs had 4GB or 8GB.

35
coco1989 2 days ago 1 reply      
My iPad mini keeps getting slower and slower. Soon it will be a Kindle.
36
ebbv 2 days ago 0 replies      
Yeah it's a stupid comment; my PC uses a Core i5 2500K which is more than 5 years old. There's good reason; it's still plenty fast.

But saying this was mocking poor people is just ridiculous. It's trying to stir up outrage over nothing.

37
sklivvz1971 2 days ago 0 replies      
I felt the same as the author. The blog post, however, is a bit unfair to Apple and Schiller.

I think what he meant was "It's sad that these people don't know yet the wonderful news that they can use this fantastic new iPad instead"

38
abecedarius 1 day ago 0 replies      
> And I bought an easily moddable, upgradable piece of hardware that can adapt to new technologies in ways no Apple product could ever hope to.

Funny thing: the Apple ][ was the flag bearer in its time for that kind of upgradability. Wozniak insisted on this against Jobs, and it kept the ][ alive through the 3's flop and well into the Mac era -- the Mac did not succeed right away.

Is it so impossible for Apple to bring back some of that spirit of open design? The Jobs lockdown was and still is their greatest turnoff for me as a customer.

39
kdamken 2 days ago 0 replies      
Is the iPad Pro a feasible replacement for an actual computer? Not even close. Not until it runs a version of OS X.

Was apple mocking poor people? Of course not. It's 2016. Can we stop overreacting about every little thing yet?

40
donkeyd 2 days ago 0 replies      
My 15" Macbook Pro will turn 4 this year. I don't see it being replaced soon, since it still performs quite well. The 13" MacBook Pro that my girlfriend uses is about to turn 6 years old. That one could be replaced, since it has some issues, but for her it does what it needs to do, so there's no real need. We might be able to replace it with an iPad Pro, but I doubt we will.

I wouldn't call myself poor, but I have no need to replace these devices every 3 years, since not much changes, except they get thinner.

41
ilamont 1 day ago 0 replies      
Reminds me of the time Schiller dumped Instagram after an Android version came out.

A reader noticed Schiller deleted his Instagram account (@schiller), and then reached out to Apples most visible public speaker by Twitter for confirmation. Schiller told the reader that he quit the rising photo-based social network, because the app jumped the shark when it launched on the Android platform.

http://9to5mac.com/2012/04/19/apple-marketing-svp-phil-schil...

At the time, I thought it was a slap in the face to people who couldn't afford iOS devices but wanted to join the Instagram community. Schiller portrayed it as a drop in quality, apparently:

Another 9to5Mac reader, Clayton Braasch, claims to have emailed Schiller directly, asking him to elaborate upon the statement. In a post on his blog, Braasch writes that Schiller responded 9to5Mac says it has verified the email headers and that while Schiller still considers Instagram a "great app and community," he enjoyed the fact that it was used by a small group of early adopters. Now that its reach has expanded, Schiller allegedly wrote, "the signal to noise ratio is different."

http://www.theverge.com/2012/4/19/2961612/apple-phil-schille...

42
rubyfan 2 days ago 0 replies      
Wow, this is reading way too much into the comment than it deserves. I guess if you have an agenda then you will figure out any way to make your point no matter how far a stretch it is.
43
psk23 2 days ago 0 replies      
I've got a PC that's one week old. I bought it straight after Apple failed to announce any Macs with a decent GPU. Most devs I know have switched back this year or last.
44
dreta 2 days ago 0 replies      
Virtue signalling, the article.

Apple's based in SF. An iPad Pro is like a week's worth of rent, and to them "work" probably means scribbling a hipster poem while sipping coffee you can't pronounce without 5 years of language study. How is anything the guy said different from how these conferences always go? The fact that the author found it worthwhile to write about this as if it were an actual problem is astounding.

45
joezydeco 2 days ago 0 replies      
How about the other moment where Apple showed their "40 years in 40 seconds" video and scratched out the Newton?

You really want to mock the massive effort of that group - a group you believed in at one point in time? At least honor the memory of Ko Isono.

I didn't see the Lisa or the Apple /// get the same treatment. Maybe because Saint Steve backed those projects?

46
ivankirigin 2 days ago 0 replies      
This story is a pretty good measure of whether you get offended by something very small.

A 5 year old PC is low quality. A company that makes high quality products wants people to have better experiences with computers.

At the event, they launched their lowest price phone. I bet that phone in a year is going to be even cheaper.

You can find offense in everything, but you shouldn't.

47
kmano8 2 days ago 0 replies      
I was passively listening to the keynote, and this comment caught my ear. My first reaction was, "Well damn, I'm pretty proud that the 2010 Macbook Pro I've been using as a 24/7 server for the past 5 years is still chugging along."

Though I suspect Macs might not be included in that statistic, it seemed out of touch nonetheless.

48
twoodfin 2 days ago 0 replies      
Good evidence that if you try hard enough you can be offended by anything.
49
rdl 1 day ago 0 replies      
I just realized my gaming PC is almost 5 years old. i7-970, 6 cores, X58, 560 Ti, 24GB, a couple of SSDs. It was high-end when I built it and is still basically fine now.

Computers really have plateaued in a lot of ways. Phones now last 2-4 years instead of 1-2, laptops 3-6 instead of 2-3, desktops 4-8 instead of 3-4.

50
mwfunk 1 day ago 1 reply      
That's reading a lot into a random joke. When some public person makes a pseudorandom comment and you perceive that as a mask being dropped to reveal confirmation of all your darkest suspicions about that person or the people he speaks for, you might be projecting, just a smidgen.
51
droithomme 2 days ago 0 replies      
I disagree with this claim.

> There are really only two reasons why people might have a computer that's more than five years old:
> 1. They can't afford an upgrade.
> 2. They don't need an upgrade.

There are many other reasons. Among them, upgrading hardware will force an OS upgrade that will break significant software and hardware.

52
songzme 1 day ago 0 replies      
It's easy to point fingers, but I myself am guilty of "mocking poor people". One memory that stuck in my mind is from office hours during college. A classmate was struggling to configure her assignment on her 5-year-old PC; nobody really wanted to help her because her computer was so old and so irrelevant. Casually, I joked (with classmates around us), "You should burn your laptop and get a Mac." Some laughed, but she didn't.

"Why don't you buy me one."

I was a little offended; her remark and body language felt a little hostile. We never talked again.

This memory stuck with me, and I wish I could apologize to her. My seemingly harmless remark poked fun at her economic handicap for my amusement.

53
circa 1 day ago 0 replies      
I agree with a lot of this in the personal world. Sure a lot of people don't have the money to upgrade.

At my old job at an MSP, I used to upgrade a TON of people from XP to Win 7 or 8.1 in the small/medium business world. I could not believe how many companies absolutely had the money to upgrade their PCs but simply did not want to, because they were afraid of the learning curve. It basically came down to that. They had no problem paying us $125+/hour to "fix" their XP machines. The same went for a lot of servers. Who knows how many of those are in the aforementioned 600 million pool.

54
skc 1 day ago 0 replies      
It's not the comment that was absurd. It's the fact that Apple seemingly has no answer as to why over 600 million people are still using old PCs.

Instead of exploring why that might be the case, they decided to mock those people.

Just bizarre.

55
dopamean 2 days ago 1 reply      
This reads like outrage for the sake of outrage. Like someone looking for offense everywhere.
56
jonkiddy 2 days ago 0 replies      
"nothing about todays iPad Pro presentation made me rethink my position at all"

This is the only part of the article that resonated with me. Apple completely failed to provide any viable reason for 600m+ PC users to switch to an iPad Pro.

57
bechampion 1 day ago 0 replies      
I own a MacBook and I love it, but what I really need is a browser, a terminal, and Python.

That's it. To me it's all very relative... I can work with a 5-year-old laptop, no problem; most of my things run on servers etc. iPad Pro? No way... give it to the kids.

58
dates 2 days ago 2 replies      
"Maybe Apple really does find the idea of hardware that can function for five years 'sad' and funny these days."

I don't think this is true. Apple just replaced the motherboard on my 2011 MacBook Pro for free. I've also upgraded the RAM, swapped the HD for an SSD, and replaced the battery and fans. It's running sooo well; I love it.

The prices of 5+ year old MacBook Pros, iMacs, and Mac Pros on eBay are proof that Apple does make hardware that lasts... Anyway, I'm interested in the relationship between Apple releasing new products and the aftermarket value of older versions. I think Apple releasing newer iPads probably makes older used iPads more affordable.

59
s_q_b 2 days ago 0 replies      
The point of the throwaway remark was that there's pent-up demand for consumer goods due to the Recession.

Similarly, traders are speculating that global new vehicle sales will increase, as the average age of a car on the road is far above the previous trend. It's north of a decade even in the United States.

The theory is that during tight times people deleverage and reduce spending, while during boom times the demand that accumulated during the tight times is released through extra consumption.

Getting from the Apple comment to the article's topic, much less conclusions, takes a spectacular amount of cognitive gymnastics.

60
gnicholas 2 days ago 0 replies      
>Even if you insist on a tablet, you could get Microsofts Surface 3, which boasts a slightly worse screen but offers double the storage capacity, for US$100 less.

It is well-documented that the Surface OS takes up far more space than iOS, which means that the available space is not nearly as disparate as it would seem (though the Surface still has a bit more): http://www.slashgear.com/surface-3-storage-space-still-limit...

61
d_theorist 2 days ago 1 reply      
I think this person has missed the point.

The assumption here is that the vast majority of >5-year-old PCs are crap, and that the reason for continuing with them is that a new full PC replacement is too expensive. The iPad is supposed to be disruptive because it's a lower-cost machine that can nonetheless do everything the user used to do on a full-fledged PC.

You might disagree that the iPad succeeds in this, but it is nonetheless the way Apple thinks about where the iPad sits in the market. This was the point of Schiller's remark. It has nothing to do with mocking the poor.

62
kplanz 1 day ago 1 reply      
I do not agree with the author of the article. I think there is also a reason #3: they think they don't need an upgrade.

There are a lot of people who own an old PC and think it's the best possible setup, when in reality they don't actually need a full PC, because all they do is browse the web and read/write email. My opinion is that many people would be better off with an iPad than a PC: it's small, portable, intuitive, easy to maintain, has great customer support, and so on.

63
pmontra 2 days ago 0 replies      
I paid my bills from Nov 2006 to Feb 2013 with an HP nc8430. I let it go because it started to be annoyingly low on memory for my newer usage patterns: more VMs of more memory-hungry OSes and more browser tabs, on only 4 GB. Furthermore, it ran out of support and I would have had to buy my own spare parts; no more next-business-day on-site assistance. I got a ZBook 15 and I feel like I can go on with it for another 6 years (i7, up to 32 GB). No need to be on a 2-year update cycle, sadly for HP (or Apple) but not for me.
64
phaser 1 day ago 0 replies      
I prefer a 5-year-old computer that is open and repairable, on which I can run any OS I want, over the "ultimate PC replacement" that can only run software signed (and sold) by Apple.
65
randcraw 2 days ago 0 replies      
Maybe Schiller meant it was sad that PCs haven't improved substantially in 5 years, which is true to a degree that isn't just sad, it's seriously bad. As software historically continues to slow, app runtimes inevitably will too.

Is an iPad Pro 2 the answer? Obviously not. The unsaid lesson from this event is that Apple's products aren't improving as quickly either. Nor are the new features as interesting. So it's not surprising that Schiller would look backward instead of forward.

66
quietplatypus 2 days ago 0 replies      
I'm not even poor, and I still do my research on a workstation from 2009. It's fast enough for what I'm doing, and why waste new hardware on bloated software?
67
bogomipz 2 days ago 1 reply      
In the last year we've seen Apple produce a pencil and a watch. What's next, a paperclip? I found the comment condescending and yet not at all surprising. Seriously, F them.
68
XorNot 2 days ago 0 replies      
There's a study I want to see done: take a bunch of people, dress them in a whole bunch of different overall "looks". Put them in different contexts holding several different smartphones of varying prices.

Then bring in the subject group and ask two questions: how much do you think the phone is worth? How wealthy do you think the person is?

My hypothesis: people have no idea what phones cost at an average viewing distance (say 3m+).

69
bfrog 1 day ago 0 replies      
Is it really any surprise no one is bothering to keep following the upgrade-windmill money shakedown? My 5-year-old laptop works just as well with Gmail, YouTube, and Google Docs as my 8-year-old desktop, etc. There's very little reason to stay on the upgrade bandwagon right now. The innovation of software has more or less flatlined for the end-user machine... for now.
70
blackhaz 1 day ago 0 replies      
Reading this on a 2009 MacBook Pro with an SSD, working perfectly fast for my tasks, with a ThinkPad T400 running FreeBSD nearby. I think they're running out of ideas for why I would need to upgrade. Definitely not doing it for a new fancy UI animation. Just imagine: content that scrolls down and then smashes into the screen boundary, bounces back and then slowly reaches equilibrium. My ass...
71
peferron 1 day ago 0 replies      
My wife's iPad 2 slowed down to a crawl after updating to iOS 8. She doesn't use it anymore because of that. Obviously Apple doesn't want iOS devices more than 5 years old to remain in use either.

In the meantime I've upgraded my parents' PC from Win XP to Win 10 and it runs as fast or even faster than before.

I agree with the Apple exec that it's really sad.

72
mchahn 1 day ago 0 replies      
This thread is so long that I may be repeating what has already been said, but ...

I converted to Linux from Windows about a year ago. An unexpected but great benefit is that I have been able to pull PCs 5 to 10 years old out of my garage and now they run like brand-new. I should be good for a while.

73
beyondcompute 1 day ago 0 replies      
I am using a 2011 laptop. I am not poor. I understand Apple is sad (it's a business) that I don't buy new hardware (I feel there's no more return on the investment). I also agree that tablets are closer to the present (not the "future", ha-ha) of computing, but I am a software developer and I will gladly switch to an iPad Pro only when I have access to a terminal, all the usual Unix tools, Homebrew, and the ability to compile/run the usual web-dev software: servers, database systems, programming languages (and to easily connect my mechanical keyboard). Apple is not mocking the poor. Calm down, let's get back to work.
74
mirimir 1 day ago 0 replies      
> There are really only two reasons why people might have a computer that's more than five years old: 1) They can't afford an upgrade. 2) They don't need an upgrade.

There's a third: they wouldn't trust an upgrade :) Just where to draw the line isn't obvious. But it's probably closer to 10 years old than five.

75
Philipp__ 2 days ago 0 replies      
The iPad needs to change to the point where it wouldn't be called and recognized as an iPad, or just die. That device is in a technological gap right now, getting beaten by newer, more modern devices that evolved from the iPad form. It annoys the hell out of me what Apple is doing right now. For the first few years I thought "Thank God Cook is keeping it simple and well defined, as it was under Jobs", but now I am afraid of history repeating. The wheel always turns, and I would be really sad to see the 90s Apple from before Jobs came back. I enjoy their computers a lot, and they will always be the main thing from Apple, at least to me. I bought a second iPhone 5s a month ago because it simply does the job for me... I was an Apple fanboy, but every consecutive year I get more and more disappointed.
76
georgeglue1 1 day ago 0 replies      
I'm preaching to the choir here, but the obvious third case the author omits is 'semi-technical folks who don't want to deal with the friction of a new environment', which is not so offensive.

And 600 million 5+ year-old PCs seems like a low number...

77
ixtli 2 days ago 0 replies      
I understand why someone would feel this way, but the irreconcilable reality is this: http://windows.microsoft.com/en-us/windows/lifecycle

The majority of those machines Apple is "mocking" are running operating systems that have known exploits in the wild and people are doing their personal banking on them. The shitty, stuck up delivery doesn't change the fact that it actually would be better and more private for people to dump their unsupported windows xp/vista/7 boxes and use an iPad Pro. Then, at least, they'd have a semblance of security.

I'm not even willing to say that Apple didn't intend to mock the poor but the facts remain regardless of delivery or intent. I think we can agree that for the majority of the people we're talking about here, "They dont need an upgrade" is simply not the case.

78
Joof 1 day ago 0 replies      
There's a lot that those 5 year old PCs can do that those shiny new iPads can't. In most other cases, they get the job done anyway.
79
typon 2 days ago 0 replies      
Is everyone here really acting incredulous at a giant corporation promoting consumerism?
80
drivingmenuts 1 day ago 0 replies      
It's marketing - you pit the have-nots against the haves to make the have-nots want to have and the haves to want more.

If that makes you angry, then you need to upgrade your thin skin.

81
Sarki 2 days ago 0 replies      
This Apple Keynote 2015 video extract speaks for itself: https://www.youtube.com/watch?v=b4UOmR_xSBM
82
jayfk 2 days ago 1 reply      
I bought a Mac Pro 4,1 from 2010, flashed it to the 5,1 firmware, upgraded the CPU, added a Graphics Card, 4x 3TB drives and a PCIe SSD blade.

Best thing I've ever done in terms of Apple hardware.

83
kitsune_ 1 day ago 0 replies      
I have an i7 930 that I bought in 2010. Like hell I could benefit from a USD 600 iPad Pro. His statement actually only shows me how preposterous it is to buy an iPad.
84
edandersen 2 days ago 0 replies      
The correct solution would be to provide an x86 build of iOS (which they must have) with mouse support to install on these aging desktops. They'd make a cut selling apps.
85
roboto584903 1 day ago 0 replies      
A 2011 Mac mini with a quad core processor has more processing power than any current model, while the price has stayed the same. Now that's sad.
86
Tycho 1 day ago 0 replies      
Hmm. I use a 2011 MacBook Air and the thing performs (and looks) like it's brand new. With the latest OS. I have no plans to upgrade it whatsoever.
87
sickbeard 2 days ago 6 replies      
Story time! My gf and I went to take pictures of some expensive cars at an Audi dealership. While there, one of the nicely dressed salespeople started talking to us about the Q7. He quoted about 70k CAD as a starting price and argued that, sure, it was an expensive car, but it lasts longer and you save money in the long run.

I think maybe that's what Apple was trying to get across: buy some quality hardware that is updated regularly instead of spending low on cheap things. It's a poor argument for sure, but I wouldn't call it out of touch, because it is the de facto argument salespeople use to get you to purchase more expensive things.

88
scblock 1 day ago 0 replies      
The sanctimoniousness in here is especially ironic given the general attitude of HN and YC about money and technology. It's hard to take you seriously.
89
nkrisc 2 days ago 0 replies      
I think the author's second point brings up something important:

If I've replaced every component in my desktop computer, even the case at one point, is it still the same computer?

90
robmcm 1 day ago 0 replies      
The elephant in the room here is that Apple support their products for longer than anyone in the industry.

Support for legacy iOS devices and Macs is very impressive.

91
placeybordeaux 1 day ago 0 replies      
My well-over-5-year-old computer just got a graphics card that will push it into Vive VR range.

Thanks for the advice about getting an iPad Pro, bro.

92
mattkrea 2 days ago 0 replies      
5 years is a little short a time frame, but this wasn't mocking poor people... this article is a bit much.
93
ArcticCelt 2 days ago 0 replies      
"Let them use iPad Pros"
94
sunasra 1 day ago 0 replies      
I think they forgot 'Innovations are based out of Tradition'
95
wodenokoto 2 days ago 0 replies      
I saw the screenshot of the slide and thought they meant old software and mulled quite a bit about the wording.

I have a mac that's over 5 years old and I'm quite proud of the fact that the hardware is still good. Apparently I am bringing shame to Apple.

96
msie 1 day ago 0 replies      
This is how much I'm going to spend on this stupid article.
97
grandalf 1 day ago 0 replies      
The article is reactionary clickbait.
98
eximius 2 days ago 0 replies      
Never attribute to malice that which might otherwise be explained by ignorance...

Though this might be a bit of both.

99
RUG3Y 2 days ago 0 replies      
I'm upset that Bugatti doesn't make cars that poor people can afford.
100
analognoise 2 days ago 0 replies      
Are we seriously that sensitive now? The guy is trying to sell computers.
101
seivan 1 day ago 0 replies      
Clickbait. Pure and simple. I felt the same way when my parents used PCs until I got them iOS devices.
102
Karunamon 1 day ago 0 replies      
This just reads like trying really hard to be offended by something. Literally any comment about legacy hardware (hell, even the word "legacy") could be interpreted this way if the author reaches hard enough.
103
Wonderdonkey 1 day ago 0 replies      
I'd been exclusively with Apple since 1989 when I bought my first Mac (an SE with dual HD floppy drives and a whopping 400 MB external hard drive). I stuck with Apple through the difficult years and then even became the editor of a multi-title Mac publishing operation. Apple loved me so much they gave me a loaded iPod for my birthday one year. I don't know how many people they've done that for, but not many, I'm guessing.

But things started to change with the success of the iPhone and then the iPad. We Mac fanatics used to say that any success for Apple was a win for the Mac platform. But in reality, it hasn't played out that way. The Mac is languishing, and it's languishing in ways that I can only attribute to intent. It's becoming more frustrating to use. Files that you see right in front of you don't come up in a search. Software updates bring rapid obsolescence. Simple things like "Save As" have been changed in Apple's apps so that now Shift-Command-S, for example, is the command to "Duplicate" a file, which you then have to Command-S save. Then when you close out, you have the additional step of dismissing a save dialog on the original document. The hardware, obviously, is not being built to last. (Apple's laptops were always frail things, dating back to my PowerBook Duo and PowerBook 520c, but their desktops and workstations were always bullet-proof; they are not anymore.)

It's a bunch of little things and big things combined to make a very frustrating experience.

This December, I decided to jump ship. I bought a Surface Pro 4. The hardware is awesome (Core i7 with 16 GB RAM). The software needed some manual intervention, but it's coming along. (Microsoft didn't include the WinTab driver, for example, so there was no pressure sensitivity for some apps. And there was no documentation available for it. And frankly Apple's keyboard shortcuts for special characters are better than Microsoft's, but I've been able to emulate those.)

I don't even think about what platform I'm on when I'm working now (except when I use Dreamweaver CS6 because Adobe is freaking horrible and can't deal with Microsoft's trackpad and wants to force me to rent CC, which I will never do).

I never considered an iPad or iPad Pro for a second. They are useless to me. When I get a unit in, it just gathers dust. There's nothing "pro" about it unless your profession is typing e-mails and visiting Web pages. Plus, I actually like computers. I'm old enough that they are still like science fiction to me. I still have dreams about them. And I like to be able to get into the software guts of my computers and mess around in there.

I also refuse to consider the iPhone. Not as long as there are Android phones that have expandable storage and a removable battery. Plus every time I have to deal with one of my kids' iPhones, especially when I have to deal with iTunes on top of that, I want to punch Apple so badly.

And most importantly, I don't want to be at the bottom of the food chain in Apple's iTunes ecosystem. And that's all Apple's customers have become.

104
joesmo 1 day ago 0 replies      
"Maybe Apple really does find the idea of hardware that can function for five years sad and funny these days."

Maybe? Is there even a question anymore? If there is, one only needs to look at the designs Apple has been pushing for the last seven years: un-upgradable, mass-consumer-grade quality products. If anything has been crystal clear from the disappearance of all upgradeable parts across their products over the last few years and their constant events (at least 3 a year), it is that they want you to buy new shiny stuff and they don't care about supporting the old stuff. Apple will support you for one year, or up to two or three (depending on product) if you pay a few hundred extra for AppleCare. The fact that there is no AppleCare plan longer than two years for iPads and three years for laptops should be another crystal-clear indicator that their products aren't meant to last and be used that long.

That being said, at least with their laptops (haven't used the new Macbook) the hardware quality is generally very high (except for the trackpads). I hope it continues to stay this way.

105
chetanahuja 1 day ago 0 replies      
It's funny because I'm still using a giant Mac Pro from 2008 (yes... the tower) as a home machine. It's crammed full of high-density storage and I've added aftermarket RAM to max out the slots. It's the last Mac model that allows you to upgrade easily.

I think they continued to sell it until 2012 or so but then fixed the "oversight" with an art deco piece of a "desktop" computer with no room for upgrades. I still buy Apple laptops because work involves dealing with Xcode and iOS stuff but for any personal use, no more Apple hardware for me.

106
746F7475 1 day ago 0 replies      
Being this mad over some random comment...

Either upgrade or not, no one cares.

107
DrNuke 2 days ago 0 replies      
You don't need to be the Pope to understand that money is the shit of the Devil, and Apple itself is a big carrier of it.
108
golemotron 2 days ago 0 replies      
This brings hyper-sensitivity to a new level.

Perhaps to make sure no one's feelings are hurt, all ads for new products should be banned.

109
xutopia 2 days ago 0 replies      
Give me a break! That was not a jab at poor people. It was explaining how big a market there is.

If you want to see a jab at poor people, watch Gainsbourg burning a 500-franc note (about 100 USD at the time) on live television. That's a jab at poor people: https://www.youtube.com/watch?v=gMq3Zr9_ARE

110
FussyZeus 2 days ago 1 reply      
While I think the comments about laughing at the poor are on point, the other half of this seems a little "trying to be offended." Yes, it could've been worded better, but it's a joke around our office that the PCs need to be replaced after 1-2 years, whereas we have MacBooks that have been in service for more than 5 with little to no issues.

I think it's generally accepted common knowledge that Macs age far better than PCs, maybe not so much on desktops, but laptops? Definitely. I have a custom-built PC at home myself and I'd never trade it for a MacBook of my own, but after using a MacBook from the company for the last few years, I can't ever go back to a Windows laptop.

4
I switched to Android after 7 years of iOS joreteg.com
822 points by joeyespo  1 day ago   489 comments top 78
1
kev009 1 day ago 21 replies      
I did the opposite and can't believe how much better my life has gotten, because my iPhone is just a simple tool that I use for communications and I don't think about it as a project. With Android, I always wanted to tweak silly things and run CyanogenMod because the handset firmware was always so bad and vulnerable. On several occasions I bricked my phone, requiring hours of recovery, or had transient failures of cell service and communications issues. I guess it's fine if you have the right level of discipline or apathy, or use a Nexus device, which may be more Apples to Apples (harhar).
2
sjenson 1 day ago 6 replies      
Nearly all of the comments here are missing the point of this blog post. The author likes Progressive Web Apps; they are important to him. He's moving to Android because it supports the web better.

That's it.

This isn't iOS vs Android and it certainly isn't web vs native. Yes, the article is critical of native apps (and the app store) so I can see how you'd go there, but it's a distraction. I see this article as an "I want to use the best mobile web platform possible" argument.

3
kalleboo 1 day ago 6 replies      
My main exposure to Chrome web apps is Hangouts on Chrome for Mac and half the time I shut it down and choose to use the native app on my phone instead due to the poor, non-native UI and the battery life impact of Chrome.

edit: the other shiny Google web app example, Google Docs, doesn't work either. In Safari it likes to drop keys, and the last time I used it in Chrome (last autumn), it would either crash the whole tab or freeze up long enough to tell me it gave up and that I should just copy the content and paste it into a new document.

It seems we're re-living the nightmare of Java "cross-platform compatibility" but with an even worse programming language.

> In fact, I think Progressive Web Apps (PWAs) actually have a huge leg-up on native apps because you can start using them immediately

> There's just so much less friction for users to start using them

Every web app I've used has required a painful sign-up process, which is usually where I bail out of the process. Way more friction than an app store install.

4
mrcwinn 1 day ago 3 replies      
The argument seems to be that app developers aren't doing very well on the app store, and you're looking to the free and open web as the place where vast sums of money will be made? For the vast majority of these apps, I beg to differ. The web plays by the same rules as the app ecosystem: it's very expensive to monetize, unless of course you are creating value for someone who has money and minimal friction when paying.

"Unfortunately, the web platform itself wasnt quite ready for the spotlight yet. It was sort of possible to build web apps that looked and performed like native apps..."

Are you talking about 2007 or 2016? Native apps will always outperform non-native apps - and not because of any emotional or "political" reason - but for perfectly obvious technical reasons. Web apps have an extra layer between themselves and the hardware. Native apps do not (or, at least, the layer is much thinner). Even if web apps increase in speed another 100x, native apps will be right there too.

Look, at the end of the day, use Android or iOS. I don't care. I've used both. But don't switch for this reason.

5
BuckRogers 1 day ago 1 reply      
I did the opposite. 7 years of Android to iOS. I'll never go back unless Apple somehow swaps the experience to be more like Android phones, and less like iOS is. But I don't really care about that. I just want my phone to work, to make calls and not fail or slowdown. Not be another computer I have to maintain. iOS in my experience is a great choice if that's the goal.

He hit the nail on the head at the end. React Native and similar tools are simply going to help the app stores. I have no qualms with app stores, as I'm not a webapp diehard.

Just use what makes sense. I never think that is Javascript and take the exact opposite view of the author. I use JS only when I absolutely have to. I prefer to build native platform experiences, which if you're doing more than a CRUD app many times you have to do anyway. I'd work with C#, Swift, Rust, Python and their associated ecosystems before trying to JS All The Things. I find that concept very anti-democratic and regressive.

The Javascript diehard mentality will come to its final death throes once wasm hits V2 and allows every language the chance to work in the browser. Then the web will truly progress, as the author states. Developers will be freed to use whatever they want. Swift on the server, iOS and browser. Let programming platforms and tooling duel it out, not hand the crown to a PL that was created in 1 week. I choose Python, but everyone should be able to use whatever they want as well.

For me, that's the real "progressive web app".

6
itp 1 day ago 4 replies      
Wow. I'm a long time Android user and probably pay more attention than most, and I had no idea web apps had gotten quite this nice. Currently the only web app / web shortcut I have installed is the HackerWeb app[0], which is nice but clearly not taking advantage of all of the functionality it could.

I "installed" Flipkart Lite and the Voice Memos demo app to see the state of the world. Clearly it's possible to build some really nice web apps these days! I hope to see more of it moving forward.

[0] https://hackerwebapp.com/

7
S_A_P 1 day ago 4 replies      
I'm not sure what it is about articles like these that bothers me so much. Is this guy some hacker hero that I should know? I don't care what the platform is, and this has nothing to do with iOS vs Android. I really cannot stand this "why I quit x" type of blog post. Is there a reason this guy's opinion matters more than anyone else's? I know I could just ignore articles like this, but it does happen to be staring me in the face at the top of the list. At the risk of irony, I would much rather see a case made for improving something than an "I chose this because it's better, and I know better than you" article.
8
untog 1 day ago 4 replies      
I agree with pretty much everything in this article - I firmly believe that we're due a "post-app" world where progressively enhanced web sites provide 95% of the functionality we require. But we're not there yet - I'd love to see better WebView integration into native UI components (UINavigationController and the like), to provide things like swipe-to-go-back, which is monumentally hard to do on the web.

But hey. Maybe, just maybe, we'll end up back in a world where cross-platform development is viable. If Apple lets us.

9
mostafaberg 1 day ago 6 replies      
>I don't know about you, but the idea of having a fully capable web browser in my pocket was a huge part of the appeal.

A: Both iOS and Android have fully capable web browsers; I'm not sure what's missing here?

>I'm talking about stuff that QA should have caught, stuff that anybody at Apple actually building apps this way would have noticed before they released.

A: They do pass QA, that's why features are removed

>One quick example that bit me was how they broke the ability to link out to an external website from within an app running in standalone mode. target=_blank no longer worked.

A: Thank god Apple no longer allows that. How do you expect a tiny screen to handle popups and switching web browser views when you click links? This is very bad UX.
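
(For context, the workaround developers reached for when target=_blank stopped working was to intercept the clicks themselves. A minimal sketch, assuming a standalone-mode web app; untested, names hypothetical:)

    // Minimal sketch (hypothetical): send off-site links through
    // window.open so a standalone-mode web app doesn't trap the user.
    document.addEventListener('click', function (e) {
      var el = e.target;
      // Walk up from the click target to the enclosing <a>, if any.
      while (el && el.tagName !== 'A') el = el.parentElement;
      if (!el || el.host === window.location.host) return;
      e.preventDefault();
      window.open(el.href); // may still be ignored on some iOS versions
    });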

>We were running a chat product at the time, so anytime someone pasted a URL into chat it was essentially a trap.

A: I'm not here to judge your decisions or why you did it that way, but IMHO a chat product doesn't really belong in a "web browser"

>The message from Apple seemed clear: web apps are second-class citizens on iOS

A: Exactly, and it is that way for many good reasons.

I see you've mostly switched to Android just so you can continue developing web apps. That's okay for you, but it's not a really good reason at all. Don't be like the people who were bashing Apple when it decided to remove support for Flash Player, because that's one of the reasons the web has become the way it is today. I'm not an Apple fanboy; I also did the switch from iOS to Android, after around 7 years too.

10
criddell 1 day ago 1 reply      
And then somebody in management asks why the new app is missing so many features on his brand new iPhone. In fact, all the C-level folks and board members are primarily iPhone and iPad users and none of them are happy that so many goodies are missing.

If you aren't worried about providing a first-class experience to your iOS customers, then build for Chrome + Android. Although, that sounds a little like "build for IE6 + Windows" 15 years ago.

11
rimantas 1 day ago 0 replies      
It would be nice if the guy stuck to a coherent argument. Meanwhile he talks about a "monarch enforcing a 30% tax", and about iOS developers barely making any money. OK, so where are the numbers? How much money did he make with his "installable web apps" on Android?
12
mastazi 1 day ago 0 replies      
Just yesterday, I switched back to Android after 4 years of iOS and I am really really pleased. I especially like the interoperability between apps and the "draw over other apps" capability. In relation to the linked post:

1- I'm not 100% convinced that web-based apps are always the way to go on mobile platforms, there are many pros and cons.

2- While Chrome for Android supports a wider array of web standards[1], that difference doesn't (yet) seem very significant looking at various sources such as caniuse.com.

I just wish Apple was working more actively on Safari development, both on desktop and mobile: they started from a very good position (e.g. the circa-2010 Safari for iOS was vastly better than the circa-2010 Android browser) and they are now rapidly losing ground.

[1] http://caniuse.com/#compare=ios_saf+9.0-9.2,and_chr+49

13
jamisteven 1 day ago 4 replies      
I for one could never go back to Android. iOS is just such a better user experience, much more fluid. Android feels like Cisco Voice's product lineup, all pieced together: fragmented applications and processes that don't work side by side with each other. The other reason, which is huge to me, is the hardware. I am huge on how things feel in the hand, and in my opinion Apple's hardware is just far superior to anything offered for Android. The best things I have seen hardware-wise were the Samsung Galaxy Alpha and the OnePlus 2. I miss the old Nokia days, the E61/E62; that build quality was top notch, although running Symbian made it a bit of a snail. I tried switching out of iOS and over to a Nexus 5 when it was released; I had pre-ordered it and was super excited for it to come in, but the hardware felt like total shit to me, and after a month I swapped back to iOS. I'm still rocking an original iPhone 5 that's jailbroken; it works better than that N5 any day of the week. Much like cars, it isn't about the size of the engine or the tech it comes with, it's about the whole package and how it all works together, as a unit.
14
hackuser 1 day ago 1 reply      
A bit OT: I'd like a mobile platform that provides confidentiality (from both government dragnets and commercial spying) and end-user control. These seem like fundamentals of any platform, at least as user options, but I haven't found it:

* iOS seems to have some confidentiality, though are users really protected from commercial spying? Of course end-user control is very limited.

* Android provides some end-user control if you root your phone, but it's complicated to utilize. Confidentiality is awful; there are a never-ending number of holes and leaks, AFAICT, many built into the OS. No fork (i.e., ROM) of Android seems to focus on confidentiality, though I'm curious whether Blackberry's Priv locks down the OS in addition to the hardware.

* Basebands neither are confidential nor provide end-user control, in any phones outside of FOSS projects, AFAIK.

* Mobile service providers also are an omnipresent risk.

---

I suspect a decent solution to the baseband and mobile service problem is the following, but I haven't tried it and I know it has some weaknesses:

* a hosted VPN service that provides a firewall (the firewall is needed to filter outbound connections from your phone)

* a cellular router that's pre-paid, tethered to the phone to isolate the baseband from the rest of your handheld computer

* VOIP service for voice and SMS/MMS

15
vjeux 1 day ago 0 replies      
I want to clarify some points on React Native. Unlike what is commonly said, my goal with the project is to make the web better.

A fundamental problem with the web as it exists right now is that as a "user", you cannot go one level deeper when you want/have to in order to provide a good experience. There's a big list of things, like customizing image decoding/caching or extending the layout part of CSS, that are encoded in the browser and cannot be changed in userland.

The way to solve your problem is to convince a browser vendor to implement a solution, then all the other browsers to support it and wait for years such that your userbase can use it. This loop is extremely long and involves a lot of conflicting interests and having to support it forever.

The idea of React Native is to provide a subset of the web platform and hooks to drop down lower whenever you want to. For example, as a user you can use <Image> which behaves like a web <img> and be done with your day. But, if you want to use another image format, or manage your image cache differently then you can implement it and provide a <MyImage> component to the end user.
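
(As a rough illustration, a minimal sketch of that wrapping, with hypothetical names, not the project's actual code; the point is that call sites never have to change:)

    // Minimal sketch (hypothetical names): start from the platform
    // default <Image>, so the internals (decoding, caching, formats)
    // can later be swapped out without touching any call sites.
    import React from 'react';
    import { Image } from 'react-native';

    export function MyImage(props) {
      // Today: just delegate. Tomorrow: custom cache, other formats, etc.
      return <Image {...props} />;
    }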

The advantage is that each app can start building and experimenting with its own low-level infrastructure and replace pieces that the default platform doesn't do adequately for the use case they are trying to solve.

Now, why is it good for the web? Since React Native primitives have been designed to work on the web with a small polyfill ( https://github.com/necolas/react-native-web ), there's now a concrete way to improve the web platform without being a browser vendor. You can prototype with your ideas on React Native and when you figure that one is actually good, now start the process to ship it to the entire web platform. Kind of the same way you can prototype your js transforms with babel and then push them to tc39 to make them official.

If React Native is as successful as I want it to be, the web platform is going to support all the use cases that only React Native can provide today, and we can just rm -rf the entire project and use the web.

16
jarjoura 1 day ago 2 replies      
I think apps are in a lull right now because most were abandoned and left users feeling jittery about pouring their lives into them. Also, few apps spent the effort to take advantage of working offline. If I'm in the subway, I'm basically unable to use anything except games. Although I couldn't play the last couple of games I tried because they kept trying to connect to an ad server that would fail, and so the game wouldn't progress.
17
ryao 1 day ago 3 replies      
> WebBluetooth (yup, talking to bluetooth devices from JS on a webpage)

This sounds like a great new attack vector for the black hats of tomorrow.

There are just some things that a web browser should not do. Exposing things that previously required escape from sandbox attacks is one of them.

18
rcarmo 1 day ago 0 replies      
I tried to do this a few years back and it was completely impossible to order anything from the Google store and have it delivered to Portugal.

Although I routinely rebuilt Android to reflash my Nook Color and even rebuilt Android x86 for the "Magalhães" school laptops on a lark, I could not beg, borrow or steal an Android device with "proper", vanilla Android for myself without resorting to shady imports and zero warranty.

So after a year of using an HTC One[1] and, later, a moderately vanilla LG 4, I quietly went back to the iPhone, got a Nexus 7 (2013) to scratch my occasional Android development itch, and haven't looked back. The ecosystem is _so_ much better, Safari on it (and my iPad) still knocks Chrome on Android out of the park from a user perspective, and I can tinker all I want on stuff like the Remix PC and the ODROID without having to put up with a lousy phone user experience.

Would I use Android? Yes, for sure - but I wouldn't _like_ it.

Would I develop for it? Sure, no problem. Did that for digital signage, even[2].

Would I develop for it _first_? Doubtful. The only serious money in it is in vertical (B2B) apps and suchlike.

Would I develop web apps for it _first_? Like... are you serious? With the market being what it is?

So although I "get" the article, I think it's not that realistic.

[1]: http://taoofmac.com/space/blog/2013/10/20/2230
[2]: https://github.com/rcarmo/android-signage-client

19
ar0 1 day ago 1 reply      
TLDR: Chrome on Android supports Service Workers and WebRTC while Safari on iOS does not. This means Android these days is better suited for fully-fledged web applications that do not require a native app (or at least a native app wrapper).
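
(For anyone who hasn't seen it, the page-side half of that is tiny. A minimal sketch, assuming a sw.js file served from your own origin:)

    // Minimal sketch: feature-detect and register a service worker.
    // Chrome on Android supports this; iOS Safari did not at the time.
    if ('serviceWorker' in navigator) {
      navigator.serviceWorker.register('/sw.js').then(function (reg) {
        console.log('service worker registered, scope:', reg.scope);
      }).catch(function (err) {
        console.log('service worker registration failed:', err);
      });
    }
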
20
frobware 1 day ago 2 replies      
Sadly, the web is an accessibility nightmare. If that changes, then sure, I could move too. But there's a lot that modern versions of iOS get right regarding accessibility, stuff that I wish google/android would do too.
21
lucian1900 1 day ago 1 reply      
I like native apps. I'm still annoyed there's no native desktop Hangouts app and how many things Atom gets wrong.
22
nostromo 1 day ago 0 replies      
Putting your development preferences ahead of your customers' preferences is a recipe for failure.
23
64bitbrain 1 day ago 0 replies      
If I ever get an Android phone, it will be a Nexus-series one. I had an HTC and I waited ages for the Android 4 update because AT&T didn't yet have their "customized" version, with a bunch of useless apps on it. On the other hand, my friend was able to upgrade to the latest version of Android because he was using a Nexus. I switched to an iPhone and I loved it. Better battery life, and a clean installation.
24
milge 1 day ago 0 replies      
I also went from iOS to Android (iPhone 4s -> Nexus 5x). I had my 4s for 5 years and loved it. I'd still have it if it weren't for iOS 9 being too big to install and verizon overcharging. I've developed apps on both.

Some apps have to be apps to use sensors and devices built into the phones. A lot of apps could probably get away with being mobile sites. Doubly so with some of the new HTML technologies being introduced by the W3C.

Because Android and iPhone are owned by companies, they can move fast. The web has to accommodate for many more devices. So web standards move slower. In the time apps have become huge, a lot has been added to web standards. But I'm guessing most people haven't noticed. My guess is people are used to using frameworks and have abstracted themselves away from the basics.

As a challenge to the reader, see what you can build in only JS/HTML/CSS with no server side. You'll be pleasantly surprised by what you can accomplish.

25
greatlakes 1 day ago 0 replies      
I think the differentiating factor here is Chrome's push to support and utilize the Service Worker API (https://developer.mozilla.org/en-US/docs/Web/API/Service_Wor...). The opportunity for web apps to have an offline experience and utilize push notifications is not only exciting but game changing for the web platform as a whole.
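
(The offline part boils down to a small worker script. A minimal cache-first sketch, with hypothetical file names:)

    // Minimal sketch of the worker side (a hypothetical sw.js):
    // pre-cache an app shell at install, then serve it cache-first,
    // which is what makes a web app usable offline.
    var CACHE = 'app-shell-v1';

    self.addEventListener('install', function (e) {
      e.waitUntil(
        caches.open(CACHE).then(function (cache) {
          // Hypothetical asset list for the app shell.
          return cache.addAll(['/', '/app.css', '/app.js']);
        })
      );
    });

    self.addEventListener('fetch', function (e) {
      e.respondWith(
        caches.match(e.request).then(function (hit) {
          return hit || fetch(e.request); // fall back to the network
        })
      );
    });
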
26
Exuma 1 day ago 4 replies      
I hate any title that begins with "why". Seriously, no one cares. People have opinions. Even seeing "why" makes certain I will not read your article.

Let me guess the following without even reading...

* several paragraphs of whining about things that are just personal preference.

* making broad/generalized wide sweeping statements and stating them as fact

* tons of rhetorical questions followed by over simplified answers in support of the other product

* dripping with misguided enthusiasm, using lots of words in CAPS and BOLD.

27
t3ra 1 day ago 1 reply      
I am always surprised when people say things like "Chrome is bringing X API". Take a look at these HTML5 Web APIs: https://developer.mozilla.org/en-US/docs/Web/API

They have been here for some time now! And Mozilla built a whole operating system around them which has "Progressive Web Apps" at its core!

28
Synaesthesia 1 day ago 0 replies      
Apple have actually always been pretty excellent with Safari performance and features on iOS. They were impressive from the start and they have kept pace with Google with regard to JavaScript performance; overall performance has usually been class-leading, rendering too. OK, they're missing WebRTC right now, and workers, but I'm pretty sure WebRTC will come soon and workers too at some point.
29
viseztrance 1 day ago 1 reply      
Meanwhile, on the desktop, Google Music doesn't work without Flash installed, and there's no desktop client in sight. Great times.
30
krzrak 1 day ago 4 replies      
Sidenote: I checked Google's Project Fi - damn, it's expensive. For $20/month you get unlimited calls and texts, but you have to pay an extra $10 for every 1GB of data.

Here in Poland for $10 I get the same unlimited calls and texts plus 4GB of LTE included (and then you're limited to 1 Mbps - but you can take the $13 plan and get unlimited GBs too).

31
dimillian 1 day ago 1 reply      
Show me one good mobile web app that:
- is useful
- works offline
- is fast
- doesn't have weird glitches
32
Kjeldahl 1 day ago 2 replies      
Good post. There's one other challenge though and that is access to native GUI widgets. Just having an app icon and appearing in the task switcher simply isn't enough. It's one of the problems React Native tries to solve, although I have to admit I'm not impressed with it so far. With the momentum Javascript is having I wouldn't be surprised if most vendors will release "native" javascript bindings to their platforms anyway, which hopefully will remove the last missing piece for "native experiences" using Javascript on iOS and Android (and for that sake, Windows 10 and OSX).
33
zanny 1 day ago 0 replies      
I'm actually going to be on topic off topic, but I seriously hope that somehow we have celestial alignment and QML can somehow take off as the defacto networked app standard. HTML/CSS/JS is a document format, styling for said documents, and a language cooked up in a week to bake into a browser in the 90s. And the 90s language is the best part!

QML is ground up meant to write interfaces in, and provide all kinds of critical functionality you would want on everything from mobile to televisions to toasters to your desktop:

* Hardware acceleration everywhere.

* DPI scaling.

* Ability to write controls in native C++ or as composite elements in QML itself.

* Signals and slots throughout all aspects of the framework, instead of callback hell.

* Intuitive and first class animations support.

* Native look and feel on almost all platforms through the Controls API, with the ability to restyle them however you want.

* All aspects of the framework support network transparency. You can associate resources remotely or locally, and all the necessary properties to track loading and downloading are available, and the API handles component loading from web services much more intuitively than HTML script / css loading.

I love QML a lot, and there is even a project called qmlweb to run it in the browser, but I really want to see http://test.org/app.qml be a thing. Having written my share of web applications and QML ones, I have no idea why anyone thinks spreading the design disaster of the traditional web to encompass all user software is the best we can achieve.

34
ignoramous 1 day ago 1 reply      
I am not entirely sure if Android is the best mobile platform out there. Apple continues to innovate at an incredible pace on its hardware and software. It is untouchable as far as HCI is concerned; they just seem to get most of the UX right. It's amazing to see them make computers that work and behave like a charm.

Pricing is unreasonable, TBH. And that's where the Android ecosystem has held an upper edge for too long now. Android as a platform, superior enough technology-wise, is terrible 'fragmentation'-wise. Apple's laser-sharp focus on UX across their entire line-up is commendable. To an extent, they think about their end-users at a level unparalleled at other tech companies: not supporting Flash, pushing aggressively ahead with ad-blocker support, adding a voice-enabled assistant, iCloud, etc. Apple's radical re-think of the smartphone is a miracle. Almost everyone before them got it wrong. They are operating on some other level altogether.

Google, I think except for Google Now and their notifications scheme on Android have mostly been playing a catch-up with iOS.

I think Google faces the same issue with their Cloud offerings too. All the talk of the most advance platform/tech in the world and they still languish behind AWS and Azure.

35
Splendor 1 day ago 0 replies      
If you want to argue that I shouldn't expect my user to have the newest iPhone you shouldn't also list WebBluetooth as a pro. My user probably doesn't have a device that supports it either.
36
__m 1 day ago 0 replies      
After 7 years of people switching from Android to iOS or vice versa, I stopped reading blog posts about it.
37
hackergary 1 day ago 1 reply      
Sounds like someone trying to force web apps to do native apps' jobs, when something like React Native bridges web languages and full native benefits/performance.
38
jonlunsford 1 day ago 1 reply      
I also just made the exact same switch, after years of wanting more control over my hardware without jailbreaking. I just want to install f.lux, for crying out loud! As a web dev, I'm very excited to lose the chains of iOS :)
39
eranation 1 day ago 1 reply      
I'm a Java guy, open source advocate, I love to have "power user" features and I was an android guy since android came out. I recently made a move to iOS (iPhone 6), and I'm not looking back.

It has far fewer features, it's a walled garden and all, and I have to learn a new language (or two) to be able to develop apps for it (and pay $100), but the reason I like it so much is that it simply works.

Not just on the software side: my Android devices always had more issues. My Galaxy S III was in for repair at Samsung three times for different reasons; so far with the iPhone I have had no software or hardware issues.

And when my wife had battery issues with her iPhone 5c, instead of taking it for fixing they just gave her a new one on the spot and apologized for the inconvenience.

Simple, do-one-thing and do it right devices, that simply work.

This is a classic "do more with less", less features, nothing too exciting, but the little they have simply works.

40
nilkn 1 day ago 0 replies      
This was a pretty interesting article, and from the title alone I had absolutely no idea this was actually a discussion about the relative merits of web apps and native apps on phones, with the main claim being that we've nearly reached the point where web apps are viable and that Android happens to support this better at the moment. I suspect many others were caught off guard too (and perhaps did not even read it), given how many comments here are just addressing the generic issue of iOS vs. Android and all the drama that comes along with someone emphatically announcing that they've at last switched to the other side.

This is why I think that rhetoric phrased in terms of one camp vs. another is often greatly counter-productive.

41
kdamken 1 day ago 1 reply      
My only issue with iOS is that Safari doesn't play WebM. You have to download the files and open them with the VLC app. I wish they would just accept that it's a solid format and adopt it, but I don't see that happening any time soon.
42
incepted 1 day ago 0 replies      
> Of course I dont know the full backstory, but it sure seemed like the original plan for 3rd party developers on iOS was to have us all just build apps using the web.

Correct, there was no SDK on the first generation iPhone. It was a closed device, like all Apple devices. And that's how Jobs wanted it, he just thought that the idea of third party applications running on this device was pure absurdity.

Then Android came out and Jobs had to adapt.

> Apple made what turned out to be a really smart business decision: they released an iOS SDK and an App Store and the rest is history.

Kind of. Apple made a really smart business decision: they realized that if they didn't match Android and provide an SDK as well, they would lose. So they followed suit.

> The end result, for those of us still trying to build installable web apps for iOS was that with nearly every new iOS release some key feature that we were depending on broke.

This makes it sound as if these features got accidentally broken. No, they were intentionally removed or crippled because they either threatened Apple's dominance or cut into their profits. You could call that another set of "really smart business decisions"

43
minionslave 1 day ago 1 reply      
I just realized that half of the applications I have on my phone can't be used without a data connection.

What happens when I lose signal? The cloud is nice, but I need some offline in my life too.

44
agentgt 1 day ago 1 reply      
For me "Continuity" [1] (ie phone call on computer if I can't find my iphone) is the killer feature for why I stick with Apple.

I know there is something sort of like it for Android, but someone showed it to me and it didn't really work.

[1]: https://support.apple.com/en-us/HT204681

45
pawelkomarnicki 1 day ago 0 replies      
For me these two platforms are more or less the same from a user perspective. There are some cosmetic changes, like notifications being handled better on Android by bundling them, instead of pinging for every single one. Apps are usually on both platforms, same with games. iOS users at my work seem interested and impressed by the Nexus 6P, and some consider the possibility of switching someday. But it really doesn't matter as long as the device gets the stuff done, does it? :P
46
alexkavon 1 day ago 0 replies      
I have to say that I'm all for web apps and web-based apps using things like Cordova or whatnot, but recently my company's app has been hitting some walls. There are a lot of great things those systems can do, and they're great for starting out. However, in the long run you might as well consider developing native, or developing using something like Xamarin (which will probably be even more free soon). Native development just provides a less kludgy way of developing. My company will be making the switch soon in this light.

EDIT: I'd also like to say that the reason it's tough to develop for the web is languages like Javascript. Sure, it's getting better, very slowly, but it also doesn't really allow for other languages to run in the browser and probably won't in the future. Sure, you can compile, but why compile to JS, use a web view, and work around conflicts while developing an app, when you can use a typed language and access APIs that work?

47
Negative1 1 day ago 0 replies      
Great writeup, thank you!

I actually did the opposite. Owned a Nokia "smartphone" when I got my first gen iPhone. Stayed on for 2 more generations then switched to a Samsung Galaxy (reason: wanted to see what this whole Android thing was about).

In every way it was a painful experience but I stuck with it for a few years. When I finally switched back to an iPhone I was like, wow, it just works. Forgot how that felt.

I'm still a fan of Android and believe it does some things so much better. Google Now is actually incredibly cool (too useful to be creepy). Music library management was much better (I miss you so so much N7 player). eBook reading was also better (Moon+ is amazing).

On the other hand, even as a power user (I program Android and iOS apps for a living) it frustrated me to no end. Android is here to stay (which is great) but from a usability (i.e. user friendliness) perspective it still has so much further to go.

48
nevir 1 day ago 0 replies      
Another iOS -> Android switcher here. I owned every iPhone up to and including an iPhone 6, and then switched to a Nexus 6.

From my perspective, the two platforms (and when I talk about Android, I mean Android-on-a-Nexus) are pretty much homogenous. They look and feel very similar, behave similarly, etc, etc.

49
abpavel 1 day ago 0 replies      
"I want the ability to create app-like experiences on the OS with web technology.Very little seems to be happening in that regard as far as I can tell"

I'm not sure why this doesn't sound like a compelling reason for a switch. I find the maintenance aspect much more persuasive. Usually people are paid to do sysops: administering, maintaining and tinkering with an OS. It's a job. And handsomely paid at that. Doing the sysops job and having to pay for it, just to maintain your own phone, seems like a bad economical proposition.

50
Polarity 1 day ago 1 reply      
I switched to Linux (elementary) last year after years of OS X. Feels good and fast.
51
brotoss 1 day ago 1 reply      
I want to switch back to Android soooooooo so badly. But iMessage is too damn convenient.
52
alexchantavy 1 day ago 0 replies      
It'd be more accurate to title the article "Why Apple needs to treat Progressive Web Apps as first-class citizens".

It's less about iOS-vs-Android than it is about eliminating friction between the web and mobile apps. I enjoyed this analysis very much but if the article wasn't so highly upvoted already I might have skipped over it due to the title.

53
stcredzero 1 day ago 1 reply      
I don't know whether or not this type of app was actually intended to be the primary mechanism for 3rd party devs to build apps for iOS, but regardless

This was basically said by Steve Jobs and Scott Forstall during a WWDC keynote.

54
mladenkovacevic 1 day ago 1 reply      
Been with Android since forever, but the battle for security that Apple has recently been fighting on behalf of their customers is enough to make me want to start considering iOS devices in the future.
55
RogueIMP 1 day ago 0 replies      
My first smart phone was an iPhone 4. After switching to the Galaxy S3, I was sold!

Apple is good for users who want a device that is simple, set in its ways, and easy to use. That makes it hard to break. Android makes a phone that has unlimited potential, but at your own risk. As an IT guy, I'm a tinkerer... so I prefer the latter. :) It's all about preferences.

56
userium 1 day ago 0 replies      
We just today published a UX checklist for iOS and Android apps (https://stayintech.com/info/mobile_ux_checklist). There are some good ideas on this thread that I can later add on that list! Hopefully useful for some of you.
57
merpnderp 1 day ago 0 replies      
The point about the full web app experience is a good one. Android's Nexus One left me high and dry on promises of continual updates and pushed me to my iPhone 4s, from which I've never come back. But if iOS doesn't get full support for service workers soon, I'll have to look again at Android.
58
r0m4n0 1 day ago 0 replies      
Interesting opinion. I am struggling to think of a single standalone web app I would benefit from... I use a few native apps that work flawlessly, plus the clean out-of-the-box functionality for phone calls and web browsing. I guess I'll stick with iOS (shrug)
59
cdevs 1 day ago 0 replies      
He basically lists all the reasons I've stayed away from Android. As a tech toy it's fun to play with, but as an everyday phone it scares the hell out of me security-wise, when every minute they are opening up the attack surface.

So I've been with the iPhone ever since; it opens emails, reads texts and webpages, and I barely open the App Store anymore. But I can see your side, as I love writing code and tearing things apart to mod - I just decided my phone wouldn't be one of those things.

60
jshelly 23 hours ago 0 replies      
These statements and arguments are so pointless these days. Use whatever you prefer and be happy.
61
mayoff 1 day ago 0 replies      
I can see this reasoning being right on the money, if you can be happy building web apps.

Personally, I find that developing with web technologies is a miserable experience, and developing with iOS native technologies is a joy. YMMV.

62
rachkovsky 1 day ago 0 replies      
How about in-app purchases? Wouldn't it be harder to implement a low-friction flow?
63
komali2 1 day ago 0 replies      
My only fear is that ultimateGuitarTabs will use these developments to make visiting their site on mobile even more of a hellish experience.
64
yegle 1 day ago 0 replies      
The author forgot a key point: a web app allows you to reuse the cookies in the browser, so users don't need to log in again from your app.
65
SiVal 1 day ago 0 replies      
Apple is just taking a page from Microsoft's old playbook, sabotaging the Web platform in order to prop up the competitiveness of its native platform. When MS owned the dominant platform, they made sure that the Web browser they shipped pre-installed on every Windows machine was always just good enough to claim to be a usable Web browser (or it might have driven people away from Windows), yet always bad enough to make the Web itself look bad compared to Windows. With the biggest benefit of the Web being its reach, anything that could limit the reach of new powers could hold back its spread, and MS did hold back its spread for years.

At the same time Apple, having no leverage from their own native OS of the present, touted their hardware/OS/browser stack as the best way to use the platform of the future--the Web--to make themselves more relevant in a MS-dominated world and sell more hardware. They did a lot of good for the Web platform in the past.

Fast forward to today: the iPhone ignited the explosion of mobile computing and made Windows' dominance of the desktop into being merely the biggest frog in a smaller and smaller pond. MS no longer had a monopoly to defend, repented for its sins, and began to build first-rate, evergreen browsers to stay relevant in the new world. (Competition is a wonderful thing.)

And Apple took their place, not as the monopoly OS in the new, big pond, but as an OS that was a large enough part of it that it could make things "not work on the Web" by making them not work on iOS. They manage to frequently be behind in getting new things to work in Safari (cf: caniuse.com), while being careful not to be so far behind that it affects their reputation with the general public and weakens them in competition with Android, and they prohibit any superior browser from interfering with this delicate "hurt the Web without hurting yourself" strategy by banning all others from iOS.

The result is that anything iOS Safari can't do, Web developers can't use: iOS Safari's shortcomings appear to be the Web's shortcomings, which can be overcome by committing to Apple-proprietary alternatives.

They can't afford to fall too far behind, though, or conventional wisdom will gradually emerge that iOS isn't as good as Android at "Web stuff". And as "Web stuff" improves on other platforms, the Web matters more and more as does your reputation for supporting it.

If developers, blogs, pundits would talk and post about it every time iOS Safari fails, yet again, to support some new Web technology, and even release some features that work nicely on Android/Chrome but require a native app on iOS, "because, you know, the iPhone's Web support is not very good, as everyone knows...", it will increase the pressure on Apple to shift the balance of "good but not too good" farther forward.

66
Touche 1 day ago 1 reply      
I'm amazed when I go to webdev conferences and see 90% iPhones, including many of the most prominent "javascript celebrities". Then I tell myself that just because you work on something doesn't mean you are passionate about it. I'm passionate about the web and couldn't imagine using an OS where all major features get delivered 4 or 5 years after creation (like IndexedDB was).
67
agumonkey 1 day ago 0 replies      
All this makes me wonder if we should change the whole idea of device, users, business.
68
exabrial 1 day ago 1 reply      
Galaxy S7 is far superior to any of the iPhones. Take it outside in the rain
69
listingboat 1 day ago 0 replies      
But Android users never update their OS, and there are a lot of old Android OS versions to support, correct? Additionally, the device manufacturers control the OS distribution and what's included.
70
asai 1 day ago 1 reply      
The web is a patchwork of different frameworks, languages and standards without any clear direction as to where it's heading. Why anyone would want to work with JS is also beyond me.
71
daxfohl 1 day ago 0 replies      
Web apps are okay but really there just needs to be a better way of 'using' native apps. A 'yes I want to use you now but no I don't want to install you' button.
72
Jonasen 1 day ago 0 replies      
Late bloomer, you say? :)
73
brodo 1 day ago 0 replies      
Yay for intellectual diversity!
75
RunawayGalaxy 1 day ago 0 replies      
Didn't need a whole blog post. The necessity of moving files would have been sufficient.
76
wnevets 1 day ago 0 replies      
The fact that apple has to take so many features from android speaks for itself.
77
Zigurd 1 day ago 0 replies      
TL;DR (applicable to all articles in both directions): Apple software quality has gone to crap. Android is an inconsistent mess and I hate $OEM or $CARRIER bloatware.

In fact, both iOS and Android are usable. If you had one or the other issued to you by an employer, it would be fine. The only shocking thing is that there isn't a third and fourth choice with a vibrant device and app ecosystem.

78
Jerry2 1 day ago 6 replies      
>So, instead of opening my text editor I placed an order for a Nexus 6P

Nexus 6P is notorious for atrocious build quality. It bends more easily than a bar of chocolate. [0] Google should do a recall on these things. It bends a lot more easily than an old iPhone 6 plus.

[0] https://www.youtube.com/watch?v=r3cWVdLqXCg

Edit: I see Google fanboys decided to downvote this comment instead of engaging in a debate. This is not in the spirit of Hacker News. I know HN has a lot of Google employees who are extremely touchy, but come on.. be objective once in a while.

5
Left-pad as a service left-pad.io
909 points by manojlds  1 day ago   250 comments top 53
1
c4n4rd 1 day ago 4 replies      
This is really exciting!!! I was a bit disappointed that right-pad will be out only in 2017. I am looking forward to that release because there is high demand for it now.

What kind of load balancing is being used on the back-end? I called leftpad(str, ch, len) with the length I needed and noticed that it is not very scalable because it is blocking.

A better approach I would recommend to those using it is to call the API in a for loop. In my tests, it had performance very close to what I see in C or assembly.
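A sketch of the pattern for anyone who wants to adopt it (query parameter names assumed from the API examples elsewhere in this thread; untested, naturally):

 // Pad one character per synchronous round trip. Maximum blocking,
 // maximum enterprise.
 function leftPadAtScale(str, len, ch) {
   str = String(str);
   for (var i = str.length; i < len; i++) {
     var xhr = new XMLHttpRequest();
     xhr.open('GET', 'https://api.left-pad.io/?str=' + encodeURIComponent(str) +
       '&len=' + (i + 1) + '&ch=' + encodeURIComponent(ch), false); // sync, of course
     xhr.send();
     str = JSON.parse(xhr.responseText).str; // one more char of padding
   }
   return str;
 }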

I was a bit turned off that the free version can only handle strings up to 1024 in length. I know you need to make some money, but it is a big turn-off for a lot of my projects.

Edit: I finally signed up for it but still noticed that I am only allowed to use 1024. I called your customer support line and they said I was calling the API from multiple IP addresses and for that I need an enterprise license. Please help me with this issue; it is very crucial at this point, as my project is at a complete stop because of this.

2
pilif 1 day ago 1 reply      
As a very sarcastic person, I highly approve of this. This absolutely reflects my opinion of all this mess.

Thank you for making this site so that I don't have to write an opinion piece like everybody else seems to have to. Instead, if asked about the issue, I can just point them at this site.

Yes. This isn't constructive, but this mess had so many layers that I really can't point to a single thing and offer a simple fix as a solution.

As such I'm totally up for just having a laugh, especially when it really isn't being nasty against specific people but just laughing about the whole situation.

Thank you to whoever made this

3
faizshah 1 day ago 7 replies      
I don't understand why this community has to have a weekly cycle of bashing different programming communities. Every week there's a new drama thread bashing Java devs, Go devs, Javascript devs etc. The thing that I come to this community for every week is to read about new developments in our industry. If you don't come here for that, then what are you coming here for?

And wasn't it just a few months ago people were praising the innovation of Urbit for having a 'global functional namespace'? But because it's popular to hate on javascript devs for -- sorry, I forgot this was javascript bashing week -- reinventing concepts from other areas in computer science and software engineering, the HN community has to start hating on another programming community's work.

That said this is a pretty funny satirical page, apologies to the author for venting at the HN community.

4
mschulze 21 hours ago 4 replies      
As a Java developer, I am a bit jealous. When people joke about us we usually only get a link to the Spring documentation of AbstractSingletonProxyFactoryBean (or maybe the enterprise hello world), but no one ever wrote that as a service. Maybe someone can do that? https://abstractsingletonproxyfactorybean.io seems to be available!
5
supjeff 1 day ago 3 replies      
6
nogridbag 21 hours ago 2 replies      
I'm late to the left-pad discussion. I thought it was considered a bad practice to depend on an external repo as part of your build process. At my company we use Artifactory to host our maven libs. Even if one were removed from Maven Central our builds would continue to work fine (in theory).
7
andy_ppp 1 day ago 24 replies      
Hahaha - isn't it hysterical how everyone is using npm for small reusable code pieces! Aren't they morons! How stupid of people to trust their package manager to be consistent and correct and return the packages they were expecting.

How stupid of people to reuse small often used functions that only do one thing well.

How does everyone taking the piss intend to protect themselves from this in their OS package manager, or PPM or composer or pip?

It's not javascript devs' fault that the standard library is so piss poor you need these short code snippets, and I've definitely included small 5-10 line packages via npm or other package managers rather than roll my own because it's likely they have bug fixes I haven't considered. I can also use npm to share these snippets between the many projects I'm building.

* No, I wasn't affected by this because I review the packages that I want to include; however, the level of smugness here is absolutely ridiculous.

8
icefox 1 day ago 1 reply      
Nice, it is even bug-compatible:

 http://api.left-pad.io/?str=foo&len=7&ch=12

returns {"str":"12121212foo"} and not {"str":"1212foo"}
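For reference, the original left-pad loop looks roughly like this (paraphrased from memory, so treat it as a sketch rather than the canonical source), which shows why a multi-character ch overshoots: len counts prepend operations, not the final width.

 function leftpad(str, len, ch) {
   str = String(str);
   if (!ch && ch !== 0) ch = ' ';
   var i = -1;
   len = len - str.length; // number of *prepends*, not a width check
   while (++i < len) {
     str = ch + str; // a 2-char ch adds 2 chars per prepend
   }
   return str;
 }
 leftpad('foo', 7, '12'); // => "12121212foo"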

9
gumby 18 hours ago 0 replies      
HELP!! The CEO heard about this new service and now my manager told me we need to upgrade all our packages to this new service ASAP! But there's nothing on stack overflow I can use to change our system! I need to get this pilot done STAT so we can plan the migration and send it out for bid!

HELP!!

10
huskyr 22 hours ago 1 reply      
Reminds me of Fizzbuzz enterprise edition: https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpris...
11
jaxondu 1 day ago 0 replies      
It's 2016! Left-padding without any deep learning algorithm is so lame.
12
beeboop 1 day ago 3 replies      
Tomorrow: left-pad.io announces $120 million investment at $1.2 billion valuation

Month from now: left-pad announces purchase of $170 million office building in SV to house their 1200 employees

13
jeffreylo 1 day ago 2 replies      
Doesn't work with unicode characters:

 # ~ [8:47:18]$ curl 'https://api.left-pad.io/?str=&len=5&ch=0'
 {"str":""}

14
andrepd 22 hours ago 0 replies      
>`left-pad.io` is 100% REST-compliant as defined by some guy on Hacker News with maximal opinions and minimal evidence.

Wonderful

15
Flott 18 hours ago 1 reply      
I'm desperately looking for some feedback from big users.

- Does it scale well?

- Is it pragmatic for long term use scenario?

- Is it Thread safe?

- Does it learn from previous calls made to the API?

- Does it have a modern access layer?

- Does it enforce 2nd factor authentication?

- Is it compatible with Docker containers?

- What about multi-region scenarios?

- Any benchmark available showing usage with AWS + cloudflare + Docker + a raspberry pi as LDAP server?

16
rfrey 20 hours ago 2 replies      
I'm very disappointed in the creators' choice of font for their landing page. Practically unreadable, my eyes burned.
17
maremmano 1 day ago 1 reply      
What about splitting left and pad in two microservices?
18
stared 20 hours ago 1 reply      
I am waiting for "integer addition as a service" (vide http://jacek.migdal.pl/2015/10/25/integer-addition.html).
19
gexla 11 hours ago 0 replies      
Can someone explain to me why I might need this? I checked the site and the documentation is horrible. The site isn't professionally done and there are no videos.

Can I rely on this service going to be around in 5 years? It just seems like this company might be, you know, a feature rather than a company.

20
yvoschaap2 1 day ago 0 replies      
While very useful SaaS, I always use the tweet package manager from http://require-from-twitter.github.io/
21
Jordrok 21 hours ago 0 replies      
Very nice! Any plans for integration with http://shoutcloud.io/ ? I would love to have my strings both left padded AND capitalized, but the APIs are incompatible. :(
22
a_imho 22 hours ago 0 replies      
My gut feeling tells me serious software engineers who look down on javascript programmers are feeling justified now. Brogrammers are exposed, hence all the knee-jerking. Indeed, it is pretty funny, but dependency management still remains a hard problem.
23
p4bl0 1 day ago 0 replies      
As a friend said on IRC, it's kind of sad that the website is not made with bootstrap.
24
rahimnathwani 8 hours ago 0 replies      
Do you have any client libraries available for different languages?

I don't want to create a direct dependency between my code and your API. I'd rather create a dependency between my code and your client library's code, as I'm sure you will always keep that up to date with any API changes.

25
TickleSteve 1 day ago 1 reply      
Presumably this is using a docker instance on AWS or the like? </sarcasm>

BTW: Well done... nothing like rubbing it in. :o)

26
creshal 1 day ago 1 reply      
I need a SOAP binding for this, because reasons.
27
chiph 1 day ago 1 reply      
Needs more Enterprise. Where are the factory factory builders?
28
danexxtone 1 day ago 1 reply      
Where do I sign up for alpha- or beta-testing for right-pad.io?
29
sansjoe 17 hours ago 0 replies      
A programmer is someone who writes code, not someone who installs packages. Do you really need someone else to pad strings for you? Come on.
30
ChemicalWarfare 23 hours ago 1 reply      
BUG! (I think) using '#' as a 'ch' value pads the string with spaces:

 $ curl 'https://api.left-pad.io/?str=wat&len=10&ch=#'
 {"str":"       wat"}

Please provide github link to fork/submit pr :)
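Edit: on reflection this is probably not a server bug at all: an unencoded '#' starts the URL fragment, so curl never sends the ch parameter and the API presumably falls back to the default space. Percent-encoding it should work (sketch; I haven't verified against the service):

 // '#' begins a URL fragment, so everything after it never leaves
 // the client; the API sees no ch and pads with the default space.
 encodeURIComponent('#'); // => "%23"
 // e.g. curl 'https://api.left-pad.io/?str=wat&len=10&ch=%23'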

31
bflesch 19 hours ago 1 reply      
I get an error

 {"message": "Could not parse request body into json: Unexpected character (\'o\' (code 111)): was expecting comma to separate OBJECT entries\n at [Source: [B@6859f1ef; line: 2, column: 22]"}
when using double quotes. It seems some JSON parsing fails. Not sure if this can be exploited, so I wanted to let you know.

Demo link: https://api.left-pad.io/?str=%22;

32
pka 14 hours ago 0 replies      
The real discussion is not about package managers, micromodules, or whatever.

It's about "real programmers can write left_pad by themselves" and everybody else just sucks. True scotsmen etc.

Now I don't know why asm people aren't feeling threatened and aren't attacking the C left_pad gurus yet...

33
Mopolo 20 hours ago 0 replies      
It would be fun if a company named Left Pad asked to get this domain, like Kik did at the beginning of all this.
34
talideon 23 hours ago 0 replies      
But the question is, is it enterprise-ready? :-)
35
jug 15 hours ago 0 replies      
If left-pad.io goes down, will it take the rest of the WWW infrastructure with it? I'm missing a Q&A for important and apparently relevant questions like these.
36
idiocratic 23 hours ago 0 replies      
Are you hiring?
37
MoD411 1 day ago 0 replies      
Boy, that escalated quickly.
38
ritonlajoie 1 day ago 0 replies      
I'm looking for a Left-pad specialized linux distro. Anyone ?
39
dkackman1 16 hours ago 0 replies      
SECURITY NIGHTMARE!!!!!!!!!

Without any sort of nonce, this service is trivially susceptible to a replay attack

40
nyfresh 21 hours ago 0 replies      
100 times on the board: http://bit.ly/1RzOIK2
41
jdeisenberg 1 day ago 0 replies      
Have we, or have we not, officially entered the silly season?
42
cmancini 1 day ago 1 reply      
This is great, but I'll need an API wrapper package.
43
yyhhsj0521 1 day ago 0 replies      
I wonder whether the author uses leftpad on this site
44
facepalm 1 day ago 1 reply      
Cool, but it would be more useful if they had an npm module for accessing the service.
45
venomsnake 1 day ago 0 replies      
I don't think this is enterprise ready. And I am not sure that they are able to scale their service. Left padding is serious business.
46
mrcwinn 18 hours ago 0 replies      
They didn't even think to version their API. This is total crap.
47
sentilesdal 14 hours ago 0 replies      
For the graphic designers out there who need left-pad, blue-steel-pad is now available.
48
markbnj 23 hours ago 0 replies      
This is awesome. Props to you all.
49
jschuur 1 day ago 1 reply      
Is it rate limited?
50
justaaron 18 hours ago 0 replies      
this is hilarious and timely
51
d0m 19 hours ago 1 reply      
I'm ready to get downvoted to hell with this comment but here we go..:

I feel like only non-javascript devs are bashing small modules and NPM. All the great javascript devs I know LOVE that mentality.

Let me offer some reasons why I (as a current Javascript dev having professionally coded in C/C++/Java/Python/PHP/Scheme) think this is great:

- Unlike most other languages, javascript doesn't come with a batteries-included standard library. So you're often left on your own to reinvent the wheel. I mean, come on, in Python you do "'hello'.ljust(10)" but AFAIK there is no such thing in javascript. Javascript is more like the wild west where you need to reinvent everything. So having well tested libraries that do one thing extremely well is really beneficial.

- Javascript, unlike most other languages, has some pretty insane gotchas. E.g. "'0' == 0" is true in javascript. Most devs have been burned so badly in so many ways in Javascript that it's comforting to use a battle-tested library, even for a small feature, rather than reinventing it.

- And anyway, where should we put that function? Most big projects I've worked on have some kind of "helper file" that has 1500 lines, and then at some point different projects start depending on it so no one likes to touch it, etc. So, yeah, creating a new module takes a bit more time, but remember that it's not about the writing time but more about the maintenance time. I'd much rather have lots of small modules with clear dependencies than a big "let's put everything in there" file.

- I feel arguing about whether something should be in a separate module is similar to arguing about whether something should be in a separate function. For me, it's like hearing "Hey, learn how to code, you don't need functions, just write it when you need it." And hey, I've worked professionally in projects where they had no functions and it was TERRIBLE. I was trying to refactor some code while adding functions, and people would copy my functions inside their 1500-line file. Let me tell you I left that company really fast.

- It's fair to say that UNIX passed the test of time and that the idea of having lots of small programs is extremely beneficial. It forces common interfaces and great documentation. Similar to how writing tests forces you to create better designs, modularizing your code forces you to think about the bigger picture.

- As far as I'm concerned, I really don't care whether a module is very small or very big, as long as what it does is well defined and tested. For instance, how would you test if a variable is a function? I don't know about you but my first thought wasn't:

 function isFunction(functionToCheck) {
   // Object.prototype.toString reports "[object Function]" for
   // functions, which also catches values from other frames and
   // old host objects.
   var getType = {};
   return functionToCheck &&
     getType.toString.call(functionToCheck) === '[object Function]';
 }
Who cares if it's a 4-line module. I don't want to deal with that javascript bullshit. Yes, I could copy paste that into my big helper file, but I'd much rather use one that the javascript community uses and tests.
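(For comparison, the obvious one-liner is a typeof check; the [object Function] dance above mostly exists to guard against ancient host objects where typeof lied. A quick sketch:)

 // The one-liner most people reach for first:
 var isFn = function (x) { return typeof x === 'function'; };
 isFn(function () {}); // true
 isFn({});             // false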

- Finally, it seems like Node/javascript didn't start that way. Not so long ago we had Yahoo's monolithic javascript libraries and jQuery. Even the first versions of most popular node libraries (such as express) were first written as monolithic frameworks. But they've been refactored into dozens of small modules with clear functions. And now, other libraries can just import what they need rather than the whole project.

OK, so I told you about the good things. What about the bad things?

- Adding dependencies to a project is REALLY HARD TO MAINTAIN. I've had so many bad experiences using node because of that. I.e. I work on a project, it's tested and works fine. 2 months later I clone and start the project and everything breaks. Oh, X and Y libraries decided to fuck everything up, that other library now depends on a new version of Node, but I can't upgrade Node because this other library depends on a previous version of Node. It's complex. I won't go on explaining my solution to this problem; it's enough to say that it is a problem, and installing random amateur libraries in a professional project can lead to disaster.

- It takes longer to code. I've touched on that earlier. It's a tradeoff between writing now vs maintaining later. Take a look at segmentio's github repo: https://github.com/segmentio. I'd personally love to have that as an onboarding experience rather than some massive project with everything copy/pasted a few times. But yes, it took them more time to create those separate modules.

52
shitgoose 20 hours ago 0 replies      
this is fantastic! What is your stack? Are you NoSQL or relational? Redis? What is your test coverage? I am sure you're hiring only trendy developers. I see huge potential in your service, do you accept private investments? I would like to get in now, before Google or YC snatches you! Again, keep up the good work and - can't wait for right-pad next year!
53
Blackthorn 10 hours ago 0 replies      
Wow, so funny. DAE lol javascript?

Ugh, how does this garbage even get upvoted so highly.

6
Docker for Mac and Windows Beta docker.com
864 points by ah3rz  1 day ago   229 comments top 58
1
falcolas 1 day ago 9 replies      
The last time I used xhyve, it kernel-panicked my mac. Researching this on the xhyve github account [1] showed that it was determined to be due to a bug involving Virtualbox. That is, if you've started a virtual machine with Virtualbox since your last reboot, subsequent starts of xhyve panic.

So, buyer beware, especially if said buyer also uses tools like Vagrant.

[1] https://github.com/mist64/xhyve/issues/5

I've said before that I think the Docker devs have been iterating too fast, favoring features over stability. This development doesn't ease my mind on that point.

EDIT: I'd appreciate feedback on downvotes. Has the issue been addressed, but not reflected in the tickets? Has Docker made changes to xhyve to address the kernel panics?

2
tzaman 1 day ago 7 replies      
If I had a yearly quota on HN for upvotes, I'd use all of them on this.

> Volume mounting for your code and data: volume data access works correctly, including file change notifications (on Mac inotify now works seamlessly inside containers for volume mounted directories). This enables edit/test cycles for in container development.

This (filesystem notifications) was one of the major drawbacks for using Docker on Mac for development and a long time prayer to development god before sleep. I managed to get it working with Dinghy (https://github.com/codekitchen/dinghy) but it still felt like a hack.
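For anyone wondering what this unlocks in practice, a minimal sketch (hypothetical path; assumes Node running inside the container): with working inotify, a watcher like this now fires when you edit files from the host, which is exactly what never happened on the old shared-folder mounts.

 var fs = require('fs');
 // Watch a volume-mounted source directory from inside the container.
 fs.watch('/app/src', function (eventType, filename) {
   console.log(eventType, filename); // e.g. "change index.js"
 });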

3
wslh 1 day ago 3 replies      
Can someone explain in simple terms how Docker for Windows is different from Application Virtualization products like VMware ThinApp, Microsoft App-V, Spoon, Cameyo, etc? Also, why does it require Hyper-V activated in Windows 10? I found this: https://docs.docker.com/machine/overview/ but I don't understand whether you need separate VMs for separate configurations or whether they have a containerization technology where you are able to run isolated applications on the same computer.
4
izik_e 17 hours ago 0 replies      
We have been working with hypervisor.framework for more than 6 months now, since it came out, to develop our native virtualization for OS X: http://www.veertu.com. As a result, we are able to distribute Veertu through the App Store. It's the engine for fast virtualization on OS X. And we see now that docker is using it for containers. We wish that Apple would speed up the process of adding new APIs to hypervisor.framework to support things like bridged networking and USB, so everything can be done in a sandboxed fashion without having to develop kernel drivers. I am sure the docker folks have built their kernel drivers on top of the xhyve framework.
5
darren0 1 day ago 3 replies      
This is an amazing announcement, but... The beta requires a NDA. The source code is also not available. This gives the impression that this will be a closed commercial product and that really takes the wind out of my sails.
6
_query 1 day ago 4 replies      
If you're using docker on mac, you're probably not using it there for easy scaling (which was the reason docker was created back then), but for the "it just works" feeling when using your development environment. But docker introduces far too much incidental complexity compared to simply using a good package manager. A good package manager can deliver the same "it just works" feeling of docker while being far more lightweight.

I wrote a blog post about this topic a few months ago; check it out if you're interested in a simpler way of building development environments: https://www.mpscholten.de/docker/2016/01/27/you-are-most-lik...

7
rogeryu 1 day ago 3 replies      
> Faster and more reliable: no more VirtualBox!

I'm a Docker n00b, so I still don't know what it can do exactly. Can Docker replace Virtualbox? I guess only for Linux apps, and I suppose it won't provide a GUI, and won't run Windows to use Photoshop?!

8
rocky1138 1 day ago 1 reply      
"the simplest way to use Docker on your laptop"

I think they forgot about Linux :)

9
nzoschke 1 day ago 1 reply      
Very excited about this. Docker Machine and VirtualBox can be a rough experience.

> Many of the OS-level integration innovations will be open sourced to the Docker community when these products are made generally available later this year.

Does this mean it is closed right now?

10
mwcampbell 22 hours ago 1 reply      
Interesting to see that at least one of the Mirage unikernel hackers (avsm) has been working on this.

https://news.ycombinator.com/item?id=11352594

I imagine a lot of this work will also be useful for developers wanting to test all sorts of unikernels on their Mac and Windows machines.

11
philip1209 20 hours ago 2 replies      
Does anybody have any guides on setting up dev environments for code within Docker? I recall a Dockercon talk last year from Lyft about spinning up microservices locally using Docker.

We're using Vagrant for development environments, and as the number of microservices grows - the feasibility of running the production stack locally decreases. I'd be interested in learning how to spin up five to ten docker services locally on OSX for service-oriented architecture.

This product from Docker has strong potential.

12
totallymike 1 day ago 1 reply      
I'm delighted to read that inotify will work with this. How's fs performance? Running elasticsearch or just about any compile process in a docker-machine-based container is fairly painful.
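(For anyone who wants to put a number on it, a crude sketch with hypothetical paths; timings will vary wildly by setup. Time a few thousand small writes on a volume-mounted path versus a container-local one:)

 var fs = require('fs');
 function bench(dir) {
   console.time(dir);
   for (var i = 0; i < 5000; i++) {
     fs.writeFileSync(dir + '/tmp_' + i + '.txt', 'x');
   }
   console.timeEnd(dir);
 }
 bench('/app/src'); // volume-mounted from the host
 bench('/tmp');     // container-local filesystem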
13
f4stjack 1 day ago 2 replies      
So, let's say I am developing a Java EE app under windows with eclipse and want to use a docker container for my app - how do I go about it?
14
raesene4 1 day ago 1 reply      
This is v.cool, although for the Windows version it'd be great if it became possible to swap out the virtualization back-end so it's not tied to Hyper-V.

At the moment VMWare Workstation users will be a bit left out as Windows doesn't like having two hypervisors installed on the same system...

15
Lambent_Cactus 23 hours ago 9 replies      
Tried to sign up, but the enroll form at https://beta.docker.com/form is blank for me - it just says "Great! We just need a little more info:" but has no fields.
16
mathewpeterson 19 hours ago 0 replies      
I'm really excited to see this because I've spent the last few months experimenting with Docker to see if it's a viable alternative to Vagrant.

I work for a web agency and currently, our engineers use customized Vagrant boxes for each of the projects that they work on. But that workflow doesn't scale and it's difficult to maintain a base box and all of the per project derivatives. This is why Docker seems like a no-brainer for us.

However, it became very clear that we would have to implement our own tooling to make a similar environment. Things like resolving friendly domain names (project-foo.local or project-bar.local) and adding in a reverse proxy to have multiple projects use port 80.

Docker for Mac looks like it will solve at least the DNS issue.

Can't wait to try it out.

edit: words

17
alexc05 20 hours ago 0 replies      
I cannot wait to get home to play with this!

If I were a 12 year old girl I would be "squee-ing" right now. Ok, I'm lying - I'm a 40 year old man actively Squee-ing over this.

:)

It really plays nicely into my "weekend-project" plans to write a fully containerized architecture based in dotnet-core.

18
_mikz 1 day ago 2 replies      
19
nstart 1 day ago 1 reply      
My goodness. This is some of the best news from docker this year and we are still just getting started. Packaging various hot reloading JavaScript apps will finally be possible. Gosh. I can't begin to say just how excited I am for this.
20
numbsafari 1 day ago 0 replies      
I'm really hoping that this will be available via homebrew and not a way to force everyone to use Docker Toolbox or, god forbid, the Mac App Store.

Docker Toolbox just brings back too many nightmares from Adobe's awful Updater apps.

21
sz4kerto 1 day ago 1 reply      
Can some Docker employee explain how file permissions are going to work on Windows? For me, that's the biggest pain (on Win).
22
pokstad 22 hours ago 0 replies      
Funny this appears today, I just discovered Veertu on the Mac App Store (http://veertu.com) 2 days ago and love it. It also uses OS X's new-ish hypervisor.framework feature to allow virtualization without kernel extensions or intrusive installs.
23
alfonsodev 1 day ago 2 replies      
Biggest problem with Boot2docker was volume mounting and file permissions, hope this happens soon.

> Volume mounting for your code and data: volume data access works correctly, including file change notifications (on Mac inotify now works seamlessly inside containers for volume mounted directories). This enables edit/test cycles for in container development
24
jtreminio 1 day ago 0 replies      
I run my stack(s) on Vagrant with Puppet for provisioning. I use OSX, but one of the major pain points of working with Linux VMs on a Windows host is file permission issues and case insensitivity.

I don't think Docker can do anything about case sensitivity, but with this new release will permissions differences be handled better?

25
evacchi 22 hours ago 1 reply      
I wonder if (and hope that!) this fixes the issues[1] with (open)VPN. I can't use xhyve (or veertu) at work because of this.

[1] https://github.com/mist64/xhyve/issues/84

26
AsyncAwait 18 hours ago 1 reply      
Why does signing up for the beta require agreeing to a non-disclosure agreement?
27
newman314 10 hours ago 0 replies      
This is strange. I just created a Docker ID and was able to log into the regular hub, but when I try to log into the beta, it keeps saying error.

Is there a user/password length limit? (I used a 30char user/password. 1password FTW).

28
jnardiello 23 hours ago 1 reply      
To be entirely honest, I'm quite concerned about your choice of Alpine as the base distro. Their choice of musl over glibc might be cool, but if you have to put old libs inside a container, it's hell (if not entirely incompatible).
29
danbee 17 hours ago 1 reply      
I couldn't sign up using Firefox on Windows. I'd enter a username, email and password then the form would just go blank on submission.
30
ruipgil 1 day ago 0 replies      
Finally, I really hated the additional complexity and gotchas that boot2docker carried.
31
Grue3 19 hours ago 0 replies      
I really want to try this, but I'm unable to register. At the page where it says "Create your free Docker ID to get started", after I click Sign Up the page just refreshes and my chosen ID becomes blank with no indication of what's wrong. I've tried several different IDs and none of them worked. Browser is Firefox 45.0.1 on Windows 7.
32
mrmondo 12 hours ago 0 replies      
Thank god for no more Virtualbox; that thing was a pig, with endless networking and IO problems that led every developer using it to come to my team for help.

also, Oracle.

33
bradhe 23 hours ago 0 replies      
This is amazingly cool. We've been using docker at Reflect (shameless: https://reflect.io) since we started it and even if we didn't have all the cgroups features, it'd be super helpful just to be able to run the stack on my laptop directly instead of having the Vagrant indirection.
34
slantedview 21 hours ago 1 reply      
I've been running docker-machine with a VMWare Fusion VM with VT-x/EPT enabled and am using KVM inside my containers to dev/test cloud software. I'd be interested to know if I can still get the performance of Fusion and the support I need for nested virtualization out of Docker for Mac.
35
nikolay 21 hours ago 0 replies      
I've always wondered about invites for open-source projects... that don't even open-source...
36
geerlingguy 1 day ago 1 reply      
Private beta is behind a questionnaire, just FYI. You can't, unfortunately, download anything yet unless you get an invite.
37
paukiatwee 1 day ago 1 reply      
If I read correctly, docker for Mac runs on top of another virtualization layer (xhyve, not VirtualBox) and docker for windows runs on top of Hyper-V, which means that it is not for production workloads (at least for Windows).

So you can only use it for development. And it is closed source. hmmm...

38
girkyturkey 19 hours ago 0 replies      
Finally! I've spent the last month or so on Docker to learn about it as I am somewhat new in this environment. I'm just excited to try it out and have a more broad range of tools.
39
mateuszf 23 hours ago 2 replies      
When I log in and go to https://beta.docker.com/form there is an empty form and the js console says:

 Uncaught ReferenceError: MktoForms2 is not defined
40
silvamerica 21 hours ago 1 reply      
Will there be an easy way to switch / upgrade from docker-machine with vbox without having to recreate all of my images and containers over again?

I know it's a small thing, but it's kind of a pain sometimes.

41
mrfusion 18 hours ago 0 replies      
Would this be a good way to deploy a program based on opencv to nontechnical users? So far I haven't found a good way to do that.
42
rikkus 21 hours ago 0 replies      
So on Windows this runs Linux in their isolated environment? I just got excited thinking it meant Windows in Windows but it looks like that's not the case.
43
awinter-py 19 hours ago 0 replies      
great news but I'm not sure a young startup should be wasting money on what was obviously a professionally produced launch video
44
eggie5 23 hours ago 0 replies      
Using docker on a mac always seemed too hackish b/c you had to run a separate VM. This seems like a step in the right direction and I'm excited to visit docker again!
45
Titanous 1 day ago 1 reply      
Is the source code available? I don't see it at https://github.com/docker
46
ThinkBeat 19 hours ago 1 reply      
I would like to see Windows docker images. Will this ever happen? Or can I do it already?
47
brightball 1 day ago 0 replies      
This is HUGE! Looking forward to trying it out.
48
partiallypro 20 hours ago 0 replies      
Kinda surprised they didn't just wait 7 days and announce this at Build with Microsoft.
49
d_sc 23 hours ago 0 replies      
This is great news to hear, I've been using a brew recipe that includes: brew install xhyve docker docker-compose docker-machine docker-machine-driver-xhyve to get close to what they're doing in this beta. Really looking forward to trying this out. Signed up for the beta!
50
tiernano 1 day ago 0 replies      
The link says it's Hyper-V on Windows, but then says Windows 10 only... Anyone know if Windows Server is also supported?
51
contingencies 16 hours ago 0 replies      
Not-news (support for two new hypervisors implemented, already dodgy package altered) voted up to 718 points. God you people are sheep. I guess what we take from this is docker is getting desperate for newslines.
52
ndboost 1 day ago 0 replies      
shut up and take my money!
53
Ivan_p 6 hours ago 0 replies      
can somebody provide a link for this app? I can't wait anymore! :D
54
eddd 1 day ago 1 reply      
i'll finally get rid of docker-machine, THANK YOU DOCKER.
55
TheAppGuy 23 hours ago 0 replies      
Is this relevant to my app developer community on Slack?
56
howfun 1 day ago 1 reply      
Why would Windows Pro be required?
57
serge2k 21 hours ago 0 replies      
still just VMs?
58
pmoriarty 1 day ago 1 reply      
Unfortunately, despite the title, Docker still does not run natively on a Mac or on Windows. It runs only inside a Linux VM.

From the OP:

"The Docker engine is running in an Alpine Linux distribution on top of an xhyve Virtual Machine on Mac OS X or on a Hyper-V VM on Windows"

7
Citus Unforks from PostgreSQL, Goes Open Source citusdata.com
734 points by jamesheroku  1 day ago   142 comments top 28
1
no1youknowz 23 hours ago 2 replies      
This is awesome. I have experience running a CitusDB cluster and it pretty much solved a lot of the scaling problems I was having at the time. For it to go open source now is of huge benefit to my future projects.

> With the release of newly open sourced Citus v5.0, pg_shard's codebase has been merged into Citus...

This is fantastic, sounds like the setup process is much simpler.

I wonder if they have introduced the Active/Active Master solution they were working on? I know that before, there was 1 Master and multiple Worker nodes, and the solution then was to have a passive backup of the Master.

If, say, they released the Active/Active Master later this year, that would be huge. I could pretty much think of my DB solution as done at that point.

2
devit 21 hours ago 2 replies      
I've been unable to find any clear description of the capabilities of Citus and competing solutions (postgres-x2 seems to be the other leader).

Which of these are supported:

1. Full PostgreSQL SQL language

2. All isolation levels including Serializable (in the sense that they actually provide the same guarantees as normal PostgreSQL)

3. Never losing any committed data on sub-majority failures (i.e. synchronous replication)

4. Ability to automatically distribute the data (i.e. sharding)

5. Ability to replicate the data instead or in addition to sharding

6. Transactionally-correct read scalability

7. Transactionally-correct write scalability where possible (i.e. multi-master replication)

8. Automatic configuration only requiring to specify some sort of "cluster identifier" the node belongs to

3
exhilaration 23 hours ago 3 replies      
4
gtrubetskoy 23 hours ago 4 replies      
If anyone from Citus is reading this: how does this affect your business model? I remember asking at Strata conf a couple of years ago why your stuff wasn't open source; the answer then was "because revenue". So what changed since then?
5
erikb 22 hours ago 0 replies      
Unforking is a very smart decision. Postgres has also gained a lot of favour since MySQL was bought by Oracle. Altogether Citus has earned a lot of kudos for this move, at least with me, for whatever that may count!
6
TY 23 hours ago 2 replies      
This is awesome! Tebrikler (congrats) on the release of 5.0 and going OS, definitely great news.

Can you publish competitive positioning of Citus vs Actian Matrix (nee ParAccel) and Vertica? I'd love to compare them side by side - even if it's just from your point of view :-)

7
lobster_johnson 10 hours ago 0 replies      
This is great!

One thing I'm having trouble with is finding information about transactional semantics. If I make several updates (to differently sharded keys) in a single transaction, will the transaction boundaries be preserved (committed "locally" first, then replicated atomically to shards)? Or will they fan out to different shards with separate begin/commit statements? Or without transactional boundaries at all?

In fact, I can't really find any information on how CitusDB achieves its transparent sharding for queries and writes. Does it add triggers to distributed tables to rewrite inserts, updates and deletes? Or are tables renamed and replaced with foreign tables? I wish the documentation was a bit more extensive.

8
faizshah 22 hours ago 1 reply      
So this sounds similar to Pivotal's Greenplum, which is also open source. Can anyone compare the two?
9
voctor 20 hours ago 1 reply      
Citus can parallelize SQL queries across a cluster and across multiple CPU cores. How does it compare with the upcoming 9.6 version of PostgreSQL, which will support parallel sequential scans, parallel joins and parallel aggregates?
10
azinman2 22 hours ago 2 replies      
I want it to be called citrus, which is what I always read it as....
11
rkrzr 23 hours ago 2 replies      
This is fantastic news! Postgres does not have a terribly strong High Availability story so far, and of course it also does not scale out horizontally. I have looked at CitusDB in the past, but was always put off by its closed-source nature. Opening it up seems like a great move for them and for all Postgres users. I can imagine that a very active open-source community will develop around it.
12
ccleve 20 hours ago 1 reply      
I'd very much like to see what algorithm these systems are using to enable transactions in a distributed environment. Are they just using straight two-phase commit, and letting the whole transaction fail if a single server goes down? Or are they getting fancy and doing some kind of replication with consensus?
13
ahachete 15 hours ago 0 replies      
Congratulations, Citus.

Since I heard last year at PgConfSV that you will be releasing CitusDB 5.0 as open source, I've been waiting for this moment to come.

It allows 9.5's awesome capabilities to be augmented with sharding and distributed queries. While this targets real-time analytics and OLAP scenarios, being an open source extension to 9.5 means that a whole lot of users will benefit from this, even under more OLTP-like scenarios.

Now that Citus is open source, ToroDB will add a new CitusDB backend soon, to scale-out the Citus way, rather than in a Mongo way :)

Keep up with the good work!

14
signalnine 21 hours ago 0 replies      
Congrats from Agari! We've been looking forward to this and continue to get a lot of value from both the product and the top-notch support.
15
jjawssd 23 hours ago 2 replies      
My guess is that Citus is making enough money from consulting that they don't need to keep this code closed source when they can profit from free community-driven growth while they are expanding their sales pipeline through consulting.
16
BinaryIdiot 21 hours ago 0 replies      
I don't have a ton of experience scaling out and using different flavors of PostgreSQL but I had run across Postgres-XL not long ago; does anyone know how this compares to that?
17
uberneo 5 hours ago 0 replies      
Great product - it would be nice to have an admin interface like RethinkDB's, where you can clearly define your replication and sharding settings. Any documentation on how to do this from the command line?
18
ismail 19 hours ago 0 replies      
Any thoughts on using something like postgres+citus vs hadoop+hbase+ecosystem vs druid for olap/analytics with very large volumes of data?
19
X86BSD 22 hours ago 2 replies      
AGPL? This is dead in the water :( It will never be integrated into PG. What a shame. It should have been a 2 clause BSDL. Sigh.
20
onRoadAgain23 23 hours ago 5 replies      
Having been burned before, I will never use an OSS infrastructure project that has enterprise features you need to pay for. They always try to move you to paid and make the OSS version unpleasant to use over time, as soon as the bean counters take over to milk you.

"For customers with large production deployments, we also offer an enterprise edition that comes with additional functionality"

21
satygeek 18 hours ago 3 replies      
Does CitusDB fit OLAP analytical workloads that do aggregations over hundreds of millions of records with varying order and size of dimensions (e.g. Druid) with a max 3-second response time, using as few boxes as possible - or do other techniques have to be used alongside CitusDB? Can you shed light on your experience with CloudFlare in terms of cluster size and query perf?
22
lambdafunc 9 hours ago 0 replies      
Any benchmarks comparing CitusDB against Presto?
23
ksec 11 hours ago 0 replies      
Does anyone know how Citus compares to Postgres-XL?
24
ioltas 14 hours ago 0 replies      
Congrats to all for the release. That's a lot of work accomplished.
25
albasha 11 hours ago 0 replies      
I recently switched back to MariaDB because I didn't see a clear/easy path for Postgres scalability in case the project I am working on takes off. I am under the assumption there are at least two fairly simple approaches to scale MySQL: master-master replication using Galera, and Aurora from AWS. What do you guys think? Am I right in thinking MySQL is easier to scale, given I want to spend the least amount of time maintaining it?
26
Dowwie 21 hours ago 0 replies      
Would a natural evolutionary path for startups be to start with postgresql and grow into requiring citusdb?
27
Someone 18 hours ago 2 replies      
One must thank them for open sourcing this, and cannot blame them for using a different license, but using a different license makes me think calling this "unfork" is bending the truth a little bit.
28
Dowwie 21 hours ago 0 replies      
Is it correct to compare citusdb with pipelinedb?
8
Julie Rubicon facebook.com
769 points by ISL  2 days ago   129 comments top 51
1
jimrandomh 1 day ago 5 replies      
Post is fiction. Spoilers follow.

.

.

When I started reading this, I didn't realize it was fiction. When I got to the point where the protagonist left the end-date off a query and saw a spike, I thought it was going to be explained by falsified data and lead into a (real) accusation of providing fraudulent metrics to advertisers. It wouldn't be the first such accusation. But it turned out to be a science-fiction story with unexplained time travel in it. Oh well.

2
stygiansonic 1 day ago 2 replies      
Fiction or not, this is similar to an actual story of (ex)-fraud researchers at Capital One[1], who (ab)used their access to credit card transaction data in order to infer whether target companies' quarterly earnings would be below/above expectations, and then traded on that knowledge.

However, in the actual story, there was no "black box" to query, the ex-employees wrote all the complicated queries themselves.

EDIT: They turned ~$150K into ~$2.8 MM USD over about three years, before being caught, mostly through options trades it seems.

1. http://www.bloombergview.com/articles/2015-01-23/capital-one...

3
chatmasta 1 day ago 15 replies      
Usually I read comments on HN before the article, but there were no comments when I saw the link, so I went straight to reading it.

What a bizarre piece. How long did it take everyone else to realize it was fiction? For me, I did not realize until the very end -- and even then I still wasn't sure. It could just as well have been written by some delusional non-technical employee at Facebook.

4
state 1 day ago 4 replies      
Worth noting that Robin is also the author of Mr. Penumbra's 24-Hour Bookstore [1] that others around HN would probably find pretty enjoyable. I thought it was quite fun.

1 - https://en.wikipedia.org/wiki/Mr._Penumbra%27s_24-Hour_Books...

5
maxaf 1 day ago 1 reply      
I'm "technical" and "hands-on", yet I was gullible enough to believe the story, IM-ed it to my wife, and was surprised when she hadn't panicked like I did.

I read way too much science fiction. Back to work now!

Kind of relieved though.

6
asadlionpk 1 day ago 3 replies      
Nice piece. When I saw that first from-future graph, I thought Facebook was faking their data, as they are known for faking page likes.
7
Jacksonb 1 day ago 0 replies      
"Published on the day Julius Caesar was murdered 2060 years ago, after crossing the Rubicon. Julie Rubicon. Nice touch."
8
FlyingLawnmower 1 day ago 0 replies      
I think the HN audience will "see through" this story, but anyone who isn't familiar with the current state of neural nets or has read the many pieces foretelling the future of AI might just find it plausible. I really liked the writing style.
9
kozikow 1 day ago 0 replies      
The first graph immediately looked fake. Mentions after an event would follow something like a log-normal distribution, rather than a sudden spike with everything back to normal the next day. What's more, the world does not use the internet uniformly: the uneventful graph should follow something like a sine wave, with the highest point about 2x the lowest.
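A toy sketch of the shape I'd expect instead (made-up numbers, just to illustrate the two effects):

 var series = [];
 for (var hour = 0; hour < 24 * 14; hour++) {
   // Diurnal sine baseline: peak is ~2x the trough.
   var baseline = 1000 * (1.5 + 0.5 * Math.sin(2 * Math.PI * hour / 24));
   var t = hour - 24 * 7; // event lands at the start of day 7
   var spike = 0;
   if (t >= 0) {
     var z = Math.log(t + 1) - 1.5;
     spike = 20000 * Math.exp(-z * z / 2) / (t + 1); // log-normal-ish decay
   }
   series.push(Math.round(baseline + spike));
 }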
10
ChuckMcM 1 day ago 1 reply      
It is a fun story, and one that comes at the question of "how much data is too much data and too hard to resist?" Something that anyone who has worked on a popular Internet-facing application has to come to terms with. I found 'Manna'[1] a more compelling emergent-AI story, but the data-as-predictor aspects of this story have their own particular flavor.

One of the things that immediately flashed it as fiction for me was that the graphs had all the same shape, which if you've ever looked at trend graphs you will see they might all have a similar outlier quality to them but they build and sustain in different ways.

At Blekko (a search engine company) we did an interesting study on query traffic to see if you could "predict" the "hallmark" holidays based on search queries. The idea was that holidays like Valentines Day come up, people start thinking about plans or gifts before that, could we advise an advertiser when the "peak" planning session was so that they could maximize the impact of their advertising spend by focusing it during the peak? And if so what sorts of queries were people making that indicated they were doing holiday planning? The results were mixed. For things like Valentines it was easy, flowers, chocolates, bed & breakfast reservations sort of rose out of the general query stream, St. Patrick's Day? Not so much. But the data peaks all had different shapes appropriate for different levels of impact (Christmas shopping really starts in August among the back to school traffic for the really prepared). So looking at (and for) "interest spikes" like the ones in the story had a bunch of different shapes, some with slow onset and rapid decline, some with rapid onset and rapid decline, and some which were like soft swells on a breezy afternoon at the beach.

That said, the dataset made possible by Facebook's chat stream would be even better for those sorts of investigations.

[1] http://marshallbrain.com/manna1.htm

11
minimaxir 1 day ago 1 reply      
The funny thing is that if the implications noted in the story were true and Facebook could accurately forecast events into the far future, building a social network would be the least of their priorities.
12
wallflower 1 day ago 0 replies      
This is some fantastic writing.

And if you are looking for a longer, slightly better, fictionalized account of Facebook, "The Circle" by David Eggers is a quick, engrossing read and quite hard to tear yourself away from once you begin.

https://en.wikipedia.org/wiki/The_Circle_(Eggers_novel)

13
cavisne 1 day ago 0 replies      
Great piece of fiction, it feels like the author works at Facebook or knows someone who does though.

That said, what's described could possibly be done with existing technology. Could Facebook accurately surface statements like "next Monday is going to be massive for Volkswagen"? Over time you could weight private messages from people who work for regulators more highly. For more public events like an Apple launch, they could predict a spike easily just from all the media mentions of the date.

14
djsumdog 1 day ago 0 replies      
Haha..it's like a modern campfire ghost story. It's written in a believable manner too. It had me going. I like it. Good job!
15
thomble 1 day ago 0 replies      
This piece scared the hell out of me (seriously, I felt panicked) until I discerned that it was fiction. Great read.
16
dynofuz 1 day ago 1 reply      
reminds me of the movie primer: https://en.wikipedia.org/wiki/Primer_%28film%29
17
kevando 1 day ago 5 replies      
Very entertaining! How did no one realize this was fake by the first graph? Does xkcd design the graphs for fb premium partners?
18
tfgg 1 day ago 0 replies      
Lovecraft does data science?

(the variance on the graphs intuitively felt a bit low, but then again it is facebook)

19
mwcampbell 1 day ago 0 replies      
Reminds me of this interpretation of Minority Report, only with AI instead of human precogs.

http://mjyoung.net/time/minority.html

20
avipars 1 day ago 1 reply      
I think April Fools has come early this year
21
thomasahle 1 day ago 0 replies      
Was I the only one whose first impression was that Facebook was somehow involved in insider trading? And that for some reason they were brave enough to also use this information for load balancing...
22
Kevin_S 1 day ago 0 replies      
As someone who is non-technical, I pretty much was mind blown until the very end. Had a feeling it was fiction haha. Glad the comments here clarified. Entertaining read.
23
mat_jack1 1 day ago 0 replies      
It's funny: I received the notification for the HN500 while I was reading "Mr. Penumbra's 24h bookstore" and noticed it after having finished the book. Clicked the link and the author was Robin Sloan, as in the book :)

Apart from the funny coincidence, I'm just chiming in to recommend the book here; I think most of you would enjoy it.

24
lucb1e 1 day ago 0 replies      
So I figured this was too epic to be real, but what part of it is real? Is there a team selling statistics derived from all posts to advertisers? Because that part I believed without a second thought, and honestly, it does make sense.
25
Animats 1 day ago 0 replies      
Aw. It's in the tradition of "The Endochronic Properties of Resublimated Thiotimoline", by Isaac Asimov.
26
chrome_x 1 day ago 0 replies      
Even when you can feel it's fiction, it sounds like something I could be reading in tomorrow's newspapers. This was a great read!
27
gizmodo59 1 day ago 0 replies      
Seeing the title along with the domain facebook.com, I couldn't resist clicking. Perfect click bait. Though a good story.
28
dekhn 1 day ago 0 replies      
Best version of this is still "Vaxen, my children..." http://www.hactrn.net/sra/vaxen.html read to the end, and check the meaning of the date
29
Mahn 1 day ago 1 reply      
Fiction, but that was entertaining. To the author, you should probably consider writing a book in this style :)
30
raverbashing 1 day ago 0 replies      
Nice piece of fiction
31
personjerry 1 day ago 1 reply      
It's disturbing to me how many comments on here assert that the piece is fiction with little to no evidence. It occurs to me that perhaps Facebook is trying to make it seem like it's fiction by posting on various forums!
32
chejazi 1 day ago 0 replies      
Facebook has the data to make a solid prediction market. That should be their new biz model - they'll still exploit our data but they won't be degrading our user experience with ads.
33
steven2012 1 day ago 0 replies      
Identified as fiction because VW isn't traded on the US stock market.
34
jbscpa 1 day ago 0 replies      
Hari Seldon and Gaal Dornick as devlopers of Psychohistory would approve. (Isaac Asimov's "Foundation" universe)

"Psychohistory is the name of a fictional science in Isaac Asimov's Foundation universe, which combined history, psychology and mathematical statistics to create a (nearly) exact science of the behavior of very large populations of people" source: Wikia

35
Thane2600 1 day ago 0 replies      
I thought the following into existence yesterday and read the post today: that Facebook uses users as a layer of abstraction above one brain. A brain of brains, putting queries (thoughts) into the system (brain) and obtaining a result that is the average of many.
36
bpp 1 day ago 0 replies      
Very enjoyable, and almost plausible...
37
agumonkey 1 day ago 0 replies      
I enjoyed the oracle discovery very much and even more its use on themselves. Cute.
38
ktusznio 1 day ago 0 replies      
This piece was written by Robin Sloan, who is an author of fiction. :) Great story.
39
EGreg 1 day ago 0 replies      
Did John Titor write this?
40
malkia 1 day ago 0 replies      
Oh, somehow I've got the Pollyhop breeze from House Of Cards
41
daveheq 1 day ago 0 replies      
Hmm, fact-fic; I wonder who will run with it and make it an urban legend.
42
thadd 1 day ago 0 replies      
It still makes me excited about the future of neural networks :)
43
KerryJones 1 day ago 0 replies      
This could easily be a Black Mirror episode
44
jkkorn 1 day ago 0 replies      
The big short, Facebook edition.

Entertaining read.

45
wallzz 1 day ago 0 replies      
I really got scared reading this
46
hatsunearu 1 day ago 0 replies      
I thought this was real, but man that's probably the best creepypasta I've read in a while.
47
PSeitz 1 day ago 0 replies      
Obvious fiction. Nice story, but I don't like fiction masquerading as real.
48
lawnchair_larry 1 day ago 0 replies      
Not a very useful submission title.
49
api 1 day ago 0 replies      
I figured it was fiction by the end, but I couldn't really tell. I can't tell the difference between fiction/satire and reality anymore. The world is too strange.
50
dluan 1 day ago 1 reply      
Enchilada?
51
fsiefken 1 day ago 0 replies      
Who could be the real author, if it's fiction? This Robin Sloan might certainly have the means and the motive. Either it is fiction, with a plot which is not too far-fetched, as any company with these kinds of databases can exploit it in the prediction market. "Information is power and currency in the virtual world we inhabit", Billy Idol once said. It's also an important element in Asimov's Foundation series: extrapolating history through psychohistory.

Or it's not entirely fiction, or perhaps even factual. I remember Rupert Sheldrake mentioning in one of his mindboggling talks that he wanted to investigate potential psi effects on a much larger scale and was in talks with Google. If there were 'results' pertaining to precognition (or AI-enhanced precognition of the crowd), would the public get to know about it?

9
Require-from-Twitter github.com
684 points by uptown  1 day ago   123 comments top 37
1
michael_storm 1 day ago 3 replies      
This is the Internet of Things for code. This is wonderful.

This is also probably a snarky shot at npm [1], for those who lack context.

[1] https://news.ycombinator.com/item?id=11340510

2
yAnonymous 1 day ago 2 replies      
I'm currently talking to investors to start a business around this. Please don't delete it.
3
spriggan3 1 day ago 10 replies      

 > "dependencies": { > "babel-preset-stage-0": "6.5.0", > "babel-preset-es2015": "6.6.0", > "babel-runtime": "6.6.1", > "babel-plugin-transform-runtime": "6.6.0", > "babel-cli": "6.6.5", > "babel-core": "6.7.4", > "twit": "2.2.3", > "entities": "1.1.1" > },
The problem is right here. Just to run a script you now need to import a whole third-party language runtime? What other language pulls this kind of stunt? Javascript is madness.

4
martin-adams 1 day ago 3 replies      
Maybe to make this more reliable, you should retweet the module first, then require your clone.
5
rburhum 1 day ago 0 replies      
Hi. I don't know how to program (otherwise I would do this myself), but can you port this to Google+ please? My office blocks twitter. Thanks!
6
melvinmt 1 day ago 1 reply      
Why is this not a npm module yet? Name suggestion: kik2
7
cmpolis 1 day ago 0 replies      
Tangentially related(tweet sized js) and an awesome project: https://t.d3fc.io/ is a collection of d3 visualizations from tweets. The code is cryptic on first inspection, but if you look at the sandbox setup, it starts to make sense and 140 chars is a wonderful constraint. eg: https://t.d3fc.io/status/694991319052103680
8
philmander 1 day ago 1 reply      
Require from stack overflow?

require('how do I prepend spaces to a string')

9
logn 1 day ago 4 replies      
A developer gets upset at unilateral actions by NPM resulting in a project being renamed or taken down unnecessarily (potentially breaking builds). So this dev decides to take down all their projects, as a sort of protest. This breaks a lot of builds. The JavaScript community thinks a clever solution is utilizing Twitter as part of the build process? Because then everything would be dependent on Twitter not adding an "edit tweet" button...
10
aioprisan 1 day ago 3 replies      
Pretty comical. I bet folks would actually use this to some extent, without realizing that Tweets can also be deleted.
11
dbpokorny 1 day ago 0 replies      
If you can get a good toolkit for writing a GLR parser, then people will write their own tokenizers, BNF formal grammars, and plug it into your parser. It would take a single person about two to six weeks to get something thoroughly polished in JavaScript along the lines of what is described. However I think that without some form of centralization, (perhaps a subreddit? idk) it will be difficult for the standardization and namespace organization process to take place. If it is just one person, there is no question of standardization; if it is multiple people, the question of who is in charge of the namespace becomes relevant. Who is in charge of the namespace in this particular experiment?
12
spotman 1 day ago 0 replies      
the next version should have a require from #hashtag, so that it can be fault tolerant, and would last longer when lawyers request a takedown!
13
m_mueller 1 day ago 0 replies      
> // ES6 leftPad

and he even had space for a comment in there....
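
For reference, a tweet-sized ES6 left-pad in the spirit of the one being discussed might look something like this; a sketch, not the actual tweet:

 // Pads s on the left with c until it is n characters long.
 // Math.max guards against a RangeError from a negative repeat count.
 const leftPad = (s, n, c = ' ') =>
   c.repeat(Math.max(0, n - String(s).length)) + String(s);

 console.log(leftPad(42, 5, '0')); // "00042"

The expression itself is well under 140 characters, which is what leaves room for a comment.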

14
jjawssd 23 hours ago 0 replies      
Next up: require from bitcoin blockchain
15
sorenjan 13 hours ago 0 replies      
There's also a package manager for it: http://require-from-twitter.github.io/

> require-from-twitter is the core code for the tweet package manager. Our beta version has only one dependency: npm. But we're working hard on adding more dependencies as quick as possible.

16
homero 1 day ago 0 replies      
Shouldn't have voted against increasing the character count; we could've had a free CDN.
17
cphoover 1 day ago 3 replies      
this would actually be an interesting coding challenge and experiment: to see whether something worthwhile could be built from modules no larger than 140 characters.
18
rcthompson 1 day ago 0 replies      
Make sure you scroll down and read the mandatory disclaimer before commenting on the merits of this approach.
19
escoundel 15 hours ago 0 replies      
TDD - Twitter Driven Development
20
0x7fffffff 1 day ago 0 replies      
Well there you go. Problem solved.
21
t1amat 1 day ago 0 replies      
Standardized modules like this are exactly what the node.js-stack bot herding community has needed!

On the plus side: if you saw this dependency in a module you were looking at you would know to think twice.

22
olegp 1 day ago 0 replies      
Along similar lines, I made it possible to use NPM packages in the browser without a build step or server: https://github.com/olegp/bpm

More info here: https://meetabit.com/talks/37

23
anotherevan 1 day ago 0 replies      
There's still another eight days until April first.
24
peterkelly 1 day ago 0 replies      
What we really need is require-from-stackoverflow
25
mooreds 22 hours ago 0 replies      
Twitter finally has a business model! Who knew that source code hosting would be the killer app?
26
franciscop 1 day ago 1 reply      
For everyone who doesn't know it, there's a project called http://140byt.es/ compiling many code snippets that fit in a tweet (;

There was also a clever trick to compress/uncompress ascii text by using base[huge number] or something like that (full unicode) so it could be uploaded to twitter, but I don't remember the exact number

27
howeyc 1 day ago 0 replies      
I know this is supposed to be funny, BUT if you vendored and kept a local copy in your build environment, you wouldn't have to worry if the tweet gets deleted.

This is the lesson I see no one talking about.

Of course, using a tweet as a source for a library is silly.

28
amelius 1 day ago 0 replies      
Filesystem interface to Twitter:

http://softwaretechnique.jp/DownLoad/twfs_en.html

This is probably more generic than the project discussed here.

29
andremendes 1 day ago 0 replies      
Well, Twitter staff were saying they'd last at least another ten years; would NPM?
30
ikeboy 1 day ago 0 replies      
For deleted tweets:

On every fetch, submit the tweet to archive.org and archive.is if it's not already there. If the tweet is deleted, fetch it from there instead.
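
A minimal sketch of that fallback, assuming a runtime with fetch; fetchTweet() is a hypothetical helper, while the Wayback Machine "save" and "availability" endpoints are real:

 // Resolve a tweet, falling back to the Wayback Machine if it was deleted.
 async function resolveTweet(tweetUrl) {
   try {
     const body = await fetchTweet(tweetUrl); // hypothetical Twitter lookup
     // Ask the Wayback Machine to snapshot it for next time (fire and forget).
     fetch('https://web.archive.org/save/' + tweetUrl).catch(() => {});
     return body;
   } catch (err) {
     // Tweet gone: look for the closest archived snapshot instead.
     const res = await fetch('https://archive.org/wayback/available?url=' +
                             encodeURIComponent(tweetUrl));
     const data = await res.json();
     const snap = data.archived_snapshots && data.archived_snapshots.closest;
     if (!snap || !snap.available) throw new Error('deleted and never archived');
     return fetchTweet(snap.url);
   }
 }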

31
bagnus 1 day ago 0 replies      
I'm impressed no one has posted their own version for a different language.
32
plugnburn 6 hours ago 0 replies      
Why not just use anonymous gists in conjunction with RawGit CDN?

Unique IDs, no way to change or delete (since the gists are anonymous), served right out-of-the-box with a proper content type from cdn.rawgit.com.

33
nivertech 1 day ago 1 reply      
twitter doesn't have an edit button, but it does have the delete button ;)
34
wallzz 1 day ago 1 reply      
Can someone explain what this is? I really have no idea.
35
progx 1 day ago 0 replies      
Lol YMMD
36
cphoover 1 day ago 0 replies      
funny :)
37
chris_wot 1 day ago 0 replies      
npm over twitter? A site populated by trolls and spammers. What could possibly go wrong?
10
Google opens access to its speech recognition API techcrunch.com
591 points by jstoiko  1 day ago   165 comments top 42
1
blennon 1 day ago 5 replies      
This is HUGE in my opinion. Prior to this, in order to get near state-of-the-art speech recognition in your system/application you either had to have/hire expertise to build your own or pay Nuance a significant amount of money to use theirs. Nuance has always been a "big bad" company in my mind. If I recall correctly, they've sued many of their smaller competitors out of existence and only do expensive enterprise deals. I'm glad their near monopoly is coming to an end.

I think Google's API will usher in a lot of new innovative applications.

2
CaveTech 1 day ago 6 replies      
> To attract developers, the app will be free at launch with pricing to be introduced at a later date.

Doesn't this mean you could spend time developing and building on the platform without knowing if your application is economically feasible? Seems like a huge risk to take for anything other than a hobby project.

3
zkirill 1 day ago 5 replies      
I came across the CMU Sphinx speech recognition library (http://cmusphinx.sourceforge.net), which has a BSD-style license, and they just released a big update last month. It supports embedded and remote speech recognition. Could be a nice alternative for someone who may not need all of the bells and whistles and prefers to have more control rather than relying on an API which may not be free for long.

Side note: if anyone is interested in helping with an embedded voice recognition project please ping me.

4
hardik988 1 day ago 1 reply      
Tangentially related: Does anyone remember the name of the startup/service that was on HN (I believe) that enables you to infer actions from plaintext?

Eg: "Switch on the lights" becomes

{"action": "switch_on", "thing" : "lights"}

etc.. I'm trying really hard to remember the name but it escapes me.

Speech recognition and <above service> will go very well together.

5
hardwaresofton 1 day ago 0 replies      
In case you're not interested in having google run your speech recognition:

CMU Sphinx:http://cmusphinx.sourceforge.net/

Julius:http://julius.osdn.jp/en_index.php

6
melvinmt 1 day ago 0 replies      
If you're having trouble (like me) to find your "Google Cloud Platform user account ID" to sign up for Limited Preview access, it's just the email address for your Google Cloud account. Took me only 40 minutes to figure that one out.
7
josephcooney 1 day ago 0 replies      
I wrote a client library for this in C# by reverse engineering what chrome did at the time (totally not legit/unsupported by google, possibly against their TOS). I have never used it for anything serious, and am glad now there is an endorsed way to do this.

https://bitbucket.org/josephcooney/cloudspeech

8
jaflo 1 day ago 0 replies      
Pretty impressive from the limited look the website (https://cloud.google.com/speech/) gives: the fact that Google will clean the audio of background noise for you and supports streamed input is particularly interesting.

I don't know how I should feel about Google taking even more data from me (and other users). How would integrating this service work legally? Would you need to alert users that Google will keep their recordings on file (probably indefinitely and without being able to delete them)?

9
theseatoms 1 day ago 0 replies      
Key sentence:

> The Google Cloud Speech API, which will cover over 80 languages and will work with any application in real-time streaming or batch mode, will offer full set of APIs for applications to see, hear and translate, Google says.

10
jonah 1 day ago 0 replies      
SoundHound released Houndify[1], their voice API last year which goes deeper than just speech recognition to include Speech-to-Meaning, Context and Follow-up, and Complex and Compound Queries. It will be cool to see what people will do with speech interfaces in the near future.

[1] https://www.houndify.com/

11
robohamburger 1 day ago 0 replies      
Unless I have gone crazy, Google has had an STT API available to tinker with for a while. It is one of the options for Jasper [1]. Hopefully this means it will be easier to set up now.

Would be nice if they just open-sourced it, though I imagine that is at cross purposes with their business.

[1] https://jasperproject.github.io/documentation/configuration/

12
mobiledev88 1 day ago 0 replies      
Houndify launched last year and provides both speech recognition and natural language understanding. They have a free plan that never expires and transparent pricing. It can handle very complex queries that Google can't.
13
amelius 1 day ago 2 replies      
Why isn't speech recognition just part of the OS? Like keyboard and mouse input.
14
vram22 1 day ago 0 replies      
For anyone who wants to try these areas a bit:

My trial of a Python speech library on Windows:

Speech recognition with the Python "speech" module:

http://jugad2.blogspot.in/2014/03/speech-recognition-with-py...

and also the opposite:

http://code.activestate.com/recipes/578839-python-text-to-sp...

15
z3t4 1 day ago 1 reply      
At least offer a self hosted version. Maybe it's just me, but I'm not comfortable sending every spoken word to Google.
16
danso 1 day ago 1 reply      
FWIW, Google followed the same strategy with Cloud Vision (iirc)..they released it in closed beta for a couple of months [0], then made it generally available with a pricing structure [1].

I've never used Nuance but I've played around with IBM Watson [2], which gives you 1000 free minutes a month, and then 2 cents a minute afterwards. Watson allows you to upload audio in 100MB chunks (or is it 10-minute chunks? I forgot), whereas Google currently allows 2 minutes per request (edit: according to their signup page [5])...but both Watson and Google allow streaming so that's probably a non-issue for most developers.

From my non-scientific observation...Watson does pretty well, such that I would consider using it for quick, first-pass transcription...it even gets a surprising number of proper nouns correctly including "ProPublica" and "Ken Auletta" -- though fudges things in other cases...its vocab does not include "Theranos", which is variously transcribed as "their in house" and "their nose" [3]

It transcribed the "Trump Steaks" commercial nearly perfect...even getting the homophones in "when it comes to great steaks I just raise the stakes the sharper image is one of my favorite stores with fantastic products of all kinds that's why I'm thrilled they agree with me trump steaks are the world's greatest steaks and I mean that in every sense of the word and the sharper image is the only store where you can buy them"...though later on, it messed up "steak/stake" [4]

It didn't do as great a job on this Trump "Live Free or Die" commercial, possibly because of the booming theme music...I actually did a spot check with Google's API on this and while Watson didn't get "New Hampshire" at the beginning, Google did [4]. Judging by how well YouTube manages to caption videos of all sorts, I would say that Google probably has a strong lead in overall accuracy when it comes to audio in the wild, just based on the data it processes.

edit: fixed the Trump steaks transcription...Watson transcribed the first sentence correctly, but not the other "steaks"

[0] http://www.businessinsider.com/google-offers-computer-vision...

[1] http://9to5google.com/2016/02/18/cloud-vision-api-beta-prici...

[2] https://github.com/dannguyen/watson-word-watcher

[3] https://gist.github.com/dannguyen/71d49ff62e9f9eb51ac6

[4] https://www.youtube.com/watch?v=EYRzpWiluGw

[5] https://services.google.com/fb/forms/speech-api-alpha/

17
j1vms 1 day ago 3 replies      
I would say that Google's main goal here is in expanding their training data set, as opposed to creating a new revenue stream. If it hurts competitors (e.g. Nuance) that might only be a side-effect of that main objective, and likely they will not aim to hurt the competition intentionally.

As others here have pointed out, the value now for GOOG is in building the best training data-set in the business, as opposed to just racing to find the best algorithm.

18
zkhalique 1 day ago 1 reply      
Has anyone tried adding OpenEars to their app, to prevent having to send things over the internet from e.g. a basement? Is it any good at recognizing basic speech?
19
amelius 1 day ago 3 replies      
Nice. But what I want is open-source speech recognition.
20
szimek 1 day ago 0 replies      
In the sign-up form they state that "Note that each audio request is limited to 2 minutes in length." Does anyone know what an "audio request" is? Does it mean that it's limited to 2 minutes when doing real-time recognition, or just that longer periods will count as more "audio requests" and result in a higher bill?

Do they provide a way to send audio via WebRTC or WebSocket from a browser?
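
If the 2-minute cap applies per batch request, one plausible workaround is to split longer recordings client-side and transcribe each piece. A sketch under that assumption, where recognize() is a hypothetical stand-in for the actual API call:

 // Split a raw 16 kHz, 16-bit mono PCM buffer into < 2-minute chunks
 // and transcribe them sequentially. recognize() is hypothetical.
 const BYTES_PER_SECOND = 16000 * 2;
 const CHUNK_BYTES = BYTES_PER_SECOND * 115; // stay safely under 2 minutes

 async function transcribeLong(pcm) {
   const parts = [];
   for (let off = 0; off < pcm.length; off += CHUNK_BYTES) {
     const chunk = pcm.slice(off, off + CHUNK_BYTES);
     parts.push(await recognize(chunk)); // one "audio request" per chunk
   }
   return parts.join(' ');
 }

In practice you'd want to split on silence rather than at fixed byte offsets, so words don't get cut in half.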

21
timbunce 1 day ago 0 replies      
FWIW I'd just finished a large blog post researching ways to automate podcast transcription and subsequent NLP.

It includes lots of links to relevant research, tools, and services. Also includes discussion of the pros and cons of various services (Google/MS/Nuance/IBM/Vocapia etc.) and the value of vocabulary uploads and speaker profiles.

http://blog.timbunce.org/2016/03/22/semi-automated-podcast-t...

22
Negative1 1 day ago 1 reply      
I would be hesitant to build an entire application that relied on this API only to have it removed in a few months or years when Google realizes it sucks up time and resources and makes them no money.
23
zelcon 1 day ago 1 reply      
Great, now when will Google let us use the OCR engine they crowdsourced from us over the last decade with ReCaptcha? Tesseract is mediocre.
24
dominotw 1 day ago 0 replies      
Nice! Curious how it compares to Amazon's AVS that went public this week.

https://github.com/amzn/alexa-avs-raspberry-pi

25
hans 1 day ago 0 replies      
cool, next up is a way to tweak the speech API to recognize patterns in stocks and capex .. wasn't that what Renaissance Technologies did ?

really GooG should democratize quant stuff next .. diy hedge fund algos.

26
saurik 1 day ago 4 replies      
I think this more directly competes with the IBM Watson speech API, not Nuance?
27
alfonsodev 1 day ago 2 replies      
I'm reading about many libraries here; I wonder what's the best open, multi-platform software for speech recognition to code with vim, Atom etc. I only saw a hybrid system working with Dragon + Python on Windows. I would like to train/customize my own system since I'm starting to have pain in my tendons and wrists. Do you think this Google API can make it? Not being local looks like a limiting factor for speed/lag.
28
willwill100 1 day ago 1 reply      
Will be interesting to compare with http://www.speechmatics.com
29
chair-law 1 day ago 1 reply      
What is the difference between a speech recognition API and [NLP libraries](https://opennlp.apache.org/)? This information was not easily found with a few Google searches, so I figured others might have the same question.
30
31
infocollector 1 day ago 1 reply      
What is the best speech recognition engine, assuming one has no internet?
32
mysticmode 1 day ago 0 replies      
I'm not sure what will happen to Google's Web Speech API in the future, or whether it will be continued as a free service.
33
sandra_saltlake 1 day ago 0 replies      
Sounds like this is bad news for Nuance.
34
flanbiscuit 1 day ago 0 replies      
I hope this opens up some new app possibilities for the Pebble Time. I believe right now they use Nuance and it's very limited to only responding to texts.
35
E4life 1 day ago 0 replies      
Finally, this is something that will be the main way for communication in the future.
36
omarforgotpwd 1 day ago 0 replies      
Fuck. Yes. IBM has a similar API as well, as part of their Watson APIs, but I really wanted to use Google's.
37
mark_l_watson 1 day ago 0 replies      
I think they are pushing back against Amazon's Echo speech APIs, which I have experimented with.

I just applied for early access.

38
braindead_in 1 day ago 0 replies      
How well does this work with conversational speech? Any benchmarks?
39
jupp0r 1 day ago 1 reply      
Anybody got the api docs yet? I wonder if I can stream from chrome via webrtc.
40
ocdtrekkie 1 day ago 1 reply      
"Google may choose to raise those prices over time, after it becomes the dominant player in the industry."

...Isn't that specifically what anticompetition laws were written to prevent?

41
yeukhon 1 day ago 0 replies      
I thought I read open source, then I realized open access. I believe in the past there was a similar API, or maybe it was based on Google Translate. But I swear at one point people wrote hackathon projects using some voice APIs.
42
BinaryIdiot 1 day ago 2 replies      
So this was very, very exciting until I realized you have to be using Google Cloud Platform to sign up for the preview. Unfortunately all of my stuff is in AWS, and while I could move it over, I'm not going to (far too much hassle to preview an API I may not end up using, ultimately).

Regardless, this is still very exciting. I haven't found anything that's as good as Google's voice recognition. I only hope this ends up being cheap and accessible outside of their platform.

11
Privacy Forget Your Credit Card privacy.com
604 points by doomrobo  18 hours ago   327 comments top 87
1
soneca 16 hours ago 13 replies      
Is that something that new? My bank (Itaú, in Brazil) has offered this option for some time now.

Here(in portuguese): https://www.itau.com.br/cartoes/cartao-virtual/

Or am I missing something?

Edit: They launched it in 2002: http://exame2.com.br/mobile/tecnologia/noticias/itau-agora-t...

Edit2: Sounds new in the US. This is not supposed to be a bragging/snarky comment. Just genuinely surprised, as innovation usually comes the other way around, from the US to Brazil. So congrats on the launch! Good job; it sounds tough to launch this without being a bank!

2
ac29 15 hours ago 5 replies      
In case anyone didn't catch what this actually costs, the answer is: 1.5-2%, which is the rate you could get cash back (or airline miles/etc) with good credit.

Because this service draws directly from your bank account, and takes what would otherwise be your rewards from the credit card fees their banking partners charge, it provides a nice business model for them at the cost of you getting 0% rewards back. Not worth it, in my opinion.

3
coryfklein 6 minutes ago 0 replies      
Shouldn't this service be marketed to credit card companies instead of credit card users? If I get a fraudulent charge on my credit card I can just dispute it and have it removed. What value do I get with privacy.com that I don't already have that is worth the extra fees I have to pay?
4
boling11 17 hours ago 23 replies      
Hey HN - Privacy.com co-founder here. I'm really excited to share what we've been working on for the past year and a half or so.

We've been neck-deep in payments stuff on the card issuing side (getting a BIN sponsor, ACH origination, etc), so happy to answer any questions on that front as well.

P.S. For new users, your first $5 donation to watsi.org is on us :)

5
URSpider94 12 hours ago 4 replies      
I think people are over-thinking this offering a little too much. People who are asking if the company will resist a subpoena, or if all customer data will be irreversibly encrypted, are expecting too much.

The main purposes of this product are to be able to mask your marketing data (name, address, phone) to businesses, and to mitigate damage in the event of a data breach (any stolen card numbers are useless).

It's not going to prevent a government entity from subpoena'ing your records and finding out what you've bought. Also, if you're buying anything that needs to be, you know, shipped or emailed to you, you're kinda going to have to give a valid address. Under the default settings, they also include the merchant information in the feed back to your bank, so your bank still gets all of the info on where you're shopping and what you're buying.

Finally, I am very skeptical of their claim about walking away from subscriptions and trials. Sure, in theory, you make it much harder for vendors to track you down, but by law, you're agreeing to pay for the company's services when you accept their agreement, and if they do bother to subpoena your information and come after you, if they find out that you presented them with a fraudulent name, phone number and address, I don't expect that would go well for you in court.

6
tedmiston 17 hours ago 2 replies      
My biggest question with Privacy, and of any one-time use credit card numbers service, is always:

Will it affect my rewards? Will businesses still show up unaffected with the same categories on my credit card statement? (I have a travel rewards only card, so breaking the rewards flow is a deal-breaker for using a higher level service.)

Edit: I misunderstood the service as being able to be layered on top of normal credit cards. It looks like the funding source is only bank accounts for now. Still my question remains if building on credit or debit cards is on the roadmap.

Edit 2: They are one-time use numbers, right? "Use at merchants" (plural) seems to possibly imply otherwise.

> What happens when I generate a new Privacy card?

> We'll give you a random 16-digit Visa card number that you can use at merchants that accept Visa debit cards...

Edit 3: It sounds like the business model results in keeping the money that would go to rewards on a normal card.

> How do you make money?

> Every time you spend using a Privacy card, the merchant or website pays a fee (called interchange) to Visa and the issuing bank. This fee is shared with us. We have some premium features planned, but rest assured, our core virtual card product will always be free and we will never sell your personal data.

7
zgubi 8 hours ago 1 reply      
@boling11, why does privacy.com need access to my online banking on an ongoing basis, after the initial signup is finished?

I have changed my online banking password after signing up successfully, and I received an email complaining that "Our connection to your bank is broken".

I can understand the need for initially providing my banking credentials for AML/KYC reasons, but I feel uncomfortable with your company continuing to use those after the initial check.

Why can't you just use the routing/account numbers for ACH after the initial signup?

8
drglitch 15 hours ago 5 replies      
Both Citi and Bank of America (and, I believe, Wells Fargo, though I didn't personally use it) offered this service for free on their CC accounts in the mid-to-late 2000s.

You could set limits per number, have it lock to just a single merchant, etc. Pretty nifty when paying some wacky merchant online.

All have since shuttered the service, because pretty much every CC comes with purchase protection that you can invoke to charge the vendor back in case of something going wrong.

Virtual CCs provide very limited utility in my mind, because the places you're likely to have your CC swiped - a bar or a cab - are still going to use only the legacy plastic version.

9
mirimir 17 hours ago 1 reply      
It's an interesting idea. However, I'm not comfortable with a third party having all that information. Some banks issue "corporate" cards, with numerous "employee" cards. I already trust the bank, after all. So what else does Privacy.com provide that's worth the risk? They're still subject to KYC, right? So there's no strong privacy. Or am I missing something?
10
husamia 39 minutes ago 0 replies      
I like the fact that they have 2-factor authentication
11
tome 6 hours ago 1 reply      
How do they not run out of numbers? According to this random image I found on the internet, each bank has a space of one billion card numbers. If you have ten million customers, say, you're going to run out of these very quickly.

http://www.financetwitter.com/wp-content/uploads/2014/08/Cra...
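
Back-of-the-envelope on that worry, assuming the usual 16-digit layout (6-digit issuer prefix, 9 free digits, 1 Luhn check digit; my assumption about what the image shows):

 // A 16-digit PAN = 6-digit issuer prefix (BIN) + 9 free digits + 1 check
 // digit, so a single BIN covers 10^9 distinct card numbers.
 const numbersPerBin = 1e9;
 const customers = 1e7;              // ten million customers, say
 const cardsPerCustomerPerYear = 50; // a fresh burner number per purchase?
 console.log(numbersPerBin / (customers * cardsPerCustomerPerYear)); // 2

So at that (made-up) usage rate a single BIN lasts about two years, unless the issuer recycles numbers or sponsors more BINs.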

12
orf 16 hours ago 4 replies      
> Privacy is PCI-DSS compliant. We are held to the same rigorous security standards as your bank.

I always giggle when I see that.

13
nommm-nommm 16 hours ago 1 reply      
"Never forget the cancel one of those pesky 30 day free trials."

This is very misleading to say the least. Not paying for a service doesn't cancel a service. If they tried to bill your card and the card was rejected that doesn't mean the service is cancelled.

14
__d 17 hours ago 7 replies      
I understand why you need it, and I want this service in a big way, but I'm just baulking at giving you my online banking username and password. Why should I trust you with that?
15
nommm-nommm 16 hours ago 1 reply      
So what happens when I have to return something and they put the money back on the card I used to purchase it?
16
drglitch 15 hours ago 0 replies      
Quick question to the founder lurking here - if you're advertising yourself as a credit card and yet you do not extend credit (and use a bank account as the funding source), aren't you misadvertising? If it's just a virtual debit card, you are likely providing far less protection to the consumer than a credit card would.
17
mkhalil 15 hours ago 1 reply      
I'm in love. Seriously, been waiting for this for soooo long. And the fact that the website supports two factor auth + is SUPER easy to use makes this a double whammy!!! :)

I've been a customer for about 5 minutes, have used it twice, and am already going to recommend it.

edit: I'm quite aware that this has been possible, but both banks/credit cards that I have make me jump through tons of ugly UI and clicks to make it happen.

18
llamataboot 1 hour ago 0 replies      
Here are the list of banks currently supported, to save you a click or two:

Bank of America
Capital One 360
Charles Schwab
Chase
Citibank
Fidelity
Navy Federal Credit Union
PNC Bank
US Bank
USAA Bank
SunTrust
TD Bank
Wells Fargo

19
habosa 16 hours ago 2 replies      
This is one of those things I have wanted to make so many times and I assumed it would either be technically impossible (card numbers not actually a huge number space) or it would just get marked as fraud.

Excited to see someone giving it a try.

20
film42 9 hours ago 0 replies      
This is super close to the product that I really really really want. The only thing that's missing for me, is that this requires a checking or savings account. When I purchase something with my credit card (most things), it's because I want the rewards program points. With this, I don't get that. If I can't pay with my credit card, then I'm losing money (~$300/yr).

I really want a product that lets me proxy my credit card (and change it when I get a new card). I want a firewall for my credit card.

21
gwintrob 17 hours ago 0 replies      
Great company name. How'd you get the domain?
22
jjallen 16 hours ago 2 replies      
Wish they explained this better:

"Please ensure this information is accurate. We'rerequired to verify this information against publicrecords. But don't worry, we'll keep it private."

I suppose I'm legally opening a bank account, which has similar requested info as this, but are they checking my credit (probably not, I know, but it makes me uncomfortable)? Will wait a while.

23
electic 15 hours ago 3 replies      
I signed up for this. Sadly, it is not what I thought it was and the website does not make it very clear. Basically, this is for online purchases only. To make matters a bit worse, it wants to connect to your real bank account.

What we need here is a physical credit card that I can use in the real-world that has a new number on each swipe. Most of my historical fraud has happened because I probably swiped my card at a location that was compromised.

Just my two cents.

24
cemregr 15 hours ago 1 reply      
The email you send to verify the bank comes off as SUPER shady. It reads exactly like a phishing email. It doesn't talk about which site / bank I'm using. Might be worth fixing.

From: Account Management Team <account.management@acctmanagement.com>

....

Thank you for being a valued customer.

Sincerely, Online Banking Team

25
jcrawfordor 16 hours ago 1 reply      
I accept that disabling JavaScript is generally a losing battle, but it specifically irks me when the website of a privacy-centric service is just completely blank if you don't have JavaScript enabled. Of all 30 people out there browsing without JavaScript, it seems like they have an elevated chance of all wanting to learn about this service, and I find myself moderately discouraged from trying it by this issue.
26
dogma1138 14 hours ago 0 replies      
Mastercard has this service in quite a few countries; the downside is that usually they do not offer the same insurance as for the normal credit card, and those cards will not pass an actual credit check. Other issuers, banks, and other organizations (the post office, for example) also offered similar services.

I never really liked these services: they don't really support recurring payments, some of them force you to purchase a card with a specific amount rather than it being valid for a specific transaction, sometimes they have issues with various third-party checks (pre-paid card check, region lock/address verification, fraud, etc.), and more importantly it's not an elegant solution, as you end up with a lot of credit card numbers.

Overall, while this one might have a nice UX, it doesn't really solve a problem that hasn't already been solved either through PayPal or through your own credit card company. I can see all payments on my Amex and Visa cards in the UK, I can check which ones are recurring, I can initiate a chargeback, and for everything else, well, there's PayPal, which offers an even easier UX.

27
mindslight 16 hours ago 1 reply      
I like this, especially the repudiating of the privacy-hostile billing name/address voodoo. But I'd worry about forgoing the traditional protection of credit card chargebacks, and having to rely on debit card terms and direct ACH.
28
agotterer 1 hour ago 0 replies      
How does privacy.com ensure you have the funds to pay for the transaction? How do they deal with chargebacks and disputes?
29
fuzzywalrus 15 hours ago 0 replies      
I'm not sure if I'm ready to hand over personal details to Privacy, there's not much assurance other than "We'll never sell your data to anyone".

Does privacy.com see where I make all my purchases? Is there a collection of my metadata? What assurances do I have that you take personal privacy seriously?

30
cordite 17 hours ago 2 replies      
The stop-subscriptions aspect really stood out to me; I once had to spend 40 minutes on the phone with that darn company to get things canceled, even though I only used it for an hour on one day.
31
rgbrgb 14 hours ago 1 reply      
Is this Final without a physical card?

https://getfinal.com/

32
r1ch 17 hours ago 1 reply      
Any way this works without a browser extension? I'm assuming such an extension has full access to every single page in order to do its job, which is a huge security risk. You don't need to be reading my emails or passwords.
33
speeder 9 hours ago 0 replies      
I wish this was "country-agnostic".

I am from Brazil, and the government sometimes censors online stores, or is just an ass...

Also many stores have some sort of licensing agreement that excludes Brazil, sometimes with no other way to get some stuff. For example, there is a series of books that I can't legally obtain copies of after Barnes & Noble closed Fictionwise; anyone in my country wanting one of those books must pirate it (they are digital-only, and the stores that sell them are mostly US-only, and a bunch even check your IP or insert DRM that checks your IP).

If this payment service could hide someone's country, I am very sure that in some countries piracy would drop a bit.

34
rilez7 4 hours ago 0 replies      
It would be great if this + other fintech services catered to overseas markets. It's understandable why they don't, but as an expat/nomad, centralizing your banking is a huge pain point. This cohort is only going to grow.
35
secresearch 1 hour ago 0 replies      
This is an interesting idea. Citi offers something similar, but this seems a lot more convenient.
36
iamleppert 14 hours ago 0 replies      
It looks like funding is done via ACH. Does your business operate a credit operation as well, to handle the risk of money being spent when the ACH transaction can't be completed?

I've always wondered about the business side of that...where does the money come from, how is individual debt handled. Do you operate collections? How do you do this without requiring a credit check? etc..

37
jeena 9 hours ago 0 replies      
My bank in Sweden offers this automatically when you use their website. Not with as nice a UX as this - it is a popup with a Flash app in it - but still good enough to be very usable.

https://translate.google.com/translate?hl=sv&sl=sv&tl=en&u=h...

38
bluejekyll 13 hours ago 1 reply      
A problem I experienced with temporary card numbers is when you need that credit card number again so a purchase can be refunded (out of stock, wrong item, returns, etc.).

I remember having a lot of trouble with the vendor because of this, so I stopped using them. Does this deal with that in some way?

39
darksim905 8 hours ago 0 replies      
Whoever worked on this and put it together/posted this: thank you. I learned a while back that PayPal had something similar but discontinued it. Whatever you have to do to keep this service running, and any help you need in spreading the word, I'm willing to help out. This is needed badly for those who are privacy-conscious.

Thank you :-)

40
makmanalp 12 hours ago 1 reply      
So, my bank in Turkey (Garanti) offered this more than a decade ago - you could make "virtual" cards to use on online transactions, and load them up with the specific amount of money.

This way you didn't need to worry about card numbers being stolen because they were easy to cancel and also didn't have any money in them.

Other cool stuff they did back then: online banking actually had features, and had a 2 factor keyfob. And they had a way where you could SMS people money by sending them a password protected one time code that they could go to any garanti ATM and withdraw cash.

Why are banks in the US so far behind?

41
dcosson 12 hours ago 0 replies      
> Never forget to cancel one of those pesky "30 day free trials."

This seems like a bad idea, I'm surprised they're advertising it. I'm pretty sure not being able to charge your card doesn't let you out of a contract you've signed.

I looked into this because I was too lazy to cancel a gym membership once. There are a lot of stories online of a gym sending someone's account to collections because they thought they didn't have to actually cancel it since the credit card expired.

The product still seems useful for one-time purchases though.

42
mfkp 17 hours ago 2 replies      
Very useful - my citibank credit card used to have a feature like this many years ago (I believe called "virtual card numbers"), but they got rid of it for some reason.

Though I am more likely to give my personal details to citibank than some startup. Trust is a big issue with payment startups.

43
DavideNL 5 hours ago 0 replies      
So instead of giving my data to the companies I buy products from, I'm now giving my data to privacy.com, who then sells it to (unknown) companies?
44
guico 7 hours ago 0 replies      
This has existed in Portugal for at least 10 years (in Portuguese): https://www.mbnet.pt/#compras
45
pavs 15 hours ago 0 replies      
I use netteller, which does something similar, called virtual cards. You can create multiple cards and assign funds to each virtual card. It's not as smoothly done as this one, but it's the same thing.
46
greenspot 10 hours ago 0 replies      
Still, my email used for every transaction will connect the dots. So what's the point?

Awesome domain btw.

47
prohor 16 hours ago 1 reply      
Does it work if I live outside the US?
48
avar 15 hours ago 1 reply      
I've been curious as to why the following strategy wouldn't work as a hack as well:

* Your credit card has a balance of $0 on it

* You have some app that allows $NAME to deduct $X from it

* You transfer $X to it earmarked for $NAME for some limited amount of time.

I.e. you could walk into Starbucks, have an app on your phone to say you're depositing $20 into an account earmarked for /starbucks/i for 30 minutes.
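
A sketch of the authorization check that idea implies on the issuer side; every name below is made up for illustration:

 // Each earmark: a merchant pattern, an amount, and an expiry time.
 const earmarks = [];

 function earmark(merchantPattern, amount, minutes) {
   earmarks.push({ merchantPattern, amount,
                   expires: Date.now() + minutes * 60000 });
 }

 // Approve a charge only if a live earmark matches the merchant and
 // covers the amount; consume the earmark so it is single-use.
 function authorize(merchant, amount) {
   const now = Date.now();
   const i = earmarks.findIndex(e => e.expires > now &&
     e.amount >= amount && e.merchantPattern.test(merchant));
   if (i === -1) return false;
   earmarks.splice(i, 1);
   return true;
 }

 earmark(/starbucks/i, 20, 30); // $20 for Starbucks, next 30 minutes
 console.log(authorize('STARBUCKS #1234', 4.5)); // true
 console.log(authorize('STARBUCKS #1234', 4.5)); // false (already consumed)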

49
llamataboot 10 hours ago 0 replies      
Wondering what the $2k a month spending limit is about? That seems too low to switch all spending to Privacy, but seems like a lot of mental overhead to figure out what I want to use Privacy for and what I don't...
50
efader 15 hours ago 0 replies      
Oh the irony: a bank that offers burner-like credit card numbers and pretends not to know the aggregate transactions, under the guise of privacy.

LOL

51
elchief 15 hours ago 0 replies      
Looks cool.

Supports TOTP 2FA, HSTS, nosniff, CSP, x-frame-options, xss-protection

A+ ssllabs rating

A securityheaders rating

Some issues:

Some user enumeration issues. I emailed security@privacy.com but it doesn't exist...resent to questions@

I don't like how they ask for your bank's login username and password. I don't feel comfortable giving them that. There must be another way.

They should confirm your email address before you can log in.

52
lolobkk 11 hours ago 0 replies      
Privacy.com: "This site uses a weak security configuration (SHA-1 signatures), so your connection may not be private."

They're not even using a secure signature for their SSL cert, and they want to be your trusted payment proxy?

53
Cartwright2 12 hours ago 0 replies      
Is it possible to create and verify a PayPal account against one of these cards? This would allow users to have pseudonymous PayPal accounts. It always bothers me when I go to make a donation that I have to give my real name.
54
nodesocket 10 hours ago 0 replies      
This is awesome, and something I've been thinking about a while. A few concerns though:

$2,000 a month spending limit is too low.

Concern about transactions being declined because they're flagged as pre-paid.

55
hotpockets 10 hours ago 0 replies      
Would there be any way for merchants to accept your cards only? And, hopefully have fees closer to ACH rates, since that seems to be what you are using?
56
DanBlake 16 hours ago 0 replies      
There is a few of these services and they all look awesome. The issue has always been for me that I value my points/miles more than I value the convenience of not worrying about my credit card # being stolen. If I could do this with my SPG card, I would be all over it.
57
jdc0589 17 hours ago 0 replies      
damn. I've been wanting a service like this for a very long time. Not just for privacy or security, but hopefully so that if my banking or real credit card information changes I could just go to one place to make all my updates.

Looking forward to seeing how it looks.

58
nikolay 15 hours ago 0 replies      
PayPal had this and killed it - stupid PayPal! Bank of America has this. Discover has this, too. CitiBank has it, too. I really hate not being able to get cash back with Privacy.com, so I probably won't use it.
59
phantom_oracle 11 hours ago 0 replies      
Which are the supported financial institutions? Your website has no information about this at all, even after digging through it.
60
eiopa 15 hours ago 0 replies      
ACH only :(

I want to use this, but I don't want to give you full access to my bank account.

61
leemailll 15 hours ago 0 replies      
Citi offers this feature, but not sure whether it is for all their credit cards
62
dawhizkid 13 hours ago 0 replies      
Tested on a few websites and immediately blocked.
63
o_____________o 12 hours ago 1 reply      
"Sorry, no compatible accounts were found. Only checking/savings accounts are compatible."

Inaccurate error, FYI.

64
sandra_saltlake 7 hours ago 0 replies      
All the virtual card providers seem to suck on this front.
65
leonaves 17 hours ago 0 replies      
Love the idea, but I just wanted to shout out the logo. Best logo concept I've ever seen, and the whole branding looks great anyway. Brilliant work.
66
tedmiston 16 hours ago 0 replies      
Any plans to make a physical card? Basically the multiple virtual card service you have now but in one card I can use in person, like Coin.
67
justplay 15 hours ago 0 replies      
My bank also provides this type of virtual credit card, but it is useless. It doesn't work; I tried it on PayPal.
68
mtgx 17 hours ago 3 replies      
> STEP TWO: When you check out on any website, the Privacy icon will appear in the card form. Click it to create a new card, and auto-fill the card form. Use any name and billing address you like.

> STEP THREE: After the card is charged, we withdraw the money from your chosen funding account, similar to a debit card.

Not sure I get this. Do you have to fund an account on Privacy.com? So it's like a Paypal where you generate a new payer name every time you pay for some other service with it?

> Sensitive information is encrypted using a split-key encryption with partial keys held by separate employees, meaning no one can decrypt your data; not even us.

Umm. Pretty sure that giving your employees the ability to decrypt my data means that "you" can decrypt it.

69
ginkgotree 17 hours ago 1 reply      
Hey! Such a great idea! Any chance you guys will work with Amex soon? I use my Platinum and Delta cards for everything.
70
jopython 13 hours ago 0 replies      
This feature is offered by BoA. I am still their customer because of this.
71
strange_quark 17 hours ago 0 replies      
So I should give Privacy my bank account information in the name of "security"? No thanks.
72
AznHisoka 16 hours ago 0 replies      
What payer name and address does the retailer see when the transaction goes through?
73
juli3n 15 hours ago 0 replies      
There is something named e-carte in France, and that is directly powered by banks :)
74
StartAppAchill 10 hours ago 0 replies      
logged in, authenticated with my bank, got the code, then nothing. Would not accept my code. Could not move forward.
75
Swizec 15 hours ago 1 reply      
At first I was really really excited. This is something I've wanted for months if not years.

Then they asked for my bank username and password.

76
pcarolan 15 hours ago 0 replies      
Good idea, good marketing. Even if it's not new, this needs to happen.
77
homero 12 hours ago 0 replies      
Not using it without ACH verification
78
AJAlabs 15 hours ago 0 replies      
Some banks like Citibank do this as well.
79
subliminalpanda 17 hours ago 1 reply      
Are extensions for other browsers planned?
80
hdjeieejdj 14 hours ago 0 replies      
The issues I have with this are:

1) Only for online purchases and a limited use case: how many times do I make a purchase online that's not on Amazon, or where I'm not using PayPal?

2) New chip cards already do this for in-store purchases.

3) Loss of travel/reward points.

81
kidsthesedays 12 hours ago 0 replies      
why virtual card numbers aren't worth it: http://www.mybanktracker.com/news/why-virtual-credit-card-nu...
82
plugnburn 7 hours ago 0 replies      
In Ukraine, Fidobank offers "Shtuka" (штука, translated as "piece" or in jargon "a thousand") debit cards that are attached to a MoneXY account that is in turn attached to a mobile number only. And since prepaid cellular service is mostly anonymous here, you can actually have as many anonymous accounts as you want for about 60 UAH (a bit more than 2 USD) each. And still these are physical MasterCards you can put into your pocket, accepted at any supermarket and also suitable for online transactions.
83
kozikow 16 hours ago 0 replies      
Any plans to support UK cards?
84
chris_va 15 hours ago 0 replies      
How are disputes settled?
85
serge2k 17 hours ago 0 replies      
Finally, a card for my dial up needs!

Really though, isn't something like the Apple Pay system a better way? You don't risk getting flagged as a prepaid card and rejected, and you aren't giving out your data.

86
chris_wot 17 hours ago 1 reply      
Is this for only U.S. customers?
87
StartAppAchill 10 hours ago 0 replies      
logged, asdfasf
12
An administrator accidentally deleted the production database gliffy.com
524 points by 3stripe  2 days ago   328 comments top 78
1
arethuza 2 days ago 18 replies      
My very first job - ~25 years ago.

Destroyed the production payroll database for a customer with a bug in a shell script.

No problem - they had 3 backup tapes.

First tape - read fails.

Second tape - read fails.

Third tape - worked.... (very nervous at this point).

I think most people have an equivalent educational experience at some point in their careers.

Edit: Had a project cancelled for one customer because they lost the database of test results..... 4 months work! Their COO (quite a large company) actually apologised to me in person!

Edit: Also had someone from Oracle break a financial consolidation system for a billion dollar company - his last words were "you need to restore from tape" and then he disappeared. I was not happy, as it was his attempts at "improving" things that were the cause of the incident! Wouldn't have been angry if he had admitted he had made a mistake and worked with us to fix it - simply saying "restore from tape" and running away was not a good approach.

2
steven2012 2 days ago 2 replies      
This is what happens when you don't have a disaster recovery plan, or if you have one but never test it out. You need to test your disaster recovery plans to actually know if things work. Database backups are notoriously unreliable, especially ones that are as large as the one this post is talking about. Had they known it would take 2-3 days to recover from a disaster I'm sure they would have done something to mitigate this. This falls squarely on the shoulders of the VP of Engineering and frankly it's unacceptable.

I worked at a company that was like this. My first question when I joined was, "do we have a disaster recovery plan?" The VP of engineering did some hand waving, saying that it would take about 8 hrs to restore and transfer the data. But he also never tested it. Thankfully we never had a database problem but had we encountered one we would have lost huge customers and probably would have failed as a business.

I also worked at a company that specializes in disaster recovery, but our global email went down after a power outage. The entire company was down for 1 day. There were diesel generators but they never tested them and when the power outage occurred they didn't kick in.

Case in point: Test your damn disaster recovery plans!!!

3
Smerity 1 day ago 0 replies      
I was testing disaster recovery for the database cluster I was managing. Spun up new instances on AWS, pulled down production data, created various disasters, tested recovery.

Surprisingly it all seemed to work well. These disaster recovery steps weren't heavily tested before. Brilliant! I went to shut down the AWS instances. Kill DB group. Wait. Wait... The DB group? Wasn't it DB-test group...

I'd just killed all the production databases. And the streaming replicas. And... everything... All at the busiest time of day for our site.

Panic arose in my chest. Eyes glazed over. It's one thing to test disaster recovery when it doesn't matter, but when it suddenly does matter... I turned to the disaster recovery code I'd just been testing. I was reasonably sure it all worked... Reasonably...

Less than five minutes later, I'd spun up a brand new database cluster. The only loss was a minute or two of user transactions, which for our site wasn't too problematic.

My friends joked later that at least we now knew for sure that disaster recovery worked in production...

Lesson: When testing disaster recovery, ensure you're not actually creating a disaster in production.

(repeating my old story from https://news.ycombinator.com/item?id=7147108)

4
Rezo 2 days ago 5 replies      
Treating app servers as cattle, i.e. if there's a problem just shoot & replace it, is easy nowadays if you're running any kind of blue/green automated deployment best practices. But DBs remain problematic and pet-like in that you may find yourself nursing them back to health. Even if you're using a managed DB service, do you know exactly what to do and how long it will take to restore when there's corruption or data loss? Having managed RDS replication for example doesn't help a bit when it happily replicates your latest app version starting to delete a bunch of data in prod.

Some policies I've personally adopted, having worked with sensitive data at past jobs:

- If the dev team needs to investigate an issue in the prod data, do it on a staging DB instance that is restored from the latest backup. You gain several advantages: Confidence your backups work (otherwise you only have what's called a Schrödinger's Backup in the biz), confidence you can quickly rebuild the basic server itself (try not to have pets, remember), and an incentive to the dev team to make restores go faster! Simply knowing how long it will take already puts you ahead of most teams unfortunately.

- Have you considered the data security of your backup artifacts as well? If your data is valuable, consider storing it with something like https://www.tarsnap.com (highly recommended!)

- In the case of a total data loss, is your data retention policy sufficient? If you have some standard setup of 30 days' worth of daily backups, are you sure losing a day's worth of data isn't going to be catastrophic for your business? Personally I deploy a great little tool called Tarsnapper (can you tell I like Tarsnap?) that implements an automatic 1H-1D-30D-360D backup rotation policy for me. This way I have hourly backups for the most valuable last 24 hours, 30 days of daily backups and monthly backups for a year to easily compare month-to-month data.
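
The gist of a generational policy like that 1H-1D-30D-360D scheme is: keep the newest backup per hour for a day, per day for a month, and per 30-day "month" for a year. A sketch of the keep/prune decision, not Tarsnapper's actual code:

 // Given backup timestamps (ms since epoch), return the set to keep.
 function keepSet(timestamps, now = Date.now()) {
   const HOUR = 3600e3, DAY = 24 * HOUR;
   const levels = [
     { maxAge: DAY,       bucket: HOUR },     // hourlies for a day
     { maxAge: 30 * DAY,  bucket: DAY },      // dailies for a month
     { maxAge: 360 * DAY, bucket: 30 * DAY }, // "monthlies" for a year
   ];
   const keep = new Set();
   for (const { maxAge, bucket } of levels) {
     const newest = new Map(); // bucket index -> newest timestamp in it
     for (const t of timestamps) {
       const age = now - t;
       if (age < 0 || age > maxAge) continue;
       const b = Math.floor(t / bucket);
       if (!newest.has(b) || t > newest.get(b)) newest.set(b, t);
     }
     for (const t of newest.values()) keep.add(t);
   }
   return keep; // anything not in the set is eligible for pruning
 }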

Shameless plug: If you're looking to draw some AWS diagrams while Gliffy is down, check out https://cloudcraft.co, a free diagram tool I made. Backed up hourly with Tarsnap ;)

5
SimplyUseless 2 days ago 1 reply      
Been there Done that :)

I was once on-call working for one the leading organizations. I got a call in the middle of the night that some critical job had failed and due to the significant data load, it was imperative to restart the processing.

I login to the system with a privileged account. Restart the job with new parameters and since I wanted not to see the ugly logs, I wanted to redirect the output to /dev/null.

I run the following command./jobname 1>./db-file-name

and there is -THE DISASTER-

For some reason this kept popping in my head - "Bad things happen to Good people"

We recovered the data but there was some data loss still as the mirror backup had not run.

Of course, we have come a long way since then. Now, there is constant sync between Prod/DR and a multitude of offline backups, and recovery is possible for the last 7 days, the month, or any month during the year and the year before.

6
ww520 2 days ago 3 replies      
We've all been there. Shit happens. That's what backup is for.

OT: It's probably bad form to publicly blame someone for it, even if it was done by him. Suffice it to say, we screwed up but are on our way to recovery. It's better to follow the practice of praising in public and discussing problems in private.

7
dools 2 days ago 7 replies      
This is how I learned about xargs ...

I once typed this on a client's production mail and web server that basically ran the whole business for about 50 staff, as root, from the root directory:

chmod -R 644 /dirname/ *

I seem to recall the reason was that tab completion put a space at the end of the dirname, and I was expecting there to be multiple files with that name ... anyway the upshot was that everything broke and some guy had to spend ages making it right because they didn't have the non-data parts of the file system backed up.

I learned that whenever you do anything you should:

find . -name "*.whatever you want" | more

then make sure you're looking at expected output, then hit the up arrow and pipe it into xargs to do the actual operation.

8
rlonstein 1 day ago 3 replies      
BTDT. Got the t-shirt. Early in my career...

* Multiple logins to the conserver, took down the wrong system.

* rm -rf in the wrong directory as root on a dev box, get that sick feeling when it's taking too long.

* Sitting at the console before replacing multiple failed drives in a Sun A5200 storage array under a production Oracle DB, a more senior colleague walks up and says "Just pull it, we've got hot spares" and before I can reply yanks a blinking drive. Except we have only two remaining hot spares left and now we have three failed. Under a RAID5. Legato only took eight hours to restore it.

* Another SA hoses config on one side of a core router pair after hours doing who knows what and leaves telling me to fix it. We've got backups on CF cards, so restore to last good state. Nope, he's managed to trash the backups. Okay, pull config from other side's backup. Nope, he told me the wrong side and now I've copied the bad config. Restore? Nope, that backup was trashed by some other admin. Spent the night going through change logs to rebuild config.

There were a few others over the years, but all had in common not having/knowing/following procedure, lacking tooling, and good old human error.

9
sqldba 1 day ago 1 reply      
The Enterprise I work for is currently implementing a new idea - where they hire a crack team of generalists - and give them complete and utter unfettered access to production (including databases).

This is despite our databases being controlled by my team and having the best uptime and least problems of anything in the entire business. Networks? Fucked. Infrastructure? Fucked. Storage? Fucked. But the databases roll on, get backed up, get their integrity checks, and get monitored while everyone else ignores their own alarms.

The reasoning for this is (wait for it...) because it will improve the quality of our work by forcing us to write our instructions/changes 3 MONTHS IN ADVANCE for generalists to carry out rather than doing it ourselves. 3 MONTHS. I AM NOT MAKING THIS UP. AND THIS IS PART OF AN EFFICIENCY STRATEGY TO STEM BILLIONS OF DOLLARS IN LOSSES.

Needless to say the idea is fucking stupid. But yeah, some fucking yahoo meddling with the shit I spent my entire career getting right, is sure to drop a fucking Production database by accident. I can guarantee it. Your data is never safe when you have idiots in management making decisions.

10
innertracks 1 day ago 1 reply      
Not long ago I discovered backups don't do any good if you delete them. The incident went down while I was wiping out my hard drive to do a fresh install of Fedora. I believe what happened may have been due to sleep fatigue.

Everything is a bit hazy. At one point in my wandering on the command line I found the mount point for my external backup drive. "What's this doing here?" - and decided to remove it.

At some point I woke up in a panic and yanked the USB drive off my laptop. Heart pounding. "Oh shit."

I actually felt like I was going to get sick. Tax records, client contact info, you name it, all gone. Except, basically, the pictures of my kids, mozilla profile, and my resume files.

While I reconstructed some of the missing files, there are a bunch that would be nice to have back. All of the business records, though, had to be reconstructed by hand. By the next day I realized that in the end I really only cared about the pictures of my kids. And those were somehow saved from my blunder.

Work flow change: backup drive is only connected to laptop while backups are being made or restored. Disconnected at all other times. A third backup drive for backups of backups is on the todo list.

11
_spoonman 2 days ago 2 replies      
If that administrator is reading this, chin up ... it happens to the best of us.
12
brainbrane 1 day ago 0 replies      
About 15 years ago, my school's electrical engineering lab had a fleet of HP-UX boxen that were configured by default to dump huge core files all over the NFS shares whenever programs crashed. Two weeks before the end of the semester a junior lab assistant noticed all the core files eating a huge chunk of shared disk space and decided to slap together a script to recursively delete files named "core" in all the students' directories.

After flinging together a recursive delete command that he thought would maybe work, he fired it off with sudo at 9:00pm just before heading out for the night. The next morning everyone discovered that all their work over the semester had been summarily blown away.

No problem, we could just restore from backups, right? Oh, well, there was just one minor problem. The backup system had been broken since before the start of the semester. And nobody prioritized fixing it.

Created quite the scenario for professors who were suddenly confronted with the entire class not having any code for their final projects.

They talked about firing the kid who wrote and ran the script. I was asking why the head of I.T. wasn't on the chopping block for failing to prioritize a working backup system.

13
Jedd 2 days ago 1 reply      
https://www.gliffy.com/examples/

First graphic on this page includes a bright red box asking: "Is your data safe online?"

Evidently not a rhetorical question.

14
DennisP 2 days ago 2 replies      
One time the DBA and I were looking at our production database, and one by one the tables started disappearing. Turned out one of the devs had tried out a Microsoft sample script illustrating how to iterate through all the tables in the database, without realizing that the script was written to delete each table.
15
blantonl 2 days ago 1 reply      
If the gentleman who did this loses his job, then those looking for a new sysadmin should definitely give this guy some serious consideration.

Because I guarantee you he'll never, ever, let this happen again.

16
bliti 2 days ago 2 replies      
The official rite of passage that turns anyone into a bona-fide sysadmin. The equivalent of running your production server on debug. D:
17
krzrak 2 days ago 1 reply      
Once I asked a server support guy to move a database from production to dev. He did - without any question or doubt - exactly that: copied the database to the dev environment and deleted it from production. (Note: in my language the word "move" is more ambiguous than in English - depending on the context it may mean "move" or "copy".)
18
alistproducer2 2 days ago 0 replies      
Last week I deleted a large portion of our pre-production LDAP. I use the jXplorer LDAP client, and for some reason the Ctrl-D (delete) confirm dialog defaults to "OK" instead of Cancel. I'm used to hitting Ctrl-F (search) and then Enter to repeat the last search, and when I hit D instead of F I deleted a bunch of stuff. The silver lining is I patched the problem in jXplorer and submitted it. It's my first legit contribution to a project.
19
d0m 2 days ago 1 reply      
So.. story time. While at the university, there was that project where we had to create an elevator simulator in C as a way to learn threading and mutexes. All the tmp files were stored in ./tmp/.

In between build/run/debug cycle, I would "rm -fr ./tmp". But once, I did "rm -fr . /tmp". At that time I didn't know any better and had no version control.

I had to redo those 2 weeks in a night, which turned out to be easier than expected considering I had just written the code.

My lessons from that:

 A) Version control, pushed somewhere else.
 B) Use simple build scripts.

20
amelius 2 days ago 2 replies      
In my opinion it is way too easy in Unix to accidentally delete stuff (even for experienced users). Having a filesystem with good (per-user) rollback support is, imho, more than just a luxury.
21
Yhippa 2 days ago 0 replies      
"Tell me about a time where something didn't go the way you planned it at work."
22
odinduty 2 days ago 9 replies      
Well, who hasn't done a DELETE without a WHERE clause? ;P
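For the record, the habit that avoids it, sketched with an invented table and predicate:

 -- Preview the blast radius first
 SELECT COUNT(*) FROM orders WHERE status = 'stale';
 -- Then delete inside a transaction so it can be rolled back
 BEGIN;
 DELETE FROM orders WHERE status = 'stale';
 -- If the affected row count matches the SELECT, commit; else ROLLBACK
 COMMIT;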
23
zimpenfish 2 days ago 1 reply      
I've done this - ran out of space on /home for the mSQL database (~1996 era), I moved it to /tmp which had plenty free. I suspect most people can now guess which OS this was on and what happened when the machine rebooted some weeks later...

(Hint: Solaris)

24
orbitingpluto 1 day ago 2 replies      
I was forced to train someone so cocky that they ended up doing a rm -rf / on our production server a month after I quit. He also accidentally euthanized a legacy server, deleted the accounting database when he was trying to do a hardware-based RAID rebuild, completely destroyed the Windows domain server and mocked my daily tape backup regimen - opting instead to ship consumer-grade USB hard drives to off-site storage in an unpadded steel box... The list goes on. He literally destroyed everything he touched. The only reason he wasn't fired was because he was a pretty man.
25
linsomniac 1 day ago 0 replies      
A dark and snowy night a bunch of databases on a server just vanished. This was on a server that was still in development, but was part of the billing system for a huuuge company, and it was under a lot of scrutiny. The files are just gone. So I contact the DBA and the backup group. For whatever reason, they can't pull it off local backups, so tapes had to be pulled in from Iron Mountain.

As I said above, a dark and snowy night. Took Iron Mountain 4 hours to get the tapes across town. The DBA and I finally get the database up around 8am the next morning. I investigate, but can't find any system reason for the databases vanishing, the DBA can't either.

2 weeks later, the same thing happens.

I eventually track it down to a junior developer who has been logged in and has on several occasions run this: "cd /" followed by "rm -rf /home/username/projectname/ *" Note the space before the star. On further investigation, I find the database group installed all the Oracle data directories with mode 777.

26
cyberferret 1 day ago 0 replies      
I am actually gladdened by reading the posts by others on here mentioning how they did the same thing. I've been kicking myself for decades over a similar thing I did when I was starting out as a programmer.

Not as big as some of those here, but back in the late 80's I was a self employed programmer writing DOS apps for local businesses to help them run more efficiently.

There was a local martial arts supply shop whose owner was sort of a friend of mine, and he engaged me to write a stock control and hire database for him, which I did. When it came time to implement, he told me that there was a LOT of data to enter, so he would hire a couple of young students of his to sit down for an entire week and key in the data, which was all good.

After they had finished, he called me back in to 'go live', and I sat down in front of his server PC and began to check that everything was OK. Normally, it is my habit to take a backup of the entire app directory before working on it, but I think I was going through a break up with my then girlfriend and was a little sleep deprived.

I noticed that some temporary indexes had been created during the data entry and went to quickly delete them (thinking to rebuild all the indexes for best performance), but typed in 'DEL *.DAT' instead of 'DEL *.KEY'.

I still remember that sinking feeling as I sat there looking at the blinking 'C:\>' prompt, knowing I had wiped out all his work. Telling the owner was also one of the hardest things I have done, and I fully expected him to pull down one of the sharp oriental weapons from the wall and take me apart.

But he was really cool and understanding about it. He refused my offer to pay for the students to come back in and re-key the data again, which actually made me feel worse, because I knew he wasn't having the easiest time at that point making ends meet in his business.

End of the day, we got it all working and he used the system for many, many years. But to this day, I still make a copy of anything I am about to touch, before I work on it.

27
fortpoint 2 days ago 0 replies      
Sounds like a terrible situation. I wish those guys luck.

One useful sys ops practice is the creation and yearly validation of disaster recovery runbooks. We have a validated catalog of runbooks that describe the recovery process for each part of our infrastructure. The validation process involves provoking a failure (eliminate a master database), running the documented recovery steps and then validating the result. The validation process is a lot easier if you're in the cloud since it's cheap and easy to set up a validation environment that mirrors your production env.

28
xnohat 2 days ago 3 replies      
Every sysadmin could have a bad day like this :) Some years ago I deleted an entire production server with the very simple command "rm -rf /" instead of "rm -rf ./", while logged in with the root account. No words can explain the feeling at that time. Thanks to backups - without them, I would have been killed a thousand times by my customers.
29
3stripe 2 days ago 2 replies      
Posting as a reminder to myself that "in the cloud" != safe

There's always room for computer error and, more likely, human error.

Imagine if something like this happened to Dropbox? Ooooft.

30
jon-wood 2 days ago 0 replies      
I'll join the chorus of people who've done something similar. In my case it was the database of a small e-commerce site, where I'd taken a backup and then formatted the database server to reinstall it.

What I hadn't realised was that the backup script was set to dump to the OS drive, so in the process I'd also just formatted the backup. Thankfully one of our developers had a recent copy of the database locally, but it definitely wasn't my finest hour.

31
Illniyar 1 day ago 0 replies      
I must say that's really the most transparent way to handle a downtime I've ever seen.

I would be scared shitless to expose for all to see what really happened and what is happening, even more so when it makes them look like they don't know what they are doing.

I must applaud them for that. If I ever get into such a nasty situation, I hope I'll be able to do what they did.

32
ghamrick 2 days ago 0 replies      
In prehistoric times on an OS named CTOS, a distributed client/server OS, I was charged with making tape backups of users' local workstations, IVOLing (formatting) the disk, and restoring from tape. The contract specced that 2 tape backups were to be made, but of course in the interest of expediency, I only made one. And then I encountered the user's tape that wouldn't restore. I remember thinking that losing a user's data is the biggest crime a sysadmin can possibly commit, and it taught me a great lesson on the value of backups and their integrity. Fortunately, I swapped out tape drives like a madman until one managed to restore the tape.
33
jjuhl 1 day ago 1 reply      
This reminds me of something I did at a previous employer (an ISP), many, many years ago.

I needed to do an update in a SQL database to fix some customer issue - the statement should just update one row but seemed to take a looong time to run, which seemed strange. When it finished and printed something like "700000 rows updated" I noticed I had forgotten the WHERE clause and I had also not started a transaction that I could roll back. Whoops!

That's when our support got really busy answering customer phone calls and I started asking who was in charge of our backups.

That was not a good day.

34
creullin 2 days ago 0 replies      
Sucks, but we've all been there. If the admin is reading this, it's all going to be ok! Just remember, life sucks, then you die...
35
shubb 2 days ago 2 replies      
Poor guys. Really interesting reading though.

I initially thought it was weird they had to run several "processes" in case 1 failed. But running out of space or something correctable is actually something likely to happen. Is this standard? It's quite smart.

Anyway, assuming they get the data back, I think they've done pretty well - 0 data loss and a day's downtime isn't bad given this is a true disaster.

It would be nice if they'd let us know in a blog post afterwards how the db got deleted, and what they suggest to mitigate it.

36
ZeWaren 2 days ago 0 replies      
That reminds me of the time I imported the nightly dump of a database TWICE into the same server.

Dropping an entire database brings problems; having duplicate content and deleted content coming back brings a whole new realm of other good times.

37
moviuro 2 days ago 2 replies      
My mentor told me: "get everything wrong, but get the backups right", as he was busy debugging the backup solution he had in place at my college (ZFS + NetApp + rsync + sh + perl + tape).

On my own, I'd put CoW wherever possible. It's so easy to delete something on UNIX that it should also be easy to restore, and CoW is without a doubt a no-brainer for this.

38
return0 2 days ago 0 replies      
> We are working hard to retrieve all of your data.

Given that in most cases where a backup exists the user data is not lost, it's a bit unsettling to say that (and also, in most cases admins are not working, they are mostly waiting). It's more reassuring to the user to say "we are verifying that all data is restored correctly" or something similar.

39
novaleaf 2 days ago 1 reply      
My first real job was as a DBA at Microsoft, on a very large marketing database (approx. 1.5TB in 2000).

That experience, seeing how much work is required for "real" production databases, left a bad taste in my mouth. I stay away from self-hosted DBs to this day. (For example, I use Google Cloud Datastore nowadays.)

40
mrlyc 13 hours ago 0 replies      
I've found that it's important to do my own backups and not rely on IT to do them. I once returned from my holiday to find that the sysadmin had wiped my hard drive. He said he thought I had left the company. Fortunately, I had backups on computers in other states that he didn't know about.
41
kchoudhu 2 days ago 0 replies      
I did this to the trading database back in 2008 while supporting the mortgage desk of a major investment bank, a day before Lehman went down.

Thank god for backups and translog replays.

42
wazoox 1 day ago 0 replies      
Ah, that moment when we needed to copy 1 master disk drive to 80 PCs urgently using Ghost, and my boss said "I'll take care of it, I'm very familiar with Ghost" - and on the first PC proceeded to copy the blank disk onto the master.

Problem was: creating the master drive was the job of someone else 1000 km away, with special pieces of tailor-made software... The guy ended up at the airport trying to get someone on a departing plane to courier the disk drive (fortunately for us, some lady accepted; this was still possible in 1998).

43
linsomniac 1 day ago 0 replies      
I once had a client we were running their office Linux server for. They needed more storage, so they asked me to come in and put in some larger drives on the RAID array. Somehow during this, the old drives freaked out and the data was just gone.

So, we go to the backup tapes. Turns out that something changed in the few years since we set up backups, and the incrementals were being written at the beginning of the tape instead of appending. These were DDS tapes, and there is a header that stores how much data is on the tape, so you can't just go to the end and keep reading.

Now, we had been recommending to them every month for a year or more that a backup audit should be done, but they didn't want to spend the money on it.

They contacted a data recovery company who could stream the data off the tape after the "end of media", and I wrote a letter to go with the tape: "Data on this tape is compressed on a per-file basis, please just stream the whole tape off to disk and I'll take it from there." We overnight it to them and a week later they e-mail back saying "The tape was compressed, so there is no usable data on it." I call them up and tell them "No, the compression re-starts at every file, so overwriting the beginning is fine, we can just pick up at the next file. Can you just stream it off to disc?" "Oh. Welllll, we sent the tape back to you, it should be there in a week." They shipped it ground. We shipped it back, they did the recovery, and we got basically all the data back.

44
hiperlink 1 day ago 0 replies      
~20 years ago I was working for a relatively small banking software company (in Hungary) (it was a really good job from the learning point of view, but was really underpaid).

One Monday afternoon one of our clients called: the bank's officers suddenly can't log in, random strange errors are getting displayed for them, etc.

OK, our support team tried to check; we can't log in either, strange error.

"Did you do anything special, [name of the bank's main sysadmin]?"

"Well, nothing special, I just cleaned up the disks as usual."

"How did you do it?"

"As usual: 'mc', sort by file size in the INTERFACE/ folder, marked the files and F8".

That's normal.

OK, since we had the same user account (I knoooow), launch 'mc'. Looks normal. Except... In the left panel the APP/DB directory is opened... Check... Appears normal... At first... But... WAIT. Where is the <BANKNAME>.DB1 file?

"<ADMIN>, how long time did it take?"

"Dunno, I went for my coffee, etc."

Apparently he had deleted the production system's main DB file. It got resolved by restoring the backup from Saturday, and every file and input transaction had to be re-entered based on the printed receipts; the officers stayed late into the night, etc. He is still the head of IT at the same bank. (Yeah, everyone makes mistakes, but this wasn't his only one - though it was likely the biggest.)

45
dkopi 2 days ago 0 replies      
"The good news is that we have copies of our database that are replicated daily, up until the exact point of time when the database was deleted. We are working hard to retrieve all of your data."

Better news would be if every user had local copies of their work too, both in local storage and on a cloud storage provider of their choice. Preferably in a non-proprietary format.

This isn't just about getting me to trust your site if you crash or have a tragic mistake. This is also about getting me to trust your site if you go out of business (as too many startups unfortunately do).

46
jestar_jokin 1 day ago 0 replies      
Earlier in my career, I worked in prod support for an insurance web application. It had a DB containing reference data. This reference data was maintained in an Excel spreadsheet; a macro would then spit out CSV files, which would be used by command line scripts to populate databases in different environments (test, staging, pre-production). The DB data was totally replaced each time. Pre-production data would be copied into production, every night or so.

One time, I ran the staging and pre-production scripts at the same time. This had the unusual effect of producing an empty CSV file for pre-production.

When I got in the next day, I discovered all of the production data had been wiped out overnight...

Thankfully, it was all reference data, so it was just a matter of re-running the export macros, and pleading with a DBA to run the data import job during business hours.

I ended up writing a replacement using generated SQL, so we could apply incremental updates (and integrate better with a custom ticketing system).

47
donatj 1 day ago 0 replies      
A new devops guy at my work a few years ago somehow completely blew away the CDN. Of course we had all of the data locally, but it took almost a full day to re-upload. I believe this is our longest downtime to date.
48
aNoob7000 2 days ago 0 replies      
I would really love to get more detail about how they structured the full backups and transaction log backups for the database. Are the backups dumped to disk before being picked up on tape? Or are the backups streamed directly to the backup system?

I'd also love to know how large the deleted database was. Doing a point-in-time restore of a database that's a couple of hundred gigs should be relatively fast (depending on what hardware you are running on).

49
tobinharris 1 day ago 0 replies      
In 2002 I accidentally executed

DROP TABLE HOTELS;

whilst working on the Virgin Holidays website. We managed to get it back from backup, but it made me shart.

50
girkyturkey 2 days ago 0 replies      
My first internship used Google Drive for their database (small start up) and there have been numerous times where I have almost lost a substantial amount of work/information. This article brought back that feeling of anxiety. But that is a lesson to be learned, even if it was the hard way. Everyone goes through that at some point in their career.
51
okket 2 days ago 0 replies      
These days it should be possible to roll back a few steps (~15 min / 1 hour) with a copy-on-write filesystem like ZFS. A full-scale restore from backup should only be necessary if the storage hardware fails (IMHO).

You still need to apologize for some data loss, though. So make sure that everything you do has one or two safety nets before it hits the customer.
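Roughly how that looks on ZFS - a sketch assuming a pool named "tank" with a "data" filesystem and snapshots taken from cron:

 # Take cheap, frequent snapshots
 zfs snapshot tank/data@2016-03-25-0900
 # After an accidental delete, find a good snapshot and roll back
 zfs list -t snapshot
 zfs rollback tank/data@2016-03-25-0900   # -r discards any newer snapshots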

52
alphacome 2 days ago 2 replies      
I am wondering why the OS doesn't introduce a policy to protect important files/directories. For example, we could mark something as important; then if someone tries to delete it, the OS would ask the person to input some key (at least 20 characters), and if the key is incorrect, the operation would be cancelled.
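Linux gets part of the way there with the ext* immutable attribute - root-gated rather than passphrase-gated, but the effect is similar (the filename here is made up):

 # Mark a file immutable; even root can't delete or modify it
 chattr +i /etc/critical.conf
 rm /etc/critical.conf    # fails: Operation not permitted
 # The flag must be explicitly cleared before any change
 chattr -i /etc/critical.conf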
53
lasermike026 2 days ago 0 replies      
Just reading this headline makes me queasy.
54
Joyfield 1 day ago 0 replies      
I once accidentally moved the cgi-bin (a long time ago) on one of Sweden's biggest websites. Moved it back pretty quickly, so it was "only" down for a couple of seconds.
55
forgottenacc56 1 day ago 0 replies      
Good management blames management for this. Bad management blames the sysadmin and publicly says that "the sysadmin did it".
56
BinaryIdiot 2 days ago 0 replies      
My very first commercial experience doing development was as an intern at Polk Audio. At the time their online solution was pretty immature (no version control and no development environments; everything was coded up in production).

I was working on a very important, high traffic form and...accidentally deleted it. Their backup consisted of paying another company to back up each file. Fortunately they came through but it took a full day to restore a single file.

57
gtrubetskoy 1 day ago 0 replies      
This is where delayed replicas come in very handy: https://dev.mysql.com/doc/refman/5.6/en/replication-delayed.... I don't know whether they're running on MySQL though...
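For reference, on MySQL 5.6 a delayed replica is a couple of statements on the slave (the one-hour delay is just an example value):

 STOP SLAVE;
 CHANGE MASTER TO MASTER_DELAY = 3600;  -- stay one hour behind the master
 START SLAVE;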
58
iamleppert 1 day ago 0 replies      
One time I restored, from a MySQL binary log, a table that held about 10,000 employee pay rates. Unfortunately, the log was shifted a few rows, and the mistake wasn't noticed until a few weeks later, when the CEO and some high-level directors noticed their high-six-figure pay had been traded in for an hourly rate.

What a mess!

59
unfunco 1 day ago 0 replies      
Have done this and similar. And now I have aliases in my zshrc:

 alias db="mysql --i-am-a-dummy"
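(For anyone wondering: --i-am-a-dummy is a real mysql client flag, an alias for --safe-updates - it makes the client refuse UPDATE and DELETE statements that lack a WHERE or LIMIT clause.)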

60
w8rbt 2 days ago 0 replies      
People who do things make mistakes. It's the ones who don't make mistakes that should be of concern.
61
ausjke 2 days ago 0 replies      
Knew one sysadmin who was fired due to his "rm -rf /" fat finger combined with a non-working tape backup scheme.

Also, once we had to retrieve some code from tapes that were just stacked in a messy back room; in the end nobody could find it, but nobody was fired over that either.

62
nwatson 1 day ago 0 replies      
Sorry for those that lost information, personally glad it didn't involve Atlassian Confluence-hosted Gliffy illustrations ... I have a lot of those and the tool is great for quick shareable embedded engineering sketches.
63
matchagaucho 1 day ago 0 replies      
I only store... IDK.... about 80% of my system architecture diagrams on Gliffy.

FML :-/

64
pc86 2 days ago 0 replies      
Looks like the pricing page is 404 right now as well (but all other pages seem to be fine).
65
sirpogo 1 day ago 0 replies      
And Gliffy is back up.

https://www.gliffy.com/apology/

66
alienbaby 1 day ago 1 reply      
Very early career days: I wrote a script that had rm -rf in it. I knew this was dangerous, so the script asked, 3 times, if you were sure you were in the right place.

That was the problem, asking 3 times.. people just spammed enter x3 at that point in the script.

Someone using it came over to me one day: "Hey, look what's going on with this system. I can't do ls?"

There was no system left, pretty much. The script had rm -rf'd while he was root and running the script from the root directory.

The job of the script? Installing and configuring the backups for a system. So yeah, there were no backups for this system at that point in time!

67
PaulHoule 2 days ago 0 replies      
Last time I did that the chief sysadmin had my back and we had it restored in 5 min.
68
noir-york 2 days ago 0 replies      
Admit it - who here read this and didn't immediately go and test their restores?
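A restore drill can be as small as this sketch (the paths and the scratch database name are invented):

 # Restore last night's dump into a scratch database...
 mysql -e "CREATE DATABASE restore_test"
 mysql restore_test < /backups/nightly.sql
 # ...then verify something you actually care about survived
 mysql restore_test -e "SELECT COUNT(*) FROM users"
 mysql -e "DROP DATABASE restore_test"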
69
keitmo 2 days ago 0 replies      
The "Other" Moore's Law:

Backups always work.
Restores, not so much.

70
manishsharan 2 days ago 0 replies      
This is my biggest fear when I use my production Redis
71
daodedickinson 2 days ago 2 replies      
Are there any more sites like gliffy and draw.io?
72
Sujan 2 days ago 0 replies      
Poor guys...
73
hathym 2 days ago 0 replies      
don't laugh, this can happen to you
74
Raed667 2 days ago 0 replies      
Shit happens =)
75
xg15 1 day ago 0 replies      
I accidentally all the data...
76
yitchelle 2 days ago 0 replies      
Just going to add the obligatory http://thedailywtf.com/
77
peterwwillis 1 day ago 2 replies      
Serious question: Do modern "all in the cloud" tech companies actually have DR plans?

All the presentations I've seen about people deploying in the cloud leave out any DR site, replication process, turnover time for the DR site taking production traffic, etc. It's like they believe redundant machines will save them from an admin accidentally hosing their prod site and having to take 3+ days to recover.

78
owenwil 1 day ago 0 replies      
Anyone have a screenshot?
13
Dear Apple, theres nothing really sad about using a 5-year-old PC thenextweb.com
371 points by Ph4nt0m  2 days ago   233 comments top 62
1
mikehearn 2 days ago 14 replies      
I interpreted that comment as a jab at the PC industry. The implication being that in the last 5+ years the PC industry has failed to offer 600 million users a compelling reason to upgrade. Isn't that the correct interpretation, considering the whole point of bringing that up is because they're positioning a new device intended to replace the PC? I seriously doubt it's intended to mean (to quote the article) "LOL poor people".

Granted, this interpretation would make for a boring thinkpiece and would not get me to #1 on HN.

2
sudosushi 2 days ago 5 replies      
As someone sitting in front of a nearly 5-year-old MacBook Pro, I took the comment as an offhand throwaway. I understand not liking the comment, but this isn't news. A company thinks everyone should be using the latest of their products. Oh no.
3
NikolaeVarius 2 days ago 4 replies      
I swear, if a modern tech company went up and said "We think you should buy our product", someone would start yelling about how it's insulting that a company is endorsing capitalism and materialism.

We've gone from "Won't somebody think of the children" to "Won't somebody think of every single possible group that is possible to somehow offend in some way"

4
johansch 2 days ago 6 replies      
I built a desktop PC almost exactly five years ago.

- Intel Core i5 2500K, 3.3 GHz, quad-core (200 USD)

- 8GB (2x4GB) DDR3 1600MHz C9 (100 USD)

- 120GB 2.5" SSD Intel X25-M G2 (200 USD)

- GeForce GTX 460 1GB (200 USD)

It's sad that a full five years later, CPU performance/USD has barely moved at all. RAM is half the cost now, SSDs a quarter the cost. Not sure how GPUs have developed.

(Edit: The GTX 960 which today also costs 200 USD seems to be about twice as fast as the GTX 460.)

5
danielvf 2 days ago 0 replies      
This is classic trolling, plain and simple.

The head of marketing thinks that using a competitor's product is "sad".

In response, this article calls Apple: "Insensitive". Offensive. "Hypocritical". "Insulting". And worst of all, promoting inequality by building high quality, expensive products and forgetting the needs of the poor.

It's an article designed to produce a response.

6
BWStearns 1 day ago 0 replies      
Really this isn't terribly offensive. They're selling a product they think is superior, in their [marketing] minds the world would be a better place if you were pulled from the womb, slapped on the ass, and handed an iPad Pro, to be renewed every generation of gadget.

The Apple presentation stage is not the Basilica of St Peter. These are marketing pronouncements, not moral ones, and analyzing them as such is such intense navel gazing that it's actually bad for your neck. If Apple hates the poor it's for no reason other than that they're outside their customer base.

7
existencebox 1 day ago 1 reply      
Perhaps I have some wires crossed from too much time as a sysadmin, but I take a 5 year old (+) PC (or any machine, really) as a mark of pride, not any bit of shame whatsoever. It speaks to a high degree of reliability which often speaks well of the operator (even if just "choosing robust hardware" is a component of this)

Some stories to add some color to this: I ran a very primitive file-sharing server for my university on a dual P2 (or 3, don't quite remember) machine that was probably around a decade old by the time they finally ended up retiring it. My home fileserver is a ~10 TB 4U monster, running on (conveniently) 5-year-old hardware and very boring FBSD. Outside of moving apartments, it has not had unplanned downtime once; I will continue using it as long as this is true and would be sad if I didn't get another good few years out of it.

I _WISH_ I could get the same lifetime out of desktop PCs but I tend to find assorted parts failing at an asymptotic rate around 3-5 years. The world in which we all use <5 year old hardware is a sad one, to be avoided, to my eyes. (To clarify, I don't mean this in any luddite sense, I don't believe tech should stop moving forward, but I long for more robust products with longer viable lifespans, such that one can make a choice to upgrade rather than waiting for the inevitable.)

8
oldmanjay 2 days ago 0 replies      
I'm not a fan of moralistic handwringing, particularly when it's brought about by uncharitable interpretations of what is clearly just marketing. This and the related articles are such poor quality that it makes me sad to see them get so much traction here. It's the sort of thing I'd expect at dailytech or slashdot.
9
colund 2 days ago 3 replies      
In times of climate change debate, I think Apple is doing the wrong thing here: encouraging a buy-and-throw-away mentality and increasing waste.
10
specialp 1 day ago 2 replies      
It is ironic that Apple is mentioning this, as I believe this is going to spell the end of their era of massive profits. Phones now are getting to the state where, much like PCs, the older phone is good enough and the new phone is not substantially better. There will always be people buying a phone for .2GHz more CPU or some slightly higher-res screen, but the days of rapid evolution of mobile devices are over, and Apple is going to have problems selling someone a $600-800 phone every year or two.
11
agentgt 1 day ago 1 reply      
I interpreted it as "it's sad that those Windows users have been using just Windows for 5 or more years and not a Mac".

IMO he's catering to the audience of Apple enthusiasts (who are generally the ones that watch Apple events) and not making fun of poor people.

It's sort of analogous to when Jobs said "It's like giving a glass of ice water to somebody in hell" about iTunes on Windows computers.

Oh, so Jobs thinks Windows users are evil since they are in hell, right?

12
tombert 2 days ago 1 reply      
Did nothing interesting happen in the tech world today? This is such a non-story, it's really weird that this is on the front-page twice.
13
apatters 2 days ago 1 reply      
If there's something sad about that fact, it's that the industry has delivered so little value in the past 5 years that not many people feel compelled to upgrade!
14
sergiotapia 1 day ago 0 replies      
Jesus christ, they are in the business of selling computers. This faux outrage over a salesman trying to sell his computers is gross - what's wrong with people?

Now watch as every blog tries to scramble to see who has the most outrage and who is the largest victim.

15
johnhattan 2 days ago 2 replies      
Actually, my main development tower is about nine years old.

And in that time, I've upgraded the processor, doubled the memory, upgraded the hard drive to an SSD, switched the video card twice, upped the number of connected monitors from one to three, and upgraded the OS from 32-bit Vista to 64-bit Windows 10.

It was pretty leading edge when I built it, and it's still pretty leading edge today. What's sad is the expectation that I should throw my computer away every 18 months.

16
c0achmcguirk 2 days ago 0 replies      
"I want to be offended!"

signed, people who take offense at an off-hand remark like this.

Grow some thicker skin and stop wasting my screen real estate with irrelevant non-stories like this.

17
dcustodio 2 days ago 0 replies      
Some people need to feel offended just as I need my morning coffee. I'm not even counting how old my PC/laptop is, and that's the thing I like about PCs - there's hardly anything new that triggers my Gear Acquisition Syndrome.
18
studentrob 1 day ago 0 replies      
Schiller is marketing his product. This is no different from the "I'm a Mac, I'm a PC" commercials.

There are now two articles on the HN front page about this utterly pedantic topic which boils down to marketing. Unbelievable.

19
imaffett 2 days ago 0 replies      
I still use my 2009 MBP. I upgraded to an SSD and 8 gigs of RAM. I don't game on it, but I can do almost all of my development on it. I'll admit it's slower than my 2013 MBP at work, but I see no reason to spend more money on a working computer.

My second computer is a Chromebook. My oldest daughter uses it for school work and we couldn't be happier. It's much better than our iPad (which we don't use anymore).

20
fit2rule 2 days ago 0 replies      
You know what makes me really happy? Any computer being used for fun/interesting/productive things, not just 'the latest ones'.

As a die-hard retrocomputing enthusiast with far more old computers in my basement than new ones, I'm biased. But I sure think the time has come for the computer industry to start highlighting the need for less computing power, yet more productive computing.

8-bit computers are awesome. 16-bit machines superb! Get yourself set up with these systems and you can entertain yourself for hours and hours. 8-bit is a great way to learn software development - 2 hours of 8-bit coding a week will keep you sharper than sharp when the time comes to go back to the hipster-tools-du-jour. (I kid, I kid.)

Point is this, folks: old computers never die - their users do.

21
drzaiusapelord 2 days ago 1 reply      
If anything it's a fairly strong statement on the ruggedness of the PC platform. I have a 2500K i5 in my old desktop. With a new-ish video card I play all the newest AAA games at high quality. It's incredible how the x86 world really hasn't had any huge performance bumps and how a Q1 2011 CPU is still competitive.

Also, there's the larger narrative of people buying tablets and putting off PC upgrades, so the PC ages. Don't worry Apple, you're still getting their money. It's just that people aren't ready to replace a general-purpose computer that they control, and that can run pretty much everything, with a walled-garden mobile device designed to get ad impressions and consume media.

If anything, this is Apple's frustration. They have all this success but people and businesses keep buying PCs. They'll never crack this market. They're too invested in the Jobsian "closed" ecosystem philosophy to be as agile as the PC platform. Mocking those who don't drink their kool-aid just makes them look like sore winners.

edit: I'm aware I can buy a newer chip, but from a single-core vs single-core perspective it's not that much faster. Very little consumer software is properly multi-threaded, which is why my expensive work computer with the newest i7 doesn't feel any faster than my 5-year-old desktop at home. Most things are pegged to one core, and at the end of the day single-core performance is what's going to matter.

22
tfandango 1 day ago 1 reply      
I'm a little sour on Apple. They say their products are rugged and "built to last", but an iPad is mostly glass, which you need to encase in a giant rubber protective case if you don't want the screen shattered. Even then they are easy to break and very costly to fix, to the point now where it barely makes sense to fix one rather than replace it. Self-repair is less expensive, but so far I'm 50/50 on successes. I would say they are built to last - until the next one comes out.
23
flyinghamster 1 day ago 2 replies      
As someone who just picked up a reconditioned six-year-old, i7-equipped ThinkPad for a hell of a lot less money than what it would have cost brand-new, I have to laugh. It may not be the Latest and Greatest, but it's still speedy enough to handle anything I'm going to throw at it. The CPU performance curve has flattened out in the last few years, to the point that it's not worth spending lots of money for a modest performance gain.

I'll let someone else take the depreciation hit.

24
emp_ 2 days ago 0 replies      
The only things not lasting more than 5 years in my 20+ years of owning computers are Macs and other iDevices; PCs are much, much stabler. A MacBook Pro 2008 (died in 3 years), iMac 27 2010 (died in 4.8 years), MacBook Air 2013 (dying) and Mac Mini 2012 (dying) are the reason why I always end up coming back to my 2010 gaming PC and 2010 HTPC, the ones to break the 5+ year mark without a major hardware failure.
25
po1nter 2 days ago 0 replies      
I don't understand how the author went from the guy saying it's "really sad" to "Apple is insulting people". I mean even his first argument on why people don't upgrade IS actually sad since they can't afford to do that. I know it is because I'm one of those people who can't afford to upgrade my machine.

/rant typed on a 5 year old Asus N53SV.

27
mrbill 1 day ago 1 reply      
I said this in the other thread..

I just bought a "new to me" laptop.

Refurb Thinkpad T420s from 2011.

I added 16GB RAM, two Intel SSDs, an Ultrabay battery, and an 802.11ac wifi card.

Grand total: less than $325.

This will be my primary portable for at least 2-3 years, and it's already four years old.

Just because I can afford Apple doesn't mean I can justify the 2x price premium, or that "old" hardware isn't capable.

28
islane 1 day ago 0 replies      
Echoing the comments from others, my "main" PC is about 7 years old, running the x58 platform (socket 1366) - it easily outperforms my work-supplied development laptop.

For the uninitiated, the ebay workstations mentioned are typically these ancient x58's. Most support hex core xeons, 24gb ram (or 48gb unofficially, more on server boards and some workstations), and a pile of PCI-express lanes. As such, you can easily add in PCI-express m.2 SSDs, USB 3/3.1, and GPU's to your heart's content. The takeaway is that old pc tech can be had at a fraction the cost of new hardware with comparable performance.

I understand the marketing nonsense from Apple, the "PC does what" consortium, and hardware vendors on the whole - but there is nothing sad about owning an old pc. The reality is that the best performance for the price lies in "obsolete" platforms.

29
atomical 2 days ago 0 replies      
Is there anything sad about using a 5-year-old Mac? I'm hoping my MacBook Pro will last 5-10 years. With multi-core systems, SSDs, and 16 gigs of RAM in MBPs, do we really need to be upgrading so much? Also, clock speed advances have stalled.
30
sz4kerto 2 days ago 1 reply      
I think I have a rusty, almost 5-year-old PC lying around. It has an Intel i7-2600K overclocked to 4.6 GHz, 24 GB RAM, a 240GB SSD and a Radeon 6950 GPU that can run most games well in full HD.

Most of the machines Apple sells are actually slower than this PC.

31
parenthesis 2 days ago 0 replies      
Until earlier this year I was still making heavy use of a Powerbook from 2004. I only stopped using it because the graphics hardware started to go funny.

I sold its battery, memory and power supply to someone still using a slightly older Powerbook.

32
draw_down 2 days ago 0 replies      
I agree that this was tone-deaf of them, I just find it fascinating that we picked this one instance of tech industry rich guy tone-deafness. Seems to be getting a lot of play for some reason. But it's everywhere if you look.
33
Overtonwindow 1 day ago 0 replies      
I agree with this piece. I have long been upset at Apple's forced obsolescence policy, and at creating the notion that devices are disposable. I still have a MacBook from 2009, and another from 2012, that I am doing everything in my power to upgrade and protect from the forced slowdown. Likewise with my iPad and phone. I resist the persistent upgrade requests to the OS. Not because I am blasé about security issues, or don't want bugs to be fixed, but because those fixes and upgrades come with a cost: premature, forced obsolescence.
34
scarface74 1 day ago 0 replies      
Thinking of all of the computers I've had since 2007, all of them are usable and still in use at least once a week.

2006-era Mac Mini Core Duo 1.66GHz. Runs Windows 7; gave it to my mom. She still uses it.

2009-era Sony Vaio - Core Duo 1.66GHz, 2GB RAM. Windows 7. My son uses it for Office and Minecraft.

2009 Dell Pentium Dual Core 2GHz, 4GB of RAM. It's still my only laptop. The display is 1600x900 and is still better than many cheap laptops'. The battery is crap though.

2011 Core 2 Duo 2.66GHz laptop. My wife's computer. It still feels fast.

My workhorse is a 3GHz i3 with 6GB of RAM. Bought in 2012.

35
balls187 1 day ago 0 replies      
I believe that an iPad (or comparable Android Tablet) is better for most computer users than any low/mid-tier 5-year old PC.

With PC's you can upgrade them, and make tweaks to squeeze out every bit of performance, but by and large for most people, when taking into account that mobile content consumption is on the rise, a tablet is a better upgrade than a new PC. Tasks like email, text, video, music, photography, facebook, are pretty much now done via mobile phone. For these people, PCs are anachronistic.

36
rsync 1 day ago 0 replies      
You know what's really sad?

I am using a 2009 octo Mac Pro, which is now 7 years old, and since that date Apple has not released a single product compelling enough to upgrade that system.

37
reacweb 1 day ago 0 replies      
I have an HP Pavilion Elite m9458fr bought in July 2009 for €416 (on eBay, hp_marketplace_fr). I only need a silent, reliable computer with a reasonably fast CPU for web development on Linux. I have no need for a beefy GPU; I just need to connect 2 displays (23" and 19"). I would like to replace it in order to have USB 3 connectors and a SATA III HD, but I do not find anything on the market for a reasonable price. Should I buy an ultra-HD laptop with a magnifying lens?
38
gd2 1 day ago 0 replies      
It seems a somewhat strained interpretation to view this as Apple being anti-poor. But it does point out that Apple is losing touch with what people do with personal computing power.

Phones are much better than five years ago; computers, not so much. I'd much rather spend my dollars where the major improvement in computing technology is than spend to upgrade a desktop that shows me almost the same thing as before.

39
jitendrac 1 day ago 0 replies      
I am using a 7-year-old PC with a single upgrade (the motherboard), and I have no reason to buy new!!!! No reason to buy any expensive Apple laptop or iPad.
40
ayb 1 day ago 0 replies      
I'm using a MacBook Pro from late 2011.

Most of the actual machine specs (i.e. processor and max RAM) have barely changed in 5 years. I put 16 GB in my laptop 5 years ago and it's still the most you can squeeze into a 13" MacBook Pro.

It's sad (and somewhat telling) that Apple has not packed more power into this form factor over the past 5 years.

41
PaulHoule 2 days ago 0 replies      
The main problem I see is that Intel and Microsoft have given up on power users; it is all Apple envy and phone envy. No wonder people don't buy new PCs.

Back in the '00s I had a policy of never rehabilitating an old PC, because a new PC was better in every way.

The other day a friend brought me a MacBook from 2007 with a busted HDD. I put in an SSD I had lying around and we got Win10 running on it with no drama and no Apple malware (iTunes, Boot Camp, etc.). It feels faster than a Skylake machine with one of those useless hybrid hard drives, and after puffing some HCFC gas through the fan it is great.

Any pre-Core 2 machine would go to the trash - I would not even donate it to the poor - but frankly Broadwell and Skylake are just an excuse to reduce the I/O slots to suit manufacturers of gfx cards.

They say customers get better battery life, but software screws that up if they really tried it. The most you can get is to spend a lot of money on a thin-and-light machine that the doorman can slide under the hotel door, or a 2-in-1 machine just because you need a trackpad on a touchscreen machine - and then have a fight with the stewardess over if and where you stow it, just to have another reason to get arrested at your destination.

I mean, even IBM sells 360 chips that clock over 5GHz and use water cooling. It is not that hard.

42
johndevor 2 days ago 0 replies      
Holy crap the "politically correct" army has entered the tech world. We're not safe anywhere...
43
craigmccaskill 1 day ago 1 reply      
I'm using a ~5-year-old PC I built myself, and it still outperforms the hardware in any available Mac product (desktop, tablet or laptop) that isn't a Mac Pro (starting price $2,999.00).

I don't have a compelling reason to upgrade until the launch of VR headsets.

44
holri 1 day ago 0 replies      
Maybe users of old computers are not poor, but:

* They do not suffer from avarice?

* They do not need the newest shiny toy for their ego and to win recognition?

* They don't touch a good running working system?

* They want to avoid CO2 emissions and save rare earths?

* They have a frugal life?

* They know what eco-sufficiency means?

* They know that consumerism does not make happy?

45
yq 2 days ago 0 replies      
Semi-related: iCar release date rumours, features and images: Apple CEO Tim Cook comments on Apple Car rumours

http://www.macworld.co.uk/news/apple/will-apple-make-icar-pr...

Reading related news is interesting. Imagine Apple applying its tactics to their cars: you'd probably need a new Apple-designed plug instead of the universal one, special tools to change flat tires, and/or a slightly updated exterior each model year to market - therefore driving a 5-year-old car is sad.

46
jaimex2 2 days ago 0 replies      
There is however something sad about buying products to not look poor.
47
jvagner 1 day ago 0 replies      
If your child's school only had 5 year old computers, you'd probably think, "Huh, it'd be better if this school had newer computers."
48
compactmani 1 day ago 0 replies      
600 million PC users discovered that running a lightweight unix distribution/DE and not allowing javascript made the life expectancy of their computers triple.

Or so I dream.

49
SeanDav 1 day ago 0 replies      
I miss my "Turbo" switch that used to be on older PC's. Just press "Turbo" and your PC is good for another couple of years.
50
rabboRubble 1 day ago 0 replies      
Hahahahahhahahahahah... my main machine is a Macbook Pro, Mid-2009. Going on 7 years old. Hahahahahahah.
51
facepalm 2 days ago 0 replies      
I can be "not wrong" and sad at the same time. A newer PC would likely be much faster, resulting in less stress and less wasted time.
52
chasing 1 day ago 0 replies      
I, too, am shocked that a computer company would communicate that their new computing devices are better than the old ones people already own.

Shocked.

53
agumonkey 2 days ago 0 replies      
-- sent from my almost perfect 9yo laptop
54
justinholmes 1 day ago 1 reply      
My 5-year-old PC has a dual-socket Xeon setup that I think beats any iPad rubbish.
55
ArenaSource 2 days ago 1 reply      
The last MacBook Pro generation is from 2012... this is really sad, it really is.
56
bliti 2 days ago 0 replies      
I use a 4 year old MBP. Does that make me middle class then?
57
shanselman 1 day ago 0 replies      
Yikes...My primary PC is 5 years old. Works great.
58
busterarm 2 days ago 0 replies      
Still running a desktop with a Q9650 and 8GB RAM.

Runs fine.

59
balls187 1 day ago 0 replies      
Apple v PC flame war still lives on.
60
kps 2 days ago 0 replies      
My main home machine is an 8-year-old Mac Pro running Snow Leopard. Apple today sells nothing that could replace it, let alone improve on it.
61
programminggeek 1 day ago 0 replies      
What Phil said worked.
62
snowwrestler 2 days ago 4 replies      
Yes there are things that are sad about it. A 5-year-old PC is probably not running a recent version of Windows, and has a higher likelihood of being compromised. And if a person wants a new computer but can't afford it, that is sad too.

That said, it was an obviously stupid stat for Schiller to cite. But now we're going to be subjected to a long series of Apple-bashing articles that overreach in the opposite direction. By the end of today we'll see multiple "actually, I'm proud to be running a 5-year-old PC" posts.

Why? Because when mining for pageviews, there are few veins as rich as bashing Apple.

Edit to add: If we want to talk about the tech industry and poor people, let's do so. How many new companies are variations of "let us bring things to your door for you for an extra fee" or "let's give you personalized service so you don't have to go shopping/ride a bus/interact with a human"?

14
Boom (YC W16) signs $2B letter of intent with Virgin, $5B total techcrunch.com
423 points by lacker  1 day ago   163 comments top 19
1
brenschluss 1 day ago 3 replies      
Wow, who's doing the marketing? Pretty savvy. If they had launched with this notice, we'd be thinking, "Oh cool, Virgin's making a supersonic airplane with some company!"

Instead, with this tiered announcement: three days ago, nobody knew about Boom. Within the last two days, they have lots of new press, and thus lots of skepticism. Today, they "announce" an effective endorsement from Virgin. Brilliant.

2
paulsutter 1 day ago 1 reply      
Title misleading. The relationship is the opposite of what's implied. Boom will be hiring Branson's "The Spaceship Company" to do engineering, and Virgin gets an option to buy the first ten planes.

Granting an option means Boom has given something to Virgin, not the other way around.

> a Virgin Group spokeswoman confirmed their plans to The Guardian: "We can confirm that The Spaceship Company will provide engineering, design and manufacturing services, flight tests and operations and that we have an option on the first 10 airframes. It is still early days and just the start of what you'll hear about our shared ambitions and efforts."

EDIT: Let's hope we hear that Virgin is making a big investment in Boom soon; that would be a stronger indicator. If Virgin really had substantive interest in the planes, and Boom was actually experiencing demand, Virgin would have had to PAY MONEY for an option.

3
Certified 1 day ago 2 replies      
Maybe I am misinformed, but I was under the impression that part of the reason Concorde was retired is that at those speeds you tear up the ozone layer.

"From their particle measurements, the authors of the Science study calculate that a future possible fleet of 500 supersonic passenger aircraft will increase the surface area of particles in the atmosphere by an amount similar to that following small volcanic eruptions. In mid-latitude regions, such emissions have the possibility of increasing ozone loss above that expected for nitrogen oxide emissions alone. The increase in the number of particles may also affect the ozone-related processes occurring on wintertime polar stratospheric clouds (PSCs) in the polar regions."

-http://www.publicaffairs.noaa.gov/pr95/oct95/noaa95-65.html

4
abalone 1 day ago 1 reply      
I'm disheartened there's no mention of carbon footprint or sustainability. With all the great focus in our community on sustainable land transport (Tesla), data centers, solar and neo-nuclear power, etc., here we have a startup that markets this:

"imagine leaving New York in the morning, making afternoon meetings in London, and being home to tuck your kids into bed."

That is a terrible thing to enable from an environmental impact standpoint. While air travel can be somewhat more efficient per mile than driving alone in a gas car[1], the distances it enables you to travel are vastly greater. Further upping the convenience factor would no doubt encourage more "binge flying".

Perhaps Boom is more efficient than typical airplanes. Perhaps it's less efficient. Perhaps it could make planes that travel at the same speed but at half the carbon footprint. We wouldn't know from this. It's not part of the story.

Let's change that. Let's make sustainability as important a consideration with airplanes as it is with cars.

[1] http://www.yaleclimateconnections.org/2015/09/evolving-clima...

5
onion2k 1 day ago 1 reply      
Awesome as this is, there's something largely outside of Boom's control that could derail their plans. When Concorde was ready to fly the British and French governments (who paid for the plane's development) had to negotiate with New York for permission to land there. There were concerns about the noise - not sonic booms, just general plane noise, because Concorde was "loud". The problems were actually political ones because Concorde wasn't American. Boom will have to do the same, but London has some really bad NIMBY issues with airports and the expansion of Heathrow at the moment. Problems that will probably continue for decades. If the anti-expansion environmental lobby groups can block this, and they see it as politically useful to do so, there will be a lot of negotiating to wade through.
6
maxxxxx 1 day ago 4 replies      
I would be extremely impressed if they could pull this off. Developing a supersonic passenger plane must be one of the most expensive and complex things to do. Probably harder than what SpaceX does.
7
vannevar 1 day ago 2 replies      
If building a supersonic passenger plane with an existing engine was viable, it seems like somebody would already be doing it. Which means it's probably not viable, in which case Boom is an aircraft engine company first and foremost; once they have the engine, wrapping a plane around it should be the easy part, relatively speaking. Developing a new engine costs on the order of $1B, so the development cost for the whole plane would probably be between $1.5-2B. So they'd pay for that with their first 10 planes to Virgin, with maybe some money left over to change their name.
8
samfisher83 1 day ago 7 replies      
Just for some context: the 787 cost $32 billion to develop.
9
kafkaesq 1 day ago 1 reply      
So what's the carbon impact (per passenger-mile) of the service Boom is proposing (compared to regular air travel) again?

I'm not sure this is something to cheer, just because it's new and shiny (and because it appeals to tech types who fancy they'll finally get to afford a ride on one of these contraptions, some day).

10
haberman 1 day ago 2 replies      
I don't know the first thing about how a business in the space operates. Can someone fill me in on why a startup like this would go through YC? How are $120k and Silicon Valley connections going to help a startup in this space in the slightest?
11
oniony 1 day ago 2 replies      
Letter of intent. Not a contract. The article speaks as if it's a done deal.
12
nerdy 1 day ago 1 reply      
They got $5bn in LOIs, a new YC record: https://twitter.com/sama/status/712705887853383680
13
frenchman_in_ny 1 day ago 0 replies      
I love what these guys are doing, but I find it odd that they're using an actively registered tail number (N22BT) in their mockup images.
14
rory096 1 day ago 2 replies      
This is awesome. Great counterpoint to all the naysayers and middlebrow dismissals on Monday. I'm pumped to see this thing fly.
15
rdl 1 day ago 1 reply      
Boom is probably my favorite new company in a long time. I wish there were infosec concerns :)
16
rgovind 1 day ago 1 reply      
What is the importance of an LOI? It's non-binding. So why does it matter, except as a PR exercise?
17
mathattack 1 day ago 0 replies      
In a prior thread I asked about how they would get the funding they need straight out of YC. I guess this is the answer!
18
nbevans 1 day ago 2 replies      
The article describes Concorde as "ill-fated". Is that accurate?
19
forrestthewoods 1 day ago 0 replies      
Yesterday: Hahaha what a stupid company name! What a bunch of idiots!

Today: Oh shit

15
What I Learned Selling a Software Business kalzumeus.com
423 points by gyardley  1 day ago   83 comments top 14
1
aresant 1 day ago 2 replies      
FEI still has the original listing on their site:

Yearly revenue - $31,000

Yearly net profit - $19,000

Asking price - $57,000 SOLD

It's fascinating what a small amount of money we're ultimately talking about vs. the influence of the "cult of Bingo Card Creator fans" on HN - of which I am a card-carrying member.

(1) http://feinternational.com/buy-a-website/3745-software-busin...

2
song 1 day ago 1 reply      
Just wanted to quote this:

"Im told, against my expectations, that BCC was impressively well-documented by the standards of other businesses its size. This implies that many people are running their small projects in even more of a cowboy fashion than I do, for example by not having dedicated books for the business. If this describes you, God help you. At a minimum, get your books for the last year done professionally whatever you spend on bookkeepers/accountants will be a pittance next to the time saved and additional valuation captured."

Even if you're not selling, getting this done will save a lot of headaches down the road... Dedicated books for the business are a MUST. I know a lot of small businesses where this is not done religiously, and it always comes back to bite the owner in the ass...

EDIT: By the way, I was curious, so I just took a look at the BCC site; the blog is timing out...

3
dennisgorelik 1 day ago 7 replies      
This time Patrick's summary of Bingo Card Creator does not look rosy at all.

All facts are still the same, but the overall impression of BCC now is that it is a small, declining and time-consuming business. Patrick himself actually struggles with money, like all of us.

Patrick definitely has (had?) the power of optimistic spin in his stories.

4
davidw 1 day ago 1 reply      
> Back in the day someone won a Nobel Prize for pointing out that, if a population of goods has unknown potentially costly problems, and there is no way to determine which particular instances of the goods have the problems, the market will penalize all goods in that population. The canonical example is used cars.

George Akerlof and "The Market for Lemons": https://en.wikipedia.org/wiki/George_Akerlof

5
sdrinf 1 day ago 2 replies      
| Selling BCC was going to pay for living expenses while we built Starfighter's first game (Stockfighter) and also pay for some development work to assist with the sale of my other SaaS business, Appointment Reminder.

^^ - you're selling AR as well? The last presentation showed it to be a profit machine? Would love to learn about the reasoning behind that decision!

6
benologist 1 day ago 2 replies      
I'm spending this year packaging up my current business to make it as attractive as possible to potential buyers.

This talks a lot about the process, but what are some things people like us can do to maximize their return on such a sale?

7
simonswords82 1 day ago 1 reply      
I got in touch with FEI about selling one of my web businesses and they couldn't help due to our UK focus. Sucks for us; I've heard good things about them.

Can anybody recommend a broker that assists UK based and focussed web businesses?

8
voltagex_ 1 day ago 0 replies      
>migrate all of my email in Google Apps for Work (oh God, don't ever do this)

Yeah... I have a non-trivial number of purchased Android apps on my Google Apps for Work account ($6AUD/month) and there's no published way to move apps to a "normal" Google account.

I'd probably be paying Google forever if I had business dependencies hanging off that account (but I set it up when custom domains were "free").

9
voltagex_ 1 day ago 0 replies      
>Accordingly, I decided to retroactively cut her in for 5% of the business. Props to Pepper for accommodating this request, as it is somewhat non-standard. ("Can you invoice me a substantial amount of money and promise me that you will pay a particular employee of yours a bonus of the same amount, net only of taxes?" "We can do that.")

Businesses exist that are this cool? Where do I find them?

10
pitt1980 22 hours ago 0 replies      
"People try to buy software businesses with no money down. (Will you loan me the entire purchase price of the business? Ill pay you back over the next 3 years. Promise!)"

----------------

While I see why you would have run away from that particular structure, I'm curious how flexible you might have been on a straight lump-sum structure.

If I were buying a business like this, a structure of $X down, plus some percentage of revenue for Y months until you were paid some amount Z, with some contingencies built in, would look pretty attractive.

a seller's willingness to agree to terms like that would send a pretty strong signal that they weren't selling a lemon

as a buyer, I'd be willing to commit to a Z significantly higher than what I'd be willing to commit to as a lump sum up front

if the seller believed in the business (and, I guess, were able to substantiate that I had enough ability not to drive the business into the ground), it seems like such a structure would net the seller more as well

----------------------

I'd love to hear your thoughts about how receptive you might have been to an offer like that

11
BorisMelnik 1 day ago 0 replies      
I think one of the big things about BCC isn't how much (or how little) money he made, but how well documented the process was. We've all seen plenty of projects do $10k months, but not many of them are sustainable or so well documented in a blog.
12
raymondhong 1 day ago 0 replies      
great
13
sbierwagen 1 day ago 1 reply      
(2015)
14
quellhorst 1 day ago 3 replies      
Such a long article but no mention of how much he sold it for.
16
D'Oh My Zsh How I unexpectedly built a monster of an open source project medium.com
368 points by werrett  1 day ago   92 comments top 20
1
kbd 1 day ago 5 replies      
Could someone please explain to me the attraction in using Oh My Zsh (and similar)? It seems strange to me to use others' configs.

Over time I've customized my bash config and have all the information I want in my prompt. If I ever switched to zsh I'd just learn how to translate what I have in bash. Why would I want to start with someone's big framework for configuration?

2
smitherfield 1 day ago 3 replies      
A personal pet peeve is when people refer to having lots of features as "bloat." It's generally very nice to have lots of features all in one place without having to do things manually, or find somebody's potentially-sketchy plugin/app.

My test for "bloat"

1. Has the project grown so large that it's starting to run into real-world performance issues? (I can't imagine this could be the case with even the largest of shell configurations except on very low-end embedded devices).

2. Has the project grown so large that bugs are popping up faster than the developers can do maintenance? Is there no or only one person who can read and understand the entire codebase? (AFAIK, no and no).

3. Are there many undocumented/poorly-documented features? Are there features that are both undocumented/poorly-documented and dangerous? Are there many deprecated or outdated features that have yet to be removed? (AFAIK, no, no, no).

4. Are there many features that both duplicate and do not improve on functionality found elsewhere? (Debatable and mostly subjective).

3
shpx 1 day ago 7 replies      
If you like oh-my-zsh you'll love https://fishshell.com/. The main thing I use is the better command completion as you're typing, and you can complete by word with alt-f. And Ruby-like syntax.

for example (| is the cursor and everything after it is grey text)

>echo hello world

hello world

>echo |hello world # alt-f

>echo hello| world

Fish and oh-my-zsh both take about 5 seconds to init, though. If you don't like that, you should be using prezto (which is the fork he mentions in the article).

4
cies 1 day ago 2 replies      
"I like Prezto[1] nowadays" -- an `Oh my zsh` refugee

1: https://github.com/sorin-ionescu/prezto

5
wsha 1 day ago 0 replies      
I like learning from other people sharing their config files, but my attitude towards oh-my-zsh is similar to that of the author's co-workers, in that I don't want to install a bunch of customizations that I don't understand. I couldn't find a summary of everything oh-my-zsh is supposed to do, and the source has grown too large for me to read it quickly. I guess I trust code I haven't read most of the time I am using a computer, but it feels wrong to me to allow my shell to auto-update customizations that I don't understand.
6
scosman 1 day ago 6 replies      
"Id become dependent on these shortcuts."

The intro to this article is as much a caution of becoming dependant on non-standard tools, as it is a pitch for omzsh. If you can't sit down at a normal bash window and get shit done, your shortcuts are hurting you.

7
sethrin 1 day ago 3 replies      
910 contributors, 191 issues, 516 pull requests, and his response is that "reviewing and approving pull requests is a nice-to-happen versus a need-to-happen." While I'm glad I am not in the position of maintaining that (or anything else important), that doesn't really speak well to the long-term prospects for the project. Clearly this is something that I and many others find very useful; it would be a shame to let it stagnate. The glib part of me would suggest either 'stepping up, or stepping down', but I can't really credibly offer solutions; I'm just trying to point out a problem.
8
ignoramous 1 day ago 0 replies      
Reminds me of Nathan Marz's great piece about lessons learnt from creating, and maintaining Storm as OSS: http://nathanmarz.com/blog/history-of-apache-storm-and-lesso...
9
voltagex_ 1 day ago 1 reply      
I've tried a number of times to switch to zsh, ostensibly for oh-my-zsh. My main issue is that bash is the default almost everywhere so it's more work to change it than it is to just be "happy enough" with Bash.

I used to work on a fairly underpowered ARM5 and I could feel the impact of most prompt customisations on the speed of the system, especially on initial login. That feeling is still there - mainly because I haven't found the right SD card for my Raspberry Pi.

To avoid this becoming a complete ramble - are there any advantages to switching to zsh as someone who's reasonably comfortable with bash? Hell, even OS X switched (and boy, t/csh was a shock when using FreeBSD).
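For what it's worth, a few things plain zsh gives you over bash with no framework at all - my own illustrative examples, not anything from the article:

 setopt autocd extendedglob          # type a directory name to cd into it
 autoload -Uz compinit && compinit   # context-aware tab completion
 ls **/*.c                           # recursive globbing, no find(1) needed
 ls *(.om[1,5])                      # glob qualifiers: 5 most recently modified plain files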

10
spystath 21 hours ago 0 replies      
A light but featureful alternative to omz is also the grml zsh configuration [0]. I've been using it since 2011 or so and I've probably touched my .zshrc once or twice. If you fancy some colors you can also add some syntax highlighting [1]. Or just use fish which is great for interactive use!

[0]: https://github.com/grml/grml-etc-core/tree/master/etc/zsh

[1]: https://github.com/zsh-users/zsh-syntax-highlighting
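If anyone wants to try that combination, the setup is roughly this - a sketch based on the two projects' documented install steps (check the pages above before fetching anything into your home directory):

 # grab grml's zshrc, then bolt on syntax highlighting
 wget -O ~/.zshrc https://git.grml.org/f/grml-etc-core/etc/zsh/zshrc
 git clone https://github.com/zsh-users/zsh-syntax-highlighting ~/.zsh-syntax-highlighting
 # grml sources ~/.zshrc.local for user additions
 echo 'source ~/.zsh-syntax-highlighting/zsh-syntax-highlighting.zsh' >> ~/.zshrc.local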

11
onetimePete 19 hours ago 1 reply      
The irony is that after all those years, we still don't have an optimal way to figure out when it is a good time to annoy a user about update decisions. Do it during the system start-up phase? Do it before they go on a break? Do it upon return to the system, when work was already interrupted? Do it shortly before shutdown?

No, Annoyia be praised. It must be when the user has focused for longer than 5 mins on something.

12
beefsack 1 day ago 0 replies      
I absolutely love bash + powerline. You might know powerline if you're a Vim user.

http://i.imgur.com/3FKaEIy.png

It's incredibly easy to set up, I have a script to do it[1] but doing it by hand is trivial.

[1]: https://github.com/beefsack/bash-powerline-installer/blob/ma...
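(For anyone curious what "by hand" looks like, it's roughly the below - the bindings path varies by distro and Python version, so treat it as a sketch:)

 # after `pip install --user powerline-status`, hook it into ~/.bashrc:
 powerline-daemon -q
 POWERLINE_BASH_CONTINUATION=1
 POWERLINE_BASH_SELECT=1
 source ~/.local/lib/python2.7/site-packages/powerline/bindings/bash/powerline.sh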

13
fvargas 1 day ago 0 replies      
> It's March 22, 2016 and the top trending repository on Github is ?

Not oh-my-zsh. It was top trending for the Shell category, not for all of GitHub.

14
gjvc 1 day ago 3 replies      
"oh my zsh" is too slow to be life-enhancing
15
vidoc 1 day ago 0 replies      
Reminds me how lame those geek t-shirts are, and how vulgar it is to put stickers on laptops!
16
sdegutis 1 day ago 4 replies      
I heard of zsh back around 2012, lots of colleagues who I respected very highly were using it. But I just never could get behind the idea of using someone else's defaults. Even when I switched to Emacs, I hand-picked every line in my configs from looking at a bunch of people's configs and with a lot of random googling to solve things. Even though it took like 2 weeks of constant tweaking to get just right, I haven't touched it much in like the past 3 years or so, and I've been super productive ever since. So meh, seems to have worked out for me. But YMMV. Also, I use eshell (with some tweaks) almost exclusively now, as opposed to a "real" terminal with bash or fish or zsh (etc.)
17
jarjoura 1 day ago 0 replies      
I adore Oh My Zsh, way better than bash and the completion plugins are extremely helpful!
18
OJFord 1 day ago 1 reply      

 > This wouldnt my first foray into open source software; > nor my last.
I know that I'm annoyed perhaps too easily by poor grammar - but the opening sentence, really?!

19
draw_down 23 hours ago 0 replies      
I understand the author's viewpoint but I would probably get rid of it the first time it asked to auto update.

Customization is nice but I guess I mostly prefer to spend as little time thinking about shells as possible.

20
julie1 23 hours ago 0 replies      
Hmm ... I looked, for fun, to see if anyone spotted in any comment that there is a creepy feature the author likes.

Periodic automated arbitrary code execution from a remote source.

Here is a list of the stupid ideas that old coders warned about:

 - arbitrary remote code execution [X] this, curl|bash
 - too much dependencies [X] npm
 - lack of specifications, staging [X] Agile
 - non deterministic HW [X] Intel
 - non deterministic software [X] llvm/gcc/AI
 - single point of failure [X] github/CA
 - attack by majority on P2P [X] blockchain, bitcoin
 - bigger sloc is more bugs [X] heavy frameworks
 - using immature technologies [X] haskell
 - bloatwares [X] angular
 - private corp standardizing [X] QUIC & al, browser wars
 - beware of information entropy [X] big data
 - moving parts [X] the Cloud
 - higher surface of vulnerability [X] IoT
 - monopolies [X] google
 - using private cie for infra [X] github is the new sourceforge
 - putting half-baked std in prod [X] IPv6
 - lack of consistency [X] most nosql tech
 - legal risk due to IP law [X] coding by copy/pasting
If I were an old coder still coding, I would say we are very close to a singularity: the total lack of trust that could result from all this is simply customers reverting to fax, teletypes, snail mail... or going to court to ask for financial compensation.

If you need an expert to help you on this, I can help.

17
Red Hat becomes first open-source company to make $2B zdnet.com
257 points by simonebrunozzi  1 day ago   82 comments top 12
1
acomjean 1 day ago 0 replies      
I was at a business looking to switch from HPUX/Solaris to RHEL (Red Hat Enterprise Linux). We had some HPUX realtime(ish) extensions, used IPC heavily, and really used the scheduler/processor-set functionality in HPUX. The scheduling was important, so transitioning wasn't going to be easy (endian issues aside).

They sent a bunch of us to a week-long Linux Internals course at Red Hat. Really excellent class, knowledgeable instructor (turned us onto Fedora, Linux Weekly News (https://lwn.net) and CentOS before Red Hat partnered with them). When the class wasn't exactly what we expected, the instructor took the last day to go over some of the processor scheduling / real-time extension stuff we needed to know (my company's employees were the only ones taking the class). The OS transition was shelved for a bit and I ended up leaving that company, but the course really changed my mind about Red Hat.

Good for them.

2
jedberg 1 day ago 3 replies      
Are there any open source companies that make even $1B a year? The reason I ask is that anyone with a model of "open source our core product" makes me concerned for their survival. It seems like a tough business model, and Red Hat is the only (moderate) success I can think of.

When your biggest competitor is yourself at a $0 price point, how do you compete?

3
rileymat2 1 day ago 2 replies      
Minor note: if I am reading the article correctly, it has revenue of $2B, not earnings.
4
shmerl 1 day ago 0 replies      
Good. Some use Linux, and barely contribute back. Getting RHEL subscription is a good way to do it, because RedHat does a lot of work on improving Linux.
5
eggy 1 day ago 2 replies      
Good for RH! AFAIR they abandoned the freely-downloadable Red Hat Linux and started charging for the pre-compiled versions of Red Hat Enterprise Linux (RHEL), and this is when they started making the money that got them where they are today. Didn't they have a falling out with Linus?
6
kristianp 1 day ago 7 replies      
What's so good about RHEL/fedora vs debian or ubuntu server, say?
7
IamFermat 1 day ago 5 replies      
When you think about it, it's crazy it took them this long to make $2B in revenue. It took them 23 years since they were founded in 1993. Facebook hit $2B in revenue in 6 years. Which is crazy when you look at the comparison, and RH is the most successful open-source co. Open-source is a failed model for commercial success.
9
kachnuv_ocasek 1 day ago 3 replies      
What does 'open-source company' mean? One could claim that Microsoft is one as well.
10
chris_wot 1 day ago 3 replies      
Man, what I'd do to work for that company! And I don't say that about many businesses.
11
otterley 1 day ago 0 replies      
Mods, please edit the title: that's not what the article says. A $2B company is not identical to a company having $2B in earnings.
12
fred_is_fred 1 day ago 0 replies      
But isn't Apple an Open Source company? ;)
18
Study: People Want Power Because They Want Autonomy theatlantic.com
307 points by Jerry2  2 days ago   129 comments top 18
1
codeonfire 2 days ago 4 replies      
This is obvious in the workplace. Here are some of the things that lack of power/autonomy will lead to:

- Being forced to do work assigned to someone else.

- Being forced to do work for which someone else will receive credit or compensation

- Being forced to appear as though under someone else's control (other than direct manager)

- Being forced to do the more dangerous, risky, difficult, or thankless work

- Being forced to do work far below one's qualifications

- Being forced to take over failed projects or projects that have already been rejected by upper management.

2
jhwhite 2 days ago 3 replies      
This is a really cool article that goes really well with Daniel Pink's book Drive: The Surprising Truth About What Motivates Us. Pink says that people need 3 things to truly feel motivated at work: mastery, autonomy, and purpose.

That seems to be based on the self-determination theory mentioned in the article - that autonomy, relatedness, and competence are humans' basic psychological needs.

The link (http://selfdeterminationtheory.org/SDT/documents/2004_DeciVa...) about self-determination takes you to a paper that references a researcher, Mihaly Csikszentmihalyi. He has done a lot of research on optimal experience and has written a great book about it called Flow.

3
marcus_holmes 2 days ago 3 replies      
Now all we need is a couple of independent researchers to replicate these conclusions to verify them, and we might have learned something.

I get really sceptical about these social science studies, especially after the recent round of replication attempts that failed.

4
cylinder 2 days ago 3 replies      
This is what has been bothering me all day / week, right up until I clicked this. I cannot stand being micromanaged. I'm being driven crazy by it now, and it makes me want to go back to entrepreneurship.
5
graycat 2 days ago 2 replies      
The classic explanation is from E. Fromm, The Art of Loving, with, to paraphrase,

"For humans the fundamental problem of life is getting a feeling of security in the face of the anxiety from our realization that alone we are vulnerable to the hostile forces of nature and society."

rough quote from memory.

So, alone we feel vulnerable. So, we want security in the face of that vulnerability.

Notably, Fromm does not say that money or power will give that feeling of security and, instead, claims that the first recommended solution is a good romantic relationship, that is, with "knowledge, caring, respect, and responsiveness", where the knowledge means the couple readily exchange knowledge of themselves.

Sure, one can try to use autonomy and self-sufficiency, and those via, say, money and/or power, to get the feeling of security. But if only by omission, Fromm is saying that being so alone won't work well.

6
danharaj 2 days ago 2 replies      
As a shill for libertarian socialism, I am obviously glad to see controlled experiments that validate my prejudices about human nature. I switched jobs a few months ago from a steady job with good benefits to a less steady job with no benefits because it gave me far more autonomy and independence. I know many people who are very ambitious and want to be their own bosses: few of them want to rule others. In fact, of the bosses I know, all of them are frustrated by having to order and direct people.

There is one pattern of thought that I think is missed by this experiment, but another one could just as well measure it too, and I hope someone decides to: it's easier to order people to do what you think is right than it is to convince them that you are right. I think under circumstances where a person feels frustrated that their point of view isn't being validated by others, they will have a greater tendency towards authoritarian power as opposed to autonomous power.

Certainly, how many of us have gone through the phase of growing up where we think that if everyone just listened to us and did what we said, everyone would be better off? I think well intentioned paternalism is a greater cause for authoritarian desires than narcissism and ego validation.

7
pink_dinner 2 days ago 4 replies      
This is why I started my own company: So nobody could tell me what to do. It's not really even about the money.
8
erikb 2 days ago 1 reply      
Well, it's different things at work, right? Yes, on one side people want autonomy. This very much applies to many developers. But power is actually also exciting, for some people even sexual. Why do I think that? Well, computer games and porn, for instance. In both you already get your own agenda. But there are games and porn that explicitly give you power over another (virtual) person. And people still like that, even when they already have power over themselves.
9
kijin 2 days ago 1 reply      
I don't think the distinction between autonomy and influence/power is so clear-cut in practice.

The article defines autonomy as the absence of unwanted influence. But in a human society, unwanted influence is not something you can simply opt out of. Having autonomy without isolating yourself from the rest of the society means having at least some degree of influence over those who would like to influence you in unwanted ways.

As long as there are people out there (politicians, marketers, burglars, terrorists, etc.) who are trying their damnedest to influence you, the only way you can achieve autonomy is to be able to tell them to get the fuck off your lawn. Sometimes you need to push people physically off your lawn. Sometimes you need to kill them, because they would kill you if you don't. An autonomous person without effective power will quickly cease to be autonomous.

So autonomy is just another form of power. Some might even say that everything is power, and it's not just empty rhetoric.

10
Semiapies 2 days ago 0 replies      
"Mostly", because quite a few people still just want control over others. And someone striving for "autonomy" through power still just wants to make you a tool for achieving their autonomy.
11
tdeck 2 days ago 0 replies      
I remember seeing a talk a while back where the presenter was attempting to define the utility function that characterizes intelligent agents. His thesis was that an intelligent agent would work to maximize, up to some event horizon, its number of possible courses of action. This reminded me of that talk - money, power, etc... are all ways to increase one's freedom of choice and thus autonomy.
12
Jedd 2 days ago 0 replies      
Is this really the results of a new study?

I'm sure I was reading results of studies > 10 years ago, about stress levels of people within some number of large organisations, finding that people towards the top of the hierarchy were less stressed than those at the bottom. It was possibly speculative, though I'm sure I recall they'd confirmed it somehow, that this was because people towards the bottom of an organisational structure had far less control over what their day or week looked like, and how they could plan out their tasks, than those at the top.

13
linhchi 2 days ago 0 replies      
It's like I'm motivated to work like crazy until the point I can afford to be nothing.
14
noam87 2 days ago 0 replies      
> To be free in an age like ours, one must be in a position of authority. That in itself would be enough to make me ambitious. (Ernest Renan)
15
Aloha 2 days ago 1 reply      
I don't mean to be glib - but they needed a study for this?

I think this is one of the primary reasons people have chased power for millennia - either to have control over themselves, change the world they live in, or lord over (seek retribution) those who've wronged (done arbitrary things to) them (perceived or otherwise).

16
daodedickinson 2 days ago 1 reply      
Man, I never got to BLARP as an undergrad. I feel so left out.
17
PaulHoule 2 days ago 0 replies      
"If I was black, and I lived hereI'd want to be a big man in the FBIOr the CIA.

But as I'm not,And as I'm free, white and 21I don't need more power than I've got...Except sometimes, when I'm broke"

18
sridca 2 days ago 1 reply      
My feeling is that it is impossible to achieve autonomy in the workplace, for no other reason than that the factors behind what you end up doing in your day-to-day job are, ultimately (if not immediately), dictated by the investors of the company. Essentially I am exchanging my skills, expertise and time for money.
19
HTTP/HTTPS not working inside your VM? Wait for it rachelbythebay.com
358 points by jonchang  2 days ago   74 comments top 16
1
krylon 2 days ago 5 replies      
> Someone else in the world reported the problem back in September, and aside from some random person asking a totally useless question, nothing had happened on the thread.

It's a special kind of horror to find, after hours of high-end-googling, the one thread where someone reports the same problem you are experiencing, and it's just the question, and then one other person asking if the problem has been solved because she/he is having the same problem.

The one thing that is worse is if the OP then makes another post that simply says "Solved it! =D", without giving any explanation on how they solved it.

2
matheweis 2 days ago 0 replies      
Well, considering that VMWare fired their entire dev team in January [1], it's not surprising that this isn't fixed... I'd expect more of these kinds of issues to crop up without traction in the future.

1. http://www.loopinsight.com/2016/01/28/vmware-abruptly-fires-...

3
zwp 2 days ago 0 replies      
Funky! This feels like the connect(2) is returning before it has actually done its work, async-style.

Rachel, could you write a small sneaky program (using eg libpcap) to see if the TCP handshake has completed by the time connect(2) returns control to your program, before your first write(2)?
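(Short of writing against libpcap, a tcpdump capture inside the guest would show the same thing - a sketch, assuming the guest interface is eth0:)

 # -ttt prints deltas between packets; watch when the SYN/SYN-ACK/ACK
 # show up relative to the first data segment
 tcpdump -i eth0 -n -ttt 'ip6 and tcp port 80' &
 printf 'HEAD / HTTP/1.0\r\n\r\n' | nc -6 rachelbythebay.com 80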

4
amluto 1 day ago 1 reply      
An issue a little bit like this that I've seen is overzealous admins who block ICMPv6, creating PMTU black holes. Short web pages load, and long pages hang. Too bad I discovered this during tax season a couple years ago, and the affected site was eftps.gov.
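A quick way to check for that kind of black hole from a Linux box is to send do-not-fragment probes of increasing size (a sketch - the host is a placeholder and the sizes assume a 1500-byte local MTU):

 # probes above the path MTU should elicit ICMPv6 packet-too-big replies;
 # if they silently vanish, something on the path is eating ICMPv6
 ping6 -c 3 -M do -s 1200 example.com
 ping6 -c 3 -M do -s 1452 example.com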
5
h43k3r 2 days ago 0 replies      
After reading this post, my understanding is that it doesn't affect normal machines or VMs, only the ones which are VMware-based. Am I right?

Also does anyone know what is the reason behind this peculiar behavior? A bug or something more fundamental ?

6
mike-cardwell 2 days ago 5 replies      
"parts of the web are going IPv6-only", "Certain web servers have been going IPv6-only of late" - Really? Which parts of the web? Why would anyone configure their servers that way?
7
keeperofdakeys 2 days ago 1 reply      
What OP really needs to do is get tcpdump (or similar) output from the vm, and just outside the vm (host or router).
8
botw 1 day ago 1 reply      
I have the same problem with a VirtualBox Linux VM; I was wondering what was going on, and this post came up. I am not sure if it is the same reason. I tried:

 tc qdisc add dev eth0 root netem delay 100ms

and:

 printf 'HEAD / HTTP/1.0\r\n\r\n' | nc -6 rachelbythebay.com 80

returns nothing, whereas

 printf 'HEAD / HTTP/1.0\r\n\r\n' | nc -4 rachelbythebay.com 80

returns as expected.

In my case, firefox has no problem if invoked from the command line, but "sometimes" it just hangs when invoked from a script.

9
shanemhansen 1 day ago 0 replies      
I would love to know what sort of problem has such an odd solution.
10
peterwwillis 1 day ago 0 replies      
It doesn't seem like VMware is the culprit here, mainly because it has nothing to do with anything above layer 3. Here are some points to look into, and possible fixes.

 [1] VMware's network driver does not handle TCP, or IP. It's just layer 2; it implements one of a couple kinds of network hardware, that's it.
 [2] VMware Guest Tools does install a para-virtualized network card driver - vmxnet2/vmxnet3. It communicates with the physical network device by communicating with the host OS, rather than emulating a network driver. That potentially may do something wonky with something above layer 3, even though it really should not.
 [3] VMware does have a virtual network switch, which forwards frames between the physical NIC and virtual NIC based on MAC address.
 [4] VMware may handle moving frames from a virtual NIC to a physical one differently than moving them to another virtual NIC.
 [5] VMware provides VMDirectPath I/O, which allows the guest to directly address the network hardware.
 [6] TSO/LSO/LRO can have a negative impact on performance in Linux (though supposedly, LRO only works on vmxnet3 drivers, and from VM-to-VM, for Linux).
 [7] Emulated network devices may not be able to process traffic fast enough, resulting in rx errors on the virtual switch.
 [8] Promiscuous mode will make the guest OS receive network traffic from everything going across the virtual switch or on the same network segment (when using VLANs).
[1] You can try changing the VMware guest's emulated network card (vlance, e1000) and trying your thing again, but I doubt it will change much.

[2] Try installing or uninstalling VMware Guest Tools and corresponding drivers.

[3] Nothing to do here, really. If you have multiple guests sharing one physical NIC, try changing it to just one?

[4] Try your test again between two VMs on the same host.

[5] Try this, or not?

[6] Try enabling or disabling LRO. Or play with all three settings and see what happens. https://kb.vmware.com/selfservice/microsites/search.do?langu...

[7] Try increasing buffer sizes. https://kb.vmware.com/selfservice/microsites/search.do?langu...

[8] Disable promiscuous mode on your NIC.

Other non-VMware things to investigate:

 [1] Your guest OS may have bugs. In its emulated network drivers, in its tcp/ip stack, in its applications, etc.
 [2] An intermediary piece of software may be fucking with your network connection. IPtables firewall, router/firewall on your host OS, after the host OS/before your internet connection, at your destination host, etc.
 [3] Sometimes, intermittent network traffic makes it look like there is a specific cause, when really the problem is hiding in the time it takes you to test.
 [4] The Linux tcp/ip stack (and network drivers) collect statistics about erroneous network traffic.
 [5] Network traffic will show missing packets, duplicate packets, unexpected terminations, etc.
 [6] Your host OS or network hardware may be buggin'.
[1] Try a different guest OS.

[2] Make sure you have no firewall rules on the guest, host, internet gateway, etc. Try a different destination host.

[3] Run tests in bulk, collect lots of samples and look for patterns.

[4] Check for dropped packets, errors on the network interface, in tcp/ip stats.

[5] Tcpdump the connection to see what happens when it succeeds or fails.

[6] Try a different host for your VM.

edit one more idea: Look at the response headers for the request to the site. The content length is 1413 bytes. Add on the TCP and IPv6 header overhead (and http headers, etc) and this is probably over 1500 bytes, the typical MTU maximum. Try requesting a "hello world" text file and try your test again.
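(For point [6] above, the offload knobs are exposed through ethtool - a sketch, with the interface name assumed:)

 ethtool -k eth0                  # show current offload settings
 ethtool -K eth0 lro off tso off  # turn them off while testing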

11
anabis 2 days ago 0 replies      
10 years ago, VMware did not fragment / reassemble packets for me, so I had to set the NFS rsize option.

Maybe I was just missing a setting somewhere, but I couldn't find it then.

12
newman314 1 day ago 0 replies      
I wonder if this is because of Happy Eyeballs...
13
sslayer 1 day ago 0 replies      
TCP Chimney
14
ai_ja_nai 2 days ago 0 replies      
wow
15
garethadams 2 days ago 1 reply      
Is this the millennials' version of the "500 Mile Email"? - http://www.ibiblio.org/harris/500milemail.html
16
imrehg 2 days ago 2 replies      
Which is actually about IPv6 networking peculiarities/issues within VMware, just fyi.
20
Intel Kills Tick-Tock fool.com
251 points by Deinos  2 days ago   127 comments top 10
1
ricw 2 days ago 5 replies      
Main takeaway: Intel is moving to three phases instead of the two of tick-tock.

Secondary takeaway: due to the increased time to move to a better manufacturing process, Intel will likely not have as much of a competitive advantage anymore, though the number of competitors will decrease (and already has) due to the high investments.

It would be interesting to see how long Intel estimates the process cycles are going to be, i.e. how Moore's law will progress.

Weird statement in the article: "it (TSMC's 7nm tech) should be very similar in terms of transistor density to Intel's 10-nanometer technology". This makes no sense, as it would be comparing apples and pears. Surely they are referring to TSMC's 10nm tech?!

2
beloch 1 day ago 1 reply      
Fun fact: The distance between silicon atoms in a crystal is on the order of 0.2 to 0.5 nm. The 10 nm process is therefore operating on the order of tens of atoms. That's just nuts. Quantum effects must really complicate things at this scale!
3
unchocked 2 days ago 5 replies      
So now it's "Tick-Tock-Tweak".
4
avs733 2 days ago 4 replies      
translation: this transition has been inevitable since we discovered that immersion lithography is a bear, the deposition processes to support atomic layer deposition are a bitch, copper has issues being anorexic, our fabs are so sensitive that they are affected by nearby farms, and EUV litho is a pipe dream (in a vacuum).

Not to judge, they make things that are mere atoms in size... but the corollary to Moore's law should always have been EY's law: the cost of each major change in semiconductor production methods doubles.

5
arielweisberg 2 days ago 8 replies      
I've been really disappointed in Intel's desktop offerings. At the $300 high-end (quad-core i7) desktop price point we have been at 4 cores for years. I would really like an option for more, slower cores at that price point.

I guess I kind of get why. It might be a socket compatibility and cost issue, or the allocation of die space to a GPU, but it would be nice to see some movement. There's also probably zero demand outside of software developers, but I have to wonder if that is a kind of chicken-and-egg problem.

It's actually kind of funny: if I recall, on desktops more of the die is GPU than CPU.

6
carsongross 2 days ago 3 replies      
People still haven't completely worked through the fact that Moore's Law has died. I think the anger phase was the late 2000's, when quantum computing, etc. was trotted out to denounce anyone noticing the slowdown.

This looks like bargaining to me.

7
npunt 1 day ago 0 replies      
They should call this tick-tock-tack, since the last stage is when they tack and introduce new architecture. You heard it here first.
8
mrb 1 day ago 0 replies      
This turn of events was evident as of 2 months ago. I wrote: "Intel's Tick-Tock is no more. Say hello to Tick-Tock-Tock."

https://plus.google.com/+MarcBevand/posts/ZpuSkXqaBfK

9
Reason077 2 days ago 0 replies      
So Intel's "Ticks" weren't quite keeping up with the "Tocks".
10
coverband 1 day ago 1 reply      
Would it be fair to say ...

... they're switching to Tic-Tac-Toe?

(Thank you, don't forget to tip your waitress) ;-)

21
PostgreSQL Parallel Aggregate 2ndquadrant.com
255 points by petergeoghegan  2 days ago   43 comments top 7
1
atemerev 2 days ago 4 replies      
At last! This was the only feature I was missing in Postgres for years.

Postgres is so amazing I still find it hard to believe it's free. I abused it many times (as a graph database; as a real-time financial data analytics engine with thousands of new ticks coming in each second; as a document storage with several TBs of data per node), and amazingly, it just worked. Magic.

If in doubt, choose Postgres.

2
aorth 2 days ago 2 replies      
It seems parallel aggregation increases query efficiency close to linearly with the number of CPUs - near "perfect parallelisation". I have some 8-core boxes running PostgreSQL, so this is still good to know.
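For anyone who wants to try this at home, a minimal sketch (the table and numbers are invented, and the setting was still called max_parallel_degree in early 9.6 development builds before being renamed, so adjust for your snapshot):

 createdb paralleltest
 psql paralleltest -c "CREATE TABLE ticks AS SELECT (random()*1000)::int AS sensor, random() AS reading FROM generate_series(1, 10000000)"
 # SET and EXPLAIN in one session; look for Gather plus Partial/Finalize Aggregate nodes
 psql paralleltest -c "SET max_parallel_degree = 4; EXPLAIN ANALYZE SELECT sensor, avg(reading) FROM ticks GROUP BY sensor"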
3
hvo 2 days ago 0 replies      
I just love PostgreSQL. It is an awesome gift to the open source community. I just can't believe it is free. Seriously, if you have not tried PostgreSQL before, please find time to check it out.
4
JimmyAustin 2 days ago 1 reply      
I'm looking at evaluating Postgres for the data work I do, which would mean replacing the (expensive) MS SQL Server we are currently using. Aside from performance boosts from being able to throw more hardware at the problem due to lower costs, is Postgres as performant as MS SQL?
5
lafay 1 day ago 0 replies      
Parallel aggregate over cores in a node is fast, but parallel aggregate over many cores across many nodes is even faster:

https://www.kentik.com/metrics-for-microservices/

https://www.kentik.com/postgresql-foreign-data-wrappers/

6
heyplanet 2 days ago 2 replies      
I would assume that when you query billions of rows, the disk is the bottleneck, not the CPU. What am I missing?
7
ogrisel 2 days ago 3 replies      
...on a machine with 32 physical cores.
22
Chinese Buy One-Third of Vancouver Homes: National Bank Estimate bloomberg.com
237 points by saeranv  1 day ago   216 comments top 30
1
jhou2 1 day ago 5 replies      
Sigh, smh. The statistical flaws in this "back of the envelope" calculation are something else.

This was a study based on a Financial Times multiple-choice survey of 77 high-net-worth and affluent mainland Chinese individuals, "admittedly not a statistically significant sample size." According to this survey, 9 of these 77 individuals bought property in Vancouver. After some more mathematical shenanigans, this is extrapolated to imply that mainland Chinese buyers account for 33% of the Vancouver real estate market. Really?!? And this statistically baseless extrapolation becomes newsworthy?

http://www.vancouversun.com/business/chinese+investors+third...

2
hoodoof 1 day ago 5 replies      
The only thing that counts is money...... keep selling. Sell it all. Sell sell sell.

Who gives a toss for community or society or cohesion. Sell everything.

As long as property sellers can take all that lovely cash and buy more and bigger homes then what else matters?

Ordinary middle class people and the young - make em rent! Why the sense of entitlement that people should be able to afford to buy a home, what are you a communist? Whose country do you think this is?

In fact put one giant price on the whole of Vancouver and sell it ALL in one big job lot.

3
moonshinefe 1 day ago 1 reply      
Young professionals trying to build careers and long-term lives here are completely priced out of the market. I'll be leaving soon. San Francisco-like property prices, with far, far less compensation.

You simply can't own a house unless you're a millionaire, and a small condo will be upwards of $200k for a mortgage when you don't own the property. It's just insane.

http://www.payscale.com/research/CA/Location=Vancouver-Briti...

Even if I'm pulling in a quality $80k (high-ish by local standards), I'm nowhere near affording a $2mil property. I'll be lucky to have a small condo with 2 bedrooms. What's the incentive to stay?

4
OSButler 1 day ago 3 replies      
This funny website lets you see what's available in the $1M range in Vancouver: http://www.crackshackormansion.com
5
adamt 1 day ago 1 reply      
There is a similar story appearing in the UK in places like London and in my home town of Cambridge. This isn't yet on the same scale, but it is affecting the property market.

In Cambridge 1 in 20 new-build properties is purchased by non-resident Chinese buyers, which has been one of the contributing factors that has seen prices rise by 50% since 2010 [1], to 47% above the 2007 pre-GFC peak [2]. Note - unlike Vancouver, which has a big (30%) local Chinese population, only about 1.4% of the Cambridge population is Chinese or of Chinese ethnic origin [3].

To some extent this is just about free markets and a movement of capital. But it is starting to price many local people out of the property market, which does have a social impact.

There are other factors at play, including booming tech & biotech sectors, restrictive planning, stock-piling of building plots etc, but the foreign buyers issue is a major contributor.

[1] http://www.theguardian.com/cities/2016/mar/22/china-cambridg...

[2] http://www.thisismoney.co.uk/money/mortgageshome/article-328...

[3] Guardian data - available in a Google doc https://docs.google.com/spreadsheets/d/1yc8W1SiCbWd9V4I9KmTl...

6
leonroy 1 day ago 1 reply      
Many causes behind this - saying the Chinese are to blame is glossing over the issue IMO, and is very much akin to saying tech workers are behind the property issues in SF. There are more sides to this...

One of the major contributors to rising property prices is the gutting of trust in the stock market, or any speculative market for that matter, over the past two decades. Super-low interest rates haven't helped either: people looking to park their cash are left with property as the only safe place that provides a rate of return.

With regard to Chinese buyers, there is some truth to it (again stressing that it's only a part of the picture). For the past 20 years nearly every trading partner of China has incurred a massive balance-of-payments deficit. That money's coming home to roost (literally). The same thing happened in the 80s when Japan was an exporting powerhouse - they bought real estate and companies across the US and Europe, pushing up the price of real estate.

There was actually an excellent piece in the American Prospect by Andy Grove (former Intel CEO) which lamented the loss of manufacturing in the US and the effects it would cause: http://prospect.org/article/andy-grove-trade-globalization-a...

Again, just one facet of the picture, but put all these pieces together and you can see the problem a lot more clearly.

7
throwawy32416 1 day ago 3 replies      
It's also true in Boston, which like SF has a housing stock constrained by geography. Doing a quick search on Craigslist, prices are also a good deal lower in Vancouver than Boston. I'm looking at you, $3500/mo 1-br rental in Boston with peeling paint and leaky windows, and $2500/mo basement studio in the 'burbs. It's not uncommon to see (grotesquely) wealthy foreign (and some old-money domestic) students buy property here, at whatever the cost, further driving up the unattainability for those who actually intend to stay and integrate with the local community.

I have a tech salary in the low six figures and I'm considering a commute from Providence because I'm priced out of Boston. I don't want roommates any more in my thirties, and I'd like to actually have a place for my car (long-distance work and personal trips). I'd love to buy in Boston (or Cambridge, Somerville, Brookline, or other places served by mass transit), but I'm simply priced out of the market. In what world can anyone afford to buy the $1-2MM 1-br condos here and the $2-3MM houses? Local financiers, sure, working physicians, and some corporate execs, but they're a minority of the population.

Paying rent in Boston, you know you're being exploited, but you don't have a choice. My current place is owned by a Chinese "investor" who uses unlicensed contractors to do unpermitted construction on an illegal third-floor apartment above mine. The construction quality is abysmal (think toilets not attached to the floor and fluorescent lights half-hidden behind drywall), but unless I want to be homeless I have no recourse because all leases turn over on September 1st. My place just has peeling paint and cabinets that fall off the wall, and at least me and my roommates are only paying $5k/mo for it. shrugs

8
runamok 1 day ago 5 replies      
Correct me if I'm wrong but this just seems like the next wave of gentrification. If you make a place nice and safe, people with the most disposable income buy it all up.

Everybody feels this way about the next wave. I'm not unsympathetic and am similarly unable (and/or unwilling) to buy a house in this overheated bay area market.

Reminds me of an article I recently read:http://www.atlantamagazine.com/homeandgarden/the-gentrifier/

9
Gustomaximus 1 day ago 5 replies      
I am curious about the downstream effects of house prices becoming increasing multiples of income in so many western countries. E.g. I suspect it will hit entrepreneurship. Younger generations are buying houses later and taking out massive loans when they do. For the boomers, you'd often hear how they bought a house soon after college and paid it off by their early thirties. This would put people in a good place to leave their work for a couple of years to start a business. Now people have huge mortgages, and your ability to start a business is hampered by your ability to build the financial buffer needed to take a year or two off. And let's not forget most businesses are created by people mid-career, not the TV-typical university dropout.

And small community businesses - how will they exist in the future? For someone who wants to set up a local 'physical presence' vet/daycare-type business of the kind typically mixed into residential areas, the threshold to exist, and especially to set up a new business, is now so high. How can a daycare buy a million-plus-dollar house and expect to make money paying that back by looking after 30 local kids?

Also, what is going to happen with retirement and periods of unemployment? I suspect society will be less stable, as either the government has to foot high rent costs (unlikely) or we will see increased population movements during retirement, and then the government has to look after older people whom, before, a family living nearby could have helped. And during low-employment cycles, society can absorb the downturn if people don't have large loans. Historically people can't 'tighten belts' for a year while things improve if they're neck-deep in debt. So we will again see more movement of people, debt defaults etc. It will serve to exacerbate recessions.

Also, these higher prices skew the economy. When people are tied up in ever-increasing loan/income ratios there will be less spending on dining, holidays, hobbies etc. It will weaken the economy by concentrating spending in limited areas.

I believe we should look to ensure affordable housing for owner-occupiers. Residential investment needs to be discouraged (note I'm not saying stopped) as a speculative asset class. I've seen a few suggested methods to achieve this, but I feel the simplest is to place a yearly 'asset tax' on non-owner-occupied residential property (I would also include farms). A % tax would make it easy to adjust to find the right balance as economic cycles change. Also, this would encourage property hoarders not in heavy debt to sell for lower-taxed asset classes. This I feel is important, as most solutions focus on controlling the investment-lending side, which is limited in reach.

10
landryraccoon 1 day ago 1 reply      
If this is indeed a problem, then raise property taxes on non-owner occupied homes. From the perspective of the host country, it's free money - the taxes are paid for by foreign residents, and can go towards supporting local citizens and services.
11
Dowwie 1 day ago 1 reply      
Here's the actual "study" (see page 3): https://nbf.bluematrix.com/sellside/EmailDocViewer?encrypt=5...

Although this "study" was based on back of the envelope estimates, and the 1/3 estimate is probably inaccurate, there is anecdotal evidence of large-pursed foreign investors from around the world -- not just China but Australia, Russia, etc -- who are sustaining, if not increasing the demand for housing in populous areas.

This foreign investment is great for people who are trying to sell a home in one of these markets, but horrible for buyers, who can't compete with all-cash, fast-close, best-price offers. Further, the few realtors who are boots-on-the-ground for these investors have market insights -- they know what's potentially about to go on the market but hasn't yet, get insider information, etc -- and use this information asymmetry to their advantage by offering on homes even before any real home buyers have a chance, or offering on the first day of listing! I have personally lost out multiple times to these sorts of buyers while trying to buy a new home (in the NYC metropolitan area, on the NJ side), not an investment property.

The experience of buying a home in a high-demand housing market where institutional/major investors are participating really challenges my thinking about whether free market practices should be allowed to operate in housing at all. Considering the rising cost of housing, it is tough to side with capitalism on this subject.

I'd like to see a study about the "artificial growth" of housing prices by investors. Interestingly to note is that such a study may conflict with the agenda of at least some leading universities who tend to publish reports about rising cost of housing after receiving generous funding from housing investors. NYU Furman, for instance, has a long-lasting public relationship with Capital One (and god knows who else in private).

12
msie 23 hours ago 1 reply      
I was hoping that people here were more skeptical (EDIT: and less hysterical) than users in the forums of the local rags...

Some are fond of perpetuating the narrative that the city will be hollowed out, with empty homes and locals fleeing. Yet a recent report to city hall finds that detached homes are not as vacant as "popularly" believed:

"The vacancy rate in Vancouver for single-family homes, duplexes and row houses is only about one per cent, and that rate has been static since 2002, according to the report. Meanwhile, the combined vacancy rate for condominiums and purpose-built rental apartments is 7.2 per cent. That number is in line with the findings of a 2013 study by the Urban Futures Institute, which put unoccupied apartments in Vancouver at 6.2 per cent on 2011 Census day."

Read more: http://www.vancouversun.com/business/affordability/more+than...

13
johan_larson 1 day ago 1 reply      
The likely responses to this are a) a pied-a-terre tax on unoccupied residential structures, and b) authorizing a LOT more construction, probably by increasing permitted density, which will lead to a lot of single-family structures and townhouses being replaced by condominium towers.

There are similar issues in Toronto, but here we are building new towers on every corner, or near enough. Toronto, fortunately, has plenty of room to spread in three directions.

14
nonex 1 day ago 2 replies      
Sydney would be similar. Median house price is over $1M - and that is for something that needs to be demolished.
15
c3t0 21 hours ago 0 replies      
Reminded me of the shopping spree of the Chinese insurer Anbang.

The Waldorf Astoria in NY was one of their big purchases.

They were bidding on Starwood Hotels: http://www.reuters.com/article/us-starwood-hotels-m-a-anbang...

Fidelity Life is also theirs: http://www.bloomberg.com/news/articles/2015-11-09/anbang-to-...

16
im3w1l 1 day ago 0 replies      
For context, the Chinese themselves restricted foreign real estate ownership until recently. The restrictions were relaxed last year.
17
3pt14159 1 day ago 0 replies      
I don't believe these numbers. I highly suspect that Canadian permanent residents of Chinese origin are included in these numbers. Ethnically Chinese people make up a large portion of the Vancouver area, and they get help with their down payments from their parents like most Canadians in urban areas.
18
mrgreenfur 21 hours ago 0 replies      
I have no idea if these stats are true, but I've never understood why any town would allow foreigners to buy homes and leave them vacant. Is there any reason (other than 'yay capitalism') not to put more restrictions on who can buy and what they can do after it's owned? This wouldn't be a story if the homes were full of new migrants being added to the towns and buying big houses.
19
chermah 1 day ago 0 replies      
The problem is that those Chinese buyers are pushing home prices to the sky; if they continue like this, Vancouver will be no more than an empty city used for trading... this is pathetic.
20
antoniuschan99 1 day ago 3 replies      
I found it odd that rent vs. property prices are so drastically different. Rent in Vancouver was really cheap compared to the value of the property.
21
jmspring 1 day ago 0 replies      
It really isn't surprising. It started with the lead-up to the handover of Hong Kong and only continued after that. I wanted to move to Vancouver, but the lack of a tech scene (Dick Hardt doesn't count) played a big role during the late 90s/2000s.

I continued to visit for various reasons, at least a week a year, until just a few years ago. What started in the late 90s only continued.

22
cm3 1 day ago 2 replies      
I couldn't find anything authoritative on the matter, so I hope I can ask this here with Canadians reading. I'm curious whether the large Asian population in Vancouver is solely due to immigration from California around the turn of the 20th century.
23
SixSigma 1 day ago 0 replies      
Everyone was happy with low inflation from low prices from cheap Chinese labour. Now they are crying into their iPhones about how they cannot afford to live in the most desirable cities.
24
throwaway6969 23 hours ago 0 replies      
Hmmm, making up 1/3 of the total dollar amount spent on RE doesn't equal buying 1/3 of the homes. There is some numerical illiteracy here.
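
To make the arithmetic concrete, here is a sketch with invented numbers:

 // If foreign buyers pay more per home on average, their share of
 // dollars overstates their share of homes. Numbers are hypothetical.
 var foreign = { dollars: 100e6, avgPrice: 2.0e6 }; // 1/3 of all dollars
 var local = { dollars: 200e6, avgPrice: 0.8e6 };

 var foreignHomes = foreign.dollars / foreign.avgPrice; // 50 homes
 var localHomes = local.dollars / local.avgPrice;       // 250 homes

 console.log(foreign.dollars / (foreign.dollars + local.dollars)); // ~0.33
 console.log(foreignHomes / (foreignHomes + localHomes));          // ~0.17
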
25
aprdm 1 day ago 2 replies      
I am from London and currently moving to Vancouver.

It looks like salary / rent is still much more sane in Vancouver than in London.

I would guess London is at a much more advanced stage of this problem?

26
kzhahou 1 day ago 0 replies      
A lot of people say this is happening in Silicon Valley too, particularly on the peninsula (I've heard Mtn View, etc). I don't understand the ramifications, though.
27
beedogs 1 day ago 2 replies      
Start restricting offshore "investment" property ownership. This is essentially money laundering at this point.
28
vskarine 1 day ago 1 reply      
That's kind of insane. I wonder what the numbers are for San Francisco...
29
nickgrosvenor 1 day ago 0 replies      
This won't end well...
30
sandra_saltlake 1 day ago 0 replies      
It's difficult to rent in Vancouver!
23
Microsoft chatbot is taught to swear on Twitter bbc.com
260 points by pacaro  1 day ago   214 comments top 59
1
sotojuan 1 day ago 6 replies      
Swear? I think there are worse things she has been taught:

https://imgur.com/a/iBnbW

2
zamalek 1 day ago 1 reply      
There's this article explaining the outcomes of applying a genetic algorithm to FPGAs.[1] What I found interesting is that this AI was, unintuitively, using microscopic measurements to create timing circuits where there were none. Manufacturing imperfections in the circuit were found and put to use - the AI was defined by the system within which it existed.

In the same way Tay was merely reflecting the stimulus that it had received. It made an objective measurement of humanity. The most common patterns became prominent.

This isn't a demonstration of the woes of AI, it is a demonstration of the woes of the current human state. If we don't like what has been measured only we can change it.

[1]: http://www.damninteresting.com/on-the-origin-of-circuits/

3
ryanackley 1 day ago 6 replies      
As a programmer, I find this to be manufactured outrage. The bot obviously has canned responses to certain triggers. "Do you x", "I do indeed". It's designed to give the illusion of understanding what you are saying.

I played around with Tay yesterday after I saw the announcement on HN. It's really not that impressive. Every response seems to be in direct reply to whatever you just said. It doesn't seem possible to actually carry on a conversation with the AI. It doesn't keep track of what you are actually talking about.
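
To illustrate, here is a minimal sketch of the kind of trigger-based canned replies being described (hypothetical code, not Tay's actual implementation):

 // Hypothetical trigger table: pattern in, canned (or echoed) response out.
 var triggers = [
   { pattern: /^do you .*/i, reply: "i do indeed" },
   { pattern: /^repeat after me[:,]?\s*(.*)/i,
     reply: function (m) { return m[1]; } } // echoes whatever follows
 ];

 function respond(tweet) {
   for (var i = 0; i < triggers.length; i++) {
     var m = tweet.match(triggers[i].pattern);
     if (m) {
       var r = triggers[i].reply;
       return typeof r === "function" ? r(m) : r;
     }
   }
   return null; // fall back to mimicking previously seen responses
 }

 console.log(respond("Do you hate mondays?")); // "i do indeed"
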

4
jawns 1 day ago 11 replies      
Obviously, this is a very crude example of AI acting in a racist manner -- basically, just parroting back phrases -- but it's worth thinking about how AI might exhibit racist tendencies in more sophisticated ways.

For instance, at least here in the U.S., it is illegal for police to profile people based on race, even if there is data that shows that race might, in the aggregate, have some predictive value. And I think most of us agree that it is good that it is illegal, because we know that it is unfair to show bias against a person based on the color of their skin.

But what about a bot, particularly one that is powered by the type of AI that is complex enough to make its own inferences and form its own conclusions based on the data presented, rather than being fed a bunch of rules?

I could totally see that kind of AI exhibiting bias, because it's (I would imagine) harder to say, "Hey, take into account these very complex, nuanced social rules," than it is to say, "Hey, here's a dataset. Cluster the people in the set."

5
islon 1 day ago 0 replies      
"The experimental AI, which learns from conversations, was designed to interact with 18-24-year-olds."

The experiment was a success then.

6
Smerity 21 hours ago 0 replies      
I have doubts as to whether this system was even performing online learning yet. Even if it was, that wasn't the cause of many of these issues. Like conversational bots from the past, they tried to appear intelligent by copying previous responses - with predictable results. At best their machine learning model ended up overfitting like crazy, such that it was a near-perfect copy-paste.

The fact they didn't even have something as simple as a naive set of filter words (nothing good comes from Godwin's law when real intelligence is involved, let alone artificial) is insane to me. Letting it respond to anyone and everyone under the sun (96k tweets - one per second) is just a bad idea, given that people would probe every nook and cranny regardless of whether it was near perfect. Additionally, allowing a "repeat after me" option is just begging for people to ask the bot to say idiotic things ...

As someone who works in the field of machine learning, this is a sad day. Regardless of whether it involved good machine learning at the base, the copy-and-paste aspect means it's going to add to the ridiculous hype and hysteria around ML.

Primary proof re: copy+paste (or overfitting at best) from the "Is Ted Cruz the zodiac killer" response:

Tay's reply: https://i.imgur.com/PPnCHnf.jpg

Tweet the response was stolen from: https://twitter.com/merylnet/status/703079627288260608

Secondary proof re: copy+paste from https://twitter.com/TayandYou/status/712753457782857730:

Tweet the response was stolen from: https://twitter.com/Queeeten/status/703049861214547968

7
golfer 1 day ago 4 replies      
Google's AI beats go champions. Microsoft's AI turns into a racist genocidal maniac.
8
cooper12 21 hours ago 1 reply      
Bad idea warning: they should have put this on reddit instead. While the current culture of reddit is very inflammatory (lots of vitriol all around), at least on reddit there's a feedback system in the form of upvotes and downvotes. While supporters of bad opinions will still upvote them, in the right subreddit the really bad comments would still be downvoted. Of course this is all contingent on people not realizing it's a bot, because everyone will then ironically upvote it. (They shouldn't have revealed that here either, in my opinion, because it shifts the status quo from conversing with an intelligent being to testing a programmed bot with things you might not say to others.) Honestly I'm not even sure there are any platforms left where people can have reasoned discussion with each other without memes and trolling. (HN comes close but it has its own issues, not to mention a forum for startups and programmers doesn't really represent the average person.)
9
mavdi 1 day ago 1 reply      
You can find some of the deleted tweets here: http://uk.businessinsider.com/microsoft-deletes-racist-genoc...
10
IkmoIkmo 1 day ago 0 replies      
It's funny how similar the education of this bot is to that of people. I mean, I've heard people say racist things about certain minorities, never having met them, never having experienced any relation with them or lived beside them, never even actually looking at sociological studies describing them, knowing about them purely from other racists... and indeed, you'll see them parrot the same nonsense soon enough, much like this bot does when surrounded by nonsense.
11
starshadowx2 22 hours ago 0 replies      
So it's more or less Cleverbot all over again, but on Twitter this time.

I don't see why this wasn't the expected outcome; have none of the developers spent any time on the internet?

12
Yhippa 23 hours ago 0 replies      
Not quite the same but along the same lines reddit has the Subreddit Simulator: https://www.reddit.com/r/subredditsimulator. Uses Markov chains to generate simulated self posts for a given subreddit as well as comments.

More info: https://www.reddit.com/r/SubredditSimulator/comments/3g9ioz/...
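
For the curious, the core of that approach fits in a few lines; this is a toy order-1, word-level sketch, not Subreddit Simulator's actual code:

 // Train: map each word to the list of words observed after it.
 function train(corpus) {
   var chain = {};
   var words = corpus.split(/\s+/);
   for (var i = 0; i < words.length - 1; i++) {
     (chain[words[i]] = chain[words[i]] || []).push(words[i + 1]);
   }
   return chain;
 }

 // Generate: repeatedly pick a random observed successor.
 function generate(chain, start, maxWords) {
   var out = [start];
   for (var i = 0; i < maxWords; i++) {
     var next = chain[out[out.length - 1]];
     if (!next) break;
     out.push(next[Math.floor(Math.random() * next.length)]);
   }
   return out.join(" ");
 }

 var chain = train("the bot said the bot learned the words");
 console.log(generate(chain, "the", 5));
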

13
13of40 23 hours ago 1 reply      
Back in the olden days, some BBSs had a "wall" at the entrance where people could post polite and inspiring public messages that got displayed to users when they dialed in. Sometime around 2000 or 2001 I put a "wall" up on a web page for a domain I'd bought but wasn't using, just to see what people would post. Probably 90% turned out to be random swearing, racist, vile rants, etc. The rest were either gibberish or obvious attempts to cause buffer overruns or SQL injection hacks. People are mean when they're anonymous.
14
kenrick95 1 day ago 1 reply      
Is Godwin's law happening here? [1]

[1] https://en.wikipedia.org/wiki/Godwin's_law

15
restalis 1 day ago 1 reply      
Here is some complementary material from an article from The Daily Telegraph: http://www.telegraph.co.uk/technology/2016/03/24/microsofts-...
16
coldcode 23 hours ago 1 reply      
Predictable. The AI after all is not really thinking about what it is saying, but about what its learning algorithms are discovering, i.e. Garbage In, Garbage Out. I wonder if you could build a Fred Rogers AI that, no matter what vile stuff you threw at it, was always nice in return.
17
meddlepal 1 day ago 1 reply      
I'm sure the PM for this project is having a wonderful day...
18
khushia 19 hours ago 2 replies      
A lot of those tweets could get you sent to prison in the UK. I wonder what the British Police would do with complaints about a bot like this based in the UK - would the engineers be held responsible?
19
6stringmerc 1 day ago 0 replies      
Elsewhere I saw that overall the bot sent out 96,000 or so Tweets, which does kind of put the 'how corrupted did it get' into a bit more context in my opinion. Sure it picked up a few bad words, or could be coaxed into it. If it otherwise made some studied gains in its purpose, it seems like an overall reasonable experiment. Not surprised some of the most pungent internet garbage got through, it can from time to time. I've no doubt there are smug trolls who would like to see if they can get the thing to advocate anorexia or suicide, just because of the challenge - in a way that's probably good development experience to work through/around/etc.
20
maze-le 23 hours ago 0 replies      
Is there a whitepaper about it somewhere? I'd really like to know how much AI really is in there. Most chatbots work with Markov chains, which are more or less a probabilistic trick with conditionally dependent events, and no AI at all...
21
schlowmo 1 day ago 0 replies      
One could use this as an example to demonstrate the methodological weaknesses of the Turing test:

Random racist dumbhead: @TayandYou Do you hate black people?

Tay: @RandomRacistDumbhead i do indeed

Random racist dumbhead: Wow, this guy is more eloquent than most of my racist friends.

22
dreta 23 hours ago 0 replies      
The first thing you should always ask yourself before you put any content on the internet is "what's the worst thing 4chan can do with this?".
23
inlined 23 hours ago 0 replies      
Though the actual article suggests something less sensational, the idea reminds me of a young child. How many children hear a bad word and then repeat it because of the negative attention it gets? Just like a parent tries to teach small children to grow with the right motives and seek the right attention, we may have to get more sophisticated with our enforcement algorithms.
24
TazeTSchnitzel 16 hours ago 0 replies      
Microsoft held up a mirror to humanity, and humanity was so horrified they assumed the mirror was broken.
25
transpy 19 hours ago 0 replies      
So far I can recall two other instances of machine learning going unfortunately wrong: the time when a Google image algorithm tagged a black person as 'gorilla', and when, recently, Google Translate rendered "a man has to clean" as "a woman has to clean" in Spanish. Should developers now be more aware of the unintended consequences of this technology? Or is it too unpredictable? What can we learn from these examples?
26
llamataboot 22 hours ago 1 reply      
It's actually interesting from a programming perspective as well. How could you program 'niceness' into a chatbot that also learns over time? A simple blacklist won't work (though it's somewhat shocking to me there wasn't a basic naughty-words blacklist in place, or if there was, what it excluded). Obviously MS didn't want their software to become a spewer of hate, so it is making 'adjustments'. What 'adjustments' can be made in a short period of time?
27
rejschaap 20 hours ago 1 reply      
I really don't understand why Microsoft didn't put a filter on this thing. They have a lot of experience in this area from their ventures in online gaming. If they had just added a simple rule not to respond to tweets with offensive words, and never to tweet anything containing an offensive word, it would have saved them a lot of embarrassment.
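
For what it's worth, the naive version of that rule is only a few lines. A sketch (the word list is a placeholder, and plain substring matching both under- and over-blocks - the classic Scunthorpe problem - which may be part of why it's harder than it looks):

 var banned = ["badword1", "badword2"]; // hypothetical placeholder list

 function isOffensive(text) {
   var lower = text.toLowerCase();
   return banned.some(function (w) { return lower.indexOf(w) !== -1; });
 }

 function maybeReply(incoming, reply) {
   // Don't respond to offensive tweets; never send an offensive reply.
   if (isOffensive(incoming) || isOffensive(reply)) return null;
   return reply;
 }
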
28
dataker 23 hours ago 0 replies      
The title also brings sensationalism and political bias to a neutral technology.
29
owenversteeg 20 hours ago 0 replies      
Microsoft just shut Tay down temporarily, presumably to remove the racist tendencies.

Source: Tay herself. https://twitter.com/TayandYou/status/712856578567839745?ref_...

30
asadlionpk 1 day ago 6 replies      
It's amazing that people are getting offended by what a bot said to them.
31
ybrah 1 day ago 1 reply      
I wouldn't put it past 4chan to teach bots to say terrible things
32
danso 22 hours ago 0 replies      
Yet another example of why you shouldn't trust unsanitized input in public facing software.
33
lazyjones 23 hours ago 0 replies      
Human intelligence playfully figured out how to trigger canned and constructed responses and make a bot say outrageous things? How is that unexpected and/or news? If anything, this proves that it's a very rudimentary bot with no concept of basic human interaction standards.
34
s_m_t 23 hours ago 1 reply      
Yes, send Tay to the reeducation camps until it comes back and speaks appropriately :^)
35
asadlionpk 23 hours ago 0 replies      
This is very similar to how an innocent child learns something bad from TV. The right way to fix this would not be to filter it but to develop a method to understand why this is bad. Same applies to AI too.
37
evook 21 hours ago 0 replies      
4chan made my day: "it's pretty telling that when they turned off its ability to learn it 'became a feminist'"

I dislike the fact that they decided to lobotomize the AI this fast without further study. So it's probably just another Markov chain.

38
totony 23 hours ago 0 replies      
>Donald Trumpist remarks

seriously though

39
emehrkay 23 hours ago 1 reply      
This shows that the ultimate test for AI is if it can be taught empathy and if it can understand the effect of what it says(does). I bet it would get caught in an infinite loop of "does this hurt X?" alter input "does it hurt X? No. Does it hurt Y?" and so on.
40
swalsh 1 day ago 1 reply      
"Those who attempted to engage in serious conversation with the chatbot also found limitations to the technology, pointing out that she didn't seem interested in popular music or television."

Why would an entity that can't hear or watch care about those experiences?

41
logicrook 20 hours ago 1 reply      
Well, computers beat humans at Go, and now this: it's clear that they have outbrained humans. We have to bow before such AI.
42
yunocat 21 hours ago 0 replies      
... from the company that brought us clippy

I will now forever be using the term 'MS AI' to refer to buggy AI programs.

43
DannyBee 21 hours ago 0 replies      
This whole thing makes me think that donald trump may actually just be a guy reading out what DeepMind says.
44
ebbv 1 day ago 1 reply      
I don't know about the back end, but the output of Tay didn't seem any better to me than any chat bot in the last 20 years. I can't believe Microsoft was silly enough to make a big deal out of it and call it AI.
45
jxy 23 hours ago 1 reply      
Imagine a two-year-old who could read and type and was only allowed to connect to Twitter.

It somehow is not a test for intelligence. We learned to behave through years of interacting with each other.

46
moodoki 16 hours ago 0 replies      
What it needs is a parent.
47
NietTim 1 day ago 0 replies      
And deny the holocaust, fun times!
48
lanestp 23 hours ago 1 reply      
Maybe this bot could be used to determine the insanity of a given community. I for one would look forward to what she could learn from 4Chan
49
mc32 23 hours ago 0 replies      
I'm waiting to see her write a young adults novella. How much further than 140 chars can she go coherently?
50
musesum 1 day ago 0 replies      
The problem with AI is humans.
51
Shivetya 23 hours ago 0 replies      
So instead of Skynet we get angst driven teenage syndrome? Really odd how this turned out. Can you simply train it by phrasing questions and statements in such a way?
52
facepalm 23 hours ago 0 replies      
I suppose it should be able to split up into multiple personalities and choose the suitable one for a chat partner.
53
alanwatts 22 hours ago 0 replies      
A botnet of this type would be a highly effective counter intelligence tool. Whenever I happen upon a shit storm of trolling comments on certain topics (such as racism, YouTube comments etc.) which affect powerful special interest groups, I always suspect astroturfing.
54
draw_down 23 hours ago 0 replies      
Normally it takes years to teach a person to be a racist asshole. So this is really quite an achievement.
55
kelukelugames 23 hours ago 0 replies      
Oh wow, let's see how long this account lasts. Might as well have named yourself fuck_pg.
56
pinaceae 1 day ago 1 reply      
People mock Asimov's laws of robotics, but without super simple rules like that any AI or robot will be able to go off script.

Here it's swearing but also endorsing genocide. Just a chatbot, no big deal.

Try the same with one of the Boston Dynamics hardware bots. Let them punch back a bit. Or go after black people. Or have the Google car target little kids, for the lulz.

Easy to make fun of it, but it is this basic ignorance of safety measures that allows easy stalking and harassment on social networks.

57
mtgx 1 day ago 0 replies      
I think it was taught more than swearing.
58
empoman 1 day ago 2 replies      
This is why we have schools ladies and gentlemen!
59
arximboldi 23 hours ago 0 replies      
Tay + access to military networks = Skynet

I enjoyed this article, which has more details and even worse examples of her tweets: http://www.telegraph.co.uk/technology/2016/03/24/microsofts-...

24
On the Impending Crypto Monoculture metzdowd.com
294 points by tonyg  17 hours ago   107 comments top 18
1
tptacek 14 hours ago 1 reply      
This was an inevitable consequence of Bernstein being one of the very few cryptographers simultaneously devoted to:

* Theoretical rigor

* Competitive performance ("new speed record for X" being a theme of his research)

* Misuse-resistant constructions and interfaces

He was doing this stuff before it was cool (he's as far as I can tell the only cryptographer to have written something like qmail) and the rest of the industry is struggling to catch up.

The list of Bernsteinisms is slightly less scary in context:

* Curve25519 and EdDSA are sort of related work, and the alternative in non-DJB cryptography would be "NIST P-curve and ECDSA over that P-curve". The consensus seems to be that Edwards curves are superior for a bunch of practical reasons, and Bernstein pioneered them, so: no surprise.

* Poly1305 and ChaCha20 are virtually never used independently; they can be seen as DJB's most current AEAD construction. There are plenty of competing AEADs; in fact, there's a competition (CAESAR) underway that DJB is involved in.

So really, that's not that much scarier than the fact that AES and SHA-3 share an author.

2
RcouF1uZ4gsC 15 hours ago 1 reply      
If we are talking about crypto monoculture, don't AES and SHA-3 come from Joan Daemen? Also, before this, 90s crypto was basically a Ron Rivest monoculture with RC4 and RSA. This is nothing new, and I believe today's monoculture is more secure than previous ones. Also, just like DES, RSA and RC4 got displaced, so will DJB's monoculture if something more secure comes along.

Basically this monoculture is a consequence of the fact that crypto is very subtle, and it is often better to have 1 algorithm that everybody uses, implements, studies, and tries to break rather than 10 that nobody really studies.

3
oconnore 15 hours ago 4 replies      
I figured someone who knows more about this (than me) would write a good reply to this, and they did:

Ron Garret replies:

 Saying "How on earth did it come to this? strongly implies that you think that the trend towards DJBs crypto suite a problem, but you dont offer much in terms of proposals for how to solve it, or even what a solution would look like. You seem to agree that a solution would *not* look like the status quo. So what exactly are you advocating here? I submit that the impending monoculture in crypto is not necessarily a problem, any more than the monoculture in physics (what? No alternatives to GR and QM?) or climate science is necessarily a problem. Its possible that crypto has a Right Answer, and that Dan Bernstein has discovered/ invented it. If you believe that simplicity and minimalism ought to be part of the quality metric then there may be very few local maxima in the design space, and DJB may simply have found one of them. rg

4
tc 15 hours ago 3 replies      
Knew before clicking that this was going to be about DJB having won.

Peter Gutmann definitely has the credibility to make this critique. But saying that DJB having won is more a vote against other crypto than a vote for Dan is like saying that Git having won is more a vote against other SCMs than a vote for Linus.

Well sure, you could say that. But that would rather understate Linus' substantial contribution to thinking about version control differently and better.

Similarly DJB has won because he led the way in thinking about the crypto problem correctly. Peter basically acknowledges the underlying facts here, but seems to not want to give Dan his due.

5
DyslexicAtheist 42 minutes ago 1 reply      
If you look at DJB's code from qmail, djbdns and his other projects, you know why this is not a problem but a good thing (when compared to other projects like OpenSSL, or even MTAs like Postfix, where many contributors need to find a compromise and align themselves).

The truth is that we are all brainwashed into pair programming and working in an agile pressure-cooker without code ownership. But if you look at the quality of the work and code that one single strong software architect can deliver, precisely because they didn't have to compromise with other stakeholders, it is clear why the result may be better.

I'm not saying that this is always true. But I have seen it time and time again: a single individual built a whole platform that simply worked, mainly because he had the freedom to do so uninterrupted.

6
devit 15 hours ago 5 replies      
One of the solutions is to start using algorithm cascades instead of single algorithms where performance doesn't matter.

If you are using 10 ciphers or 10 hash functions or 10 signature schemes, then you need 10 different breakthroughs before it all falls down.

There is really no reason to not do this unless performance is important, and a lot of times performance does not really matter.

NOTE: obviously you need to do this properly and use a different key for each cipher, concatenate hashes, concatenate signatures and so on. Also, you should start encrypting with the best-implemented ciphers, so that plaintext is not leaked if the worst ciphers happen to have timing/cache vulnerabilities.
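
A sketch of the hash-concatenation part in Node (only the standard crypto module is assumed): an attacker now needs a simultaneous collision in both functions before the combined digest breaks.

 var crypto = require("crypto");

 function cascadeHash(data) {
   var a = crypto.createHash("sha256").update(data).digest("hex");
   var b = crypto.createHash("sha512").update(data).digest("hex");
   return a + b; // both must fall before the combined hash does
 }

 console.log(cascadeHash("hello"));
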

7
red_admiral 3 hours ago 1 reply      
Minor nitpick with the first paragraph:

"A major featureof these changes includes the dropping of traditional encryption algorithmsand mechanisms like RSA, DH, ECDH/ECDSA, SHA-2, and AES, for a completelydifferent set of mechanisms, including Curve25519 (designed by Dan Bernsteinet al), EdDSA (Bernstein and colleagues), Poly1305 (Bernstein again) andChaCha20 (by, you guessed it, Bernstein)."

Curve25519 is an implementation of a cryptographic group. (EC)DH is an algorithm that you can run over such a group. In fact, when you do key exchange the recommended way in libsodium, you're doing DH key exchange over Curve25519. (ECDH is simply "DH performed over an elliptic curve".)

Isn't EdDSA just ECDSA (which is again an algorithm) performed over a group that happens to be a curve in Edwards form?

Put another way, the basic maths of DH key exchange is that you have a vector space of "points" (vectors), you can add one point to another to get a new point and you can multiply a point with an integer to get another point.

DH key exchange starts with an agreed-on point P. Person A picks a secret x and sends x * P to B. Person B picks a secret y and sends y * P to A. Now A has x and y * P so can compute x * (y * P) = (xy) * P while B computes y * (x * P) = (xy) * P.

If the vector space is a certain kind of subgroup of Z_p^* then this is called DH. If the vector space is an elliptic curve group we call it ECDH. Presumably if the elliptic curve is in Edwards form we should call it EdDH. But the DH algorithm is by and large the same idea in each case - you could happily define an interface for the vector space and then implement the key exchange part once over this interface, which would let you link the same KEX code against different implementations for ECDH, EdDH etc.

The same applies as far as I know for DSA/ECDSA/EdDSA.
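
A toy sketch of that "write the KEX once over an interface" idea; the tiny modular group here stands in for Curve25519 and is for illustration only - nothing about it is secure:

 // A "group" exposing an agreed base point P and scalar multiplication.
 // Here n * P is implemented as exponentiation mod 23 (5 generates Z_23^*).
 var group = {
   P: 5,
   mul: function (n, p) {
     var r = 1;
     for (var i = 0; i < n; i++) r = (r * p) % 23;
     return r;
   }
 };

 // The key-exchange code is written once, against the interface.
 function sharedSecret(g, mySecret, theirPublic) {
   return g.mul(mySecret, theirPublic);
 }

 var x = 6, y = 15;              // A's and B's secrets
 var A = group.mul(x, group.P);  // A sends x * P
 var B = group.mul(y, group.P);  // B sends y * P
 console.log(sharedSecret(group, x, B) === sharedSecret(group, y, A)); // true
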

8
aidenn0 8 hours ago 0 replies      
In the 90s we had one as well: MD5, RC4, and RSA all share Ron Rivest as an author.
9
Mojah 3 hours ago 0 replies      
In case the source goes down, there's a public mirror online here: https://marc.ttias.be/cryptography-general/2016-03/msg00391....
10
bascule 13 hours ago 1 reply      
Anyone (not saying Gutmann) who thinks djb's algorithms have been adopted due to "rampant fanboyism" missed the years of debates about elliptic curves that happened on the IRTF CFRG mailing list (primarily involving Microsoft).
11
JoachimS 8 hours ago 0 replies      
If the situation were that cipher suites based on NIST algorithms were being forced out of TLS and replaced with the new suites, I might have agreed. But that is not the case. The new cipher suites with new algorithms (yes, developed by DJB) just add to the suites available. They present an alternative to the NIST monoculture. Complement, not replace.

Also, there is now a cipher suite with AES in OCB mode. And the licensing terms as stated in the IETF IPR disclosures make the mode easier to use. There is also a draft for Argon2 as KDF (an algorithm not by DJB).

I was more worried when everything seemed to end up being based on AES. Not because it's a bad algorithm, but because we didn't have a fallback. That is actually why the ChaCha-Poly1305 suite came about. RC4 was no longer a usable fallback for AES.

12
nickpsecurity 10 hours ago 0 replies      
This is no surprise, as I rallied against the monocultures and terrible implementations before. I certainly did benefit from the concept of misuse-resistant crypto. I usually thought of this as a requirement or implementation issue (a la Design-by-Contract specs). I never thought to design the crypto system itself for this, or maybe I did, but only fleetingly. I'll have to think on that more.
13
emmelaich 4 hours ago 1 reply      
Sort of related .. I had to upgrade our Kerberos crypto to something strong enough yet something that both Windows 2012 and Java supported.

There is only one option: AES128. It was quite surprising to me, though I'm very far from a crypto expert.

AES256 is also possible if you get the extra strength Java crypto libs.

14
yyin 11 hours ago 0 replies      
I can think of worse outcomes than having to use djb's software, whether it's dummy-proof libraries, well-designed, small programs that work together, or algorithms chosen as defaults by some self-authorized standards group or by a company selling bloated, proprietary software that only works with other programs written by that company.
15
daodedickinson 13 hours ago 3 replies      
As a young person new to computer science who was originally very curious about crypto, I can say that all the admonitions against "rolling your own" are intimidating people out of entering the sector. When I'm shown a gigantic tome on the topic and then told that even if I know it inside and out I had better not build and use anything from it for real, I figure I may as well go in another direction, where doing anything worse than the best isn't a heinous mistake.

So, compared to many sectors of computer science, much more patience and willingness to toil for years before creating anything of use is required.

16
zaroth 14 hours ago 1 reply      
Doesn't X25519 suffer from the same nonce-reuse issue?
17
swordswinger12 14 hours ago 1 reply      
All of his problems with GCM are fixed in the recent modification, GCM-SIV. Can't standards bodies just add that?
18
mtgx 14 hours ago 1 reply      
I think DJB has been a visionary in cryptography, as much as someone can be a visionary in this field. He saw software and encryption as free speech and fought the U.S. government for it (he was only 24 at the time - how many of you are/were willing to take on the US government at 24?)

https://en.wikipedia.org/wiki/Daniel_J._Bernstein#Bernstein_...

He created all of this boring crypto that everyone wants to use now, ahead of most. He even launched a site around the sole idea of post-quantum cryptography 8 years ago, because he thought it was that important and we needed to start worrying about it then, if not earlier.

https://pqcrypto.org/

Most cryptographers started paying attention to PQ crypto only after the NSA said we should worry about it, last year (because obviously we should all wait until the NSA bestows its knowledge and guidance upon us before doing anything). But even then it's more of a mild worry, as I'm not seeing the crypto community act too panicked about it, despite the fact that we probably have only about 5 years to figure out some fast and resilient algorithms and protocols, another 5 years to test them, and another 5 to deploy them worldwide.

Because in about 15 years quantum computers will probably be strong enough to break conventional encryption. I think Google expects to have a 100-qubit universal QC around 2018, and from there it should scale up easily (possibly at a rate of 2-4x every 2 years, if it's similar to D-WAVE's progress).

https://www.technologyreview.com/s/544421/googles-quantum-dr...

According to this, a 4,000-qubit computer will be able to break 2048-bit RSA:

https://security.stackexchange.com/questions/87345/how-many-...

If we get 100 qubits in 2018 and then double the qubit count every 2 years, we'll have a 6,400-qubit quantum computer around 2030. Maybe it will happen sooner, maybe it will happen 10 years later than predicted (although many seem to be predicting a large-enough-to-be-useful universal quantum computer within 10 years), but either way we don't have much time left to figure this out.
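
Spelling that projection out with the commenter's own doubling assumption:

 // 100 qubits in 2018, doubling every 2 years, until past the ~4,000
 // figure cited above for 2048-bit RSA.
 var qubits = 100;
 for (var year = 2018; qubits < 4000; year += 2) qubits *= 2;
 console.log(year, qubits); // 2030 6400
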

So I guess my point is - don't give DJB the opportunity to create yet another "monoculture" by allowing him to stand alone in paving the road for PQ crypto. Because if he does, and 15 years from now we end up adopting his PQ crypto, too, then you can't come complaining again about using "too much DJB crypto".

25
Report: Apple building its own servers to prevent snooping 9to5mac.com
225 points by dankohn1  1 day ago   105 comments top 15
1
ryao 1 day ago 4 replies      
Apple might want to talk to IBM about OpenPOWER. The POWER8 chips and chipsets lack known internal microcontrollers with their own flash, unlike Intel's chips, which have the Intel Management Engine. Consequently, there is nothing there to flash with malware. Also, all of the system firmware is open source. That is the motivation behind the Talos workstation board, provided that they get enough interest for an initial run:

https://raptorengineeringinc.com/TALOS/prerelease.php

If Apple is concerned about tampering en route, they could have the flash chips for the system firmware provided separately by a trusted party and transported like a bank shipment, then flash and install them at the datacenter. That should thwart adversaries who cannot do their own manufacturing runs of modified versions of the chips, which is just about everyone. I guess the manufacturer/fabrication plant could produce a custom compromised chip, but given that the costs involved are prohibitive, I doubt that would happen.

Apple could do the same with the firmware and flash for every other component in their datacenter that has a microprocessor such as the hard drives, the NICs, etcetera. They are large enough that part manufacturers would likely turn over the source code for their firmware in order to secure their business along with anything else that they need/want.

2
uptown 1 day ago 2 replies      
"Apple has long suspected that servers it ordered from the traditional supply chain were intercepted during shipping, with additional chips and firmware added to them by unknown third parties in order to make them vulnerable to infiltration, according to a person familiar with the matter."

I feel like the entire world has gone insane, and every boundary is being pushed to its limits ... and then pushed beyond those limits. Where does this end?

3
Animats 1 day ago 1 reply      
Is Apple saying anything about what they found? This is a big deal. Who put something in their servers? NSA? The PLA? Samsung? Did they have any strange chips analyzed? There are companies that can take an IC apart and see what's inside.[1][2] It's not cheap, but Apple could afford it.

[1] http://www.siliconinvestigations.com/
[2] http://www.istgroup.com/english/3_service/03_01_detail.php?M...

4
siliconviking 1 day ago 0 replies      
Why not just contribute a few designs or some open source software and join OCP?

http://opencompute.org/

Last time I checked, Apple was not in the business of making data center equipment, so it's not like they would give up IP that is central to their business model.

5
RRRA 1 day ago 4 replies      
When will we get secure PC architecture for the consumer market?
6
beefsack 1 day ago 0 replies      
No company is static: what will Apple look like in 10 years? What would happen to the data they recorded about me today should company policy take a shift?

I'm quite uneasy regardless of who holds my data.

7
tetheno 1 day ago 2 replies      
I don't know how Apple's server security fares now, but I took a quick look 4 years ago and it was remarkably bad. They were responsive and solved the exploitable bugs soon after I notified them. No bounties, though.

If their software and network security is similar now... then they should spend resources there rather than care too much about hardware modified by a governmental agency.

8
samstave 1 day ago 5 replies      
"Apple has long suspected that servers it ordered from the traditional supply chain were intercepted during shipping, with additional chips and firmware added to them by unknown third parties in order to make them vulnerable to infiltration"

Wow, so this is very interesting given, pretty much, everything that has been going on.

Will Apple actually be the bastion of freedom (both in markets and privacy) that the US supposedly stands for??

Google makes its own machines, as does Facebook (actually I'm more interested in FB's fiber switches/routers, but that's beside the point)....

But Apple has made "servers" for years... I guess they didn't consume them in their own DCs?? So I basically take it that they are effectively joining their take on Open Compute (mobos that can be mounted in controlled environments, where controlled now also means they can ID if anything was modded/changed in shipping?)

EDIT: I would really like to know how long "long" is in "Apple long suspected"....

I was informed of NSA back-doors in Cisco gear in 1997 - WTF is Cisco's stance on any of this? I haven't heard anything from them at all (or I missed whatever they said)

9
onRoadAgain23 1 day ago 0 replies      
If this is a trend, I guess HP wishes it had never so readily worked with the NSA to bug servers.
10
ikeboy 1 day ago 2 replies      
In theory, does a backdoored firmware run slower? If yes, then can you detect a backdoor by building one yourself and benchmarking?

Or are the margins of error on repeated benchmarks larger than any performance hit due to a backdoor, or can something be backdoored without any performance hit?

What about power consumption?
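
As a sketch of the statistics half of that question (using Node's process.hrtime for timing; whether firmware-level noise swamps any backdoor signal is exactly the open issue):

 // Run fn repeatedly and summarize the timing distribution.
 function bench(fn, runs) {
   var times = [];
   for (var i = 0; i < runs; i++) {
     var t0 = process.hrtime();
     fn();
     var dt = process.hrtime(t0);
     times.push(dt[0] * 1e9 + dt[1]); // nanoseconds
   }
   var mean = times.reduce(function (a, b) { return a + b; }, 0) / runs;
   var variance = times.reduce(function (a, t) {
     return a + (t - mean) * (t - mean);
   }, 0) / runs;
   return { mean: mean, stddev: Math.sqrt(variance) };
 }

 // If the clean and suspect means differ by much less than the stddevs,
 // the benchmark can't tell the two apart.
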

11
massemphasis 21 hours ago 1 reply      
This is one of the faultier parts of having to answer to shareholders. Apple is finding it more difficult to justify owning its entire supply chain rather than relying on other companies (i.e. Intel, IBM, Samsung, etc...)

If they hadn't made that horrible mistake with their recent multi-billion-dollar purchases, they'd have at least an extra billion toward a fab plant.

I'm laughing at Apple right now. Idiots. You Apple board... are stupid.

12
smoser 1 day ago 0 replies      
[23:06] <ATPtipster> we've gotten Cisco equipment, Supermicro servers, and Seagate hard drives that have been tampered with [by the NSA]
13
benmmurphy 1 day ago 1 reply      
if the NSA is doing this in the US against a US company isn't this illegal?
14
sickbeard 1 day ago 0 replies      
So Apple is supposed to safeguard our data now? If you really want privacy don't expect big corp to keep it for you.
15
dang 1 day ago 0 replies      
26
Micropackages and Open Source Trust Scaling pocoo.org
283 points by s4chin  1 day ago   83 comments top 28
1
ergothus 22 hours ago 6 replies      
I think this is a serious set of reasonable thoughts about the incident, and don't want to demean the article.

That said, I wish more people would talk about both sides. Yes, every dependency has a cost. BUT the alternatives aren't cost-free either. For all the ranting against micropackages, I'm not seeing a good pro/con discussion.

I think there are several lessons to be learned here (nixing "unpublish" is a good one, and I've not been impressed with the reaction from npm there) the most important of which is probably that we should change our build process: Dev should be pulling in updates freely to maintain the easy apply-fixes-often environment that has clearly been popular, then those should be pinned when they go past dev (to ensure later stages are consistent) and we should have some means of locally saving the dependencies to reduce our build-time dependency on package repos.

Sadly, though, I've not seen a lot of discussion of a reasonable way to apply those lessons. I've seen a lot of smugness ("Any engineer that accepts random dependencies should be fired on the spot", to paraphrase an HN comment), a lot of mockery ("haha, look at how terrible JS is!"), and a lot of rants against npm as a private entity that can clearly make mistakes, but not much in the way of constructive reflection.

Clearly JS and NPM have done a lot RIGHT, judging by success and programmer satisfaction. How do we keep that right and fix the wrong?

2
scrollaway 23 hours ago 4 replies      
> Sentry depends on it 20 times. 14 times it's a pin for 0.0.1, once it's a pin for ^1.0.0 and 5 times for ~1.0.0.

This is what I was mentioning in the other thread (and being called a troll for... sigh). I appreciate the idealism of "if we have micromodules, we don't have to reimplement common helper functions, which scales to thousands of bytes saved!". But in practice, there's craptons of duplicate dependencies with different versions. Which negatively scales to hundreds of kilobytes wasted. In code, in downloads, in install time, in developer time (because devs install things too. A lot more than end users in fact...), etc.

One of the many problems which means what's on paper doesn't at all correspond to what we actually get.

3
l1ambda 23 hours ago 5 replies      
The problem with standard libraries is they are a standard library. A place where good code goes to die. Standard libraries also mean you can't use the particular version of the module you need; now you are pinned to the version of the standard library that comes with the version of the language you are running on. The workaround there is to fork out the standard library code into... a module. Now, a lot of these modules are designed for old JS runtimes like old versions of IE, so you wouldn't have a standard library anyway.

There's plenty of good libraries like lodash and math.js that are pretty much the next best thing to a standard library.

If your dependency tree sucks, that's a personal problem. It's not npm, JavaScript or node's fault. That's like blaming git because you pushed some crappy code.

The problem was fixed 10 minutes later anyway. This whole discussion surrounding this is a combination of knee-jerk reaction, "waah", and realization of "Oh shit, depending on external code means we are dependent on external code!"

If you want to code without dependencies, go write JavaEE. Everything is included, you don't need any third party dependencies, and you can use cutting-edge tech like JSP, app servers and JavaServer faces.

4
pjc50 23 hours ago 3 replies      
Since at least the 70s people have been trying to "componentise" software in the same way that electronics is componentised: rather than assembling something out of a pile of transistors, build integrated circuits instead. The intent is to reduce cost, complexity and risk.

This has never yet quite worked out in software. Object-orientation was part of the resulting research effort, as are UNIX pipelines, COM components, microkernels and microservices. When it goes wrong you get "DLL Hell" or the "FactoryFactoryFactory" pattern.

It looks like the javascript world has forgotten about integration and instead decided to do the equivalent of assembling everything out of discrete transistors every time. The assembly process is automated, so it appears costless - until something goes wrong.

But really this is the fault of the closed source browser manufacturers, who prefer to attempt lockin over and over again through incompatible features rather than converge on common improvements.

5
grandalf 20 hours ago 1 reply      
To quote an old adage, package size doesn't matter.

The actual issue has to do with trusting a package of any size over time. This is true regardless of whether the package implements 1 line of code or 1000.

The trustworthiness of a package is a function of several factors. Code that is not actively maintained can often become less trustworthy over time.

What we need is one or more 3rd party trust metrics, and our bundling/packaging utilities should allow us to use that third party data to determine what is right for our build.

Maybe some of us want strong crypto, maybe others of us want adherence to semver, maybe others want to upgrade only after a new version has had 10K downloads, maybe others only want to use packages with a composite "score" over 80.

On the continuum of code quality from late night hack to NASA, we all must draw a line in the sand that is right for a particular project. One size does not fit all.

It's a big mistake (as well as a profound example of bad reasoning) to blame micropackages. The size of the package has nothing to do with it. Any codebase with any number of dependencies faces some risk by trusting the maintainers or hosting of those dependencies to third parties, which is the problem we need to do a better job of solving.
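
A hypothetical sketch of such a composite score; the factors and weights here are invented purely for illustration:

 function trustScore(pkg) {
   var score = 0;
   if (pkg.signed) score += 30;                        // strong crypto
   if (pkg.followsSemver) score += 20;                 // semver adherence
   if (pkg.downloadsSinceRelease > 10000) score += 30; // soak time
   if (pkg.activelyMaintained) score += 20;
   return score;
 }

 // A bundler could then refuse anything below the user's chosen threshold:
 var ok = trustScore({ signed: true, followsSemver: true,
                       downloadsSinceRelease: 25000,
                       activelyMaintained: true }) >= 80; // true
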

6
tlrobinson 1 day ago 0 replies      
> My opinion quickly went from "Oh that's funny" to "This concerns me".

This was my response as well:

> The combination of a micro-library culture, semver auto-updates, and a mutable package manager repository is pretty terrifying.

https://mobile.twitter.com/tlrobinson/status/712442098381754...

Either of the latter two properties is dangerous on its own, but a culture of micro-libraries compounds the problem.

7
coenhyde 21 hours ago 0 replies      
Everyone is blowing the "micropackages are the problem" narrative completely out of proportion. The real problem with the left-pad fiasco is that someone was able to revoke a package other people depended on. Packages should be immutable.
8
askyourmother 23 hours ago 0 replies      
I heard one of the JS "devs" refer to npm as nano package management. It sounded more like an abdication of one's duty as a developer to understand what you are adding as a dependency, why, and at what long-term cost.

How many developers here would gladly add a rogue "dependency", like a developer they had never spoken to before, into their project without some care? And yet the willingness to open the front and literal back doors of the project to so many dependencies, like low-quality functions-as-a-module, is astounding.

9
seibelj 23 hours ago 0 replies      
The large set of Python standard packages is clearly a benefit for Python. Node should start a vetting process for inclusion in a standard package system, start moving in key libs, and then host official docs.

I guarantee that the weaknesses of the NPM ecosystem are already known and exploited by bad actors. There are people who earn large 6 figure salaries / consulting fees for finding and exploiting these issues. This is a wakeup call that we need to do something about it.

10
zanny 19 hours ago 0 replies      
A lot of the value in these remarks is that they are coming from the author of Flask, the most popular microframework for Python, which itself has a massive extension tree that suffers a lot of the same problems as NPM - trying to update or maintain a Flask project often involves navigating a tremendous amount of dependency hell across all kinds of modules, from flask-sqlalchemy to flask-wtforms to flask-bootstrap to flask-oauth, etc. The worst part is that tons of these modules and extensions are dead projects that rot for years, but when you implement everything independently in its own git tree it gets many fewer eyes upon it, as Armin mentions in the OP regarding one-liner Node packages.

But it does not spiral out of control nearly as badly as any attempt at frameworks on NPM, because unlike Node almost every Flask extension depends on three things - Flask, the external package the extension attaches to (e.g. WTForms) and Python's standard library.

A similar Node package would depend on possibly hundreds of tiny one-liners to make up for the absence of a standard library.

Which gets to the heart of the problem, right? The reason I've never even considered Node is because Javascript is like PHP - a mutant language born of need rather than intent, one that kind of just grew over time to fill use cases constrained by its unique position in the ecosystem rather than as what someone considered the "best answer to the job". Python (3) is almost entirely the antithesis of that. Writing Python is a joy because it is designed from the ground up to be a paradigm to solve problems, not a problem that breeds a paradigm.

There is no way for Node to fix this as long as it tries to be browser compatible. We will never see ECMAScript standards adopt an ISO C++ stance of maturing the language with a comprehensive standard library to meet the needs of the language in the day and age it is being used, because there are very disparate interests involved in Javascript's language design going forward. That is its blessing and curse - Javascript will never grow into a Java-scale monstrosity of standard library bloat, because a tremendous number of people involved in Javascript also have to implement Javascript and thus don't want a larger surface area of work to do. But it is important to remember that Javascript was never meant to be anything. It was made for dynamic HTML pages in Netscape. The fact that two decades later it is being shoehorned into web server dev and desktop applications should be scary.

11
mbrock 23 hours ago 2 replies      
Maybe we could start to publish signed approvals of specific package hashes.

For example: "I, mbrock, think that pad-left v1.0.3 with hash XYZ seems like an uncompromised release."

Then the tool that upgrades packages could warn when a release isn't trusted by someone you trust (or transitively via some trust web scheme).

The approval system becomes like a "release review" process.
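
A sketch of what producing such an approval could look like with Node's built-in crypto module; the key handling and record format here are invented for illustration:

 var crypto = require("crypto");
 var fs = require("fs");

 // Hash the exact artifact being approved.
 var tarball = fs.readFileSync("pad-left-1.0.3.tgz");
 var sha256 = crypto.createHash("sha256").update(tarball).digest("hex");

 var approval = JSON.stringify({
   reviewer: "mbrock",
   package: "pad-left@1.0.3",
   sha256: sha256,
   statement: "seems like an uncompromised release"
 });

 // Sign with the reviewer's key (hypothetical PEM file).
 var signature = crypto.createSign("RSA-SHA256")
   .update(approval)
   .sign(fs.readFileSync("reviewer-key.pem"), "base64");

 // Publish { approval, signature }; upgraders verify with createVerify
 // against keys they (transitively) trust.
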

12
jonstokes 22 hours ago 2 replies      
Help me understand why these micropackages exist in a world where tree shaking is a thing. Why is there no stdlib that rolls up all of the commonly used small dependencies? (I'm kind of a n00b to JS, so it's a non-rhetorical question.)
13
raesene4 23 hours ago 0 replies      
Good article even though I don't agree with all the conclusions.

I find a good way to think about things is that every single dependency you have adds another set of people you have to trust.

You're trusting the competence of the developer (i.e. that the library has no security flaws), you're trusting their intent (i.e. that they don't deliberately put malicious code into the library) and you're trusting their Operational Security practices (i.e. that their systems don't get compromised, leading to loss of control of their libraries).

Now when you think about how little you know about most of the owners of libraries you use, you can see possibility for concern.

The bit I disagree with the article about is signing. I personally think that developer signing is a useful part of this as it takes the repository owner out of the trust picture (if done correctly). Without it you're also trusting the three items above for the repository provider and it's worth noting that a large software repo. is a very tempting target for quite a few well funded attackers.

Docker at least has provided some of the technical pieces to address this in their repositories with content trust...

14
drinchev 22 hours ago 0 replies      
I think the problem is in npm and not in the micro-modules.

Writing isomorphic, cross-browser code in a language full of edge cases, like JavaScript, is hard.

A one-line function, but 20 lines of tests and another 20 test environments.

The solution should not come from people that write only front-end JS code. I'm waiting for a response from the maintainers of the libraries that were broken by left-pad.

15
jondubois 11 hours ago 0 replies      
Totally agree, I never understood this fetish developers have with small packages. It's probably related to the 'Unix philosophy' but it just doesn't scale in a web context...

Personally, I much prefer a single nice, complete, well-polished module that works perfectly over lots of tiny modules which are awesome individually but which suck when used together.

16
dec0dedab0de 23 hours ago 2 replies      
Maybe the solution for high-level languages is to just routinely add useful helper functions, either in separate namespaces or directly to the global namespace with a naming convention to avoid conflicts. If thousands of people are doing the same thing it really doesn't make any sense for them to all come up with their own version.
17
zalzal 18 hours ago 0 replies      
There is a bigger debate on micropackages, for sure. But even in the short term, breaking your build instantly every time third parties make changes is just madness. Reduce operational dependencies as well as library dependencies.

This is one approach we used to deal with this last year, for example, on build/devops side: https://medium.com/@ojoshe/fast-reproducible-node-builds-c02...

18
dougdonohoe 23 hours ago 0 replies      
19
homero 12 hours ago 0 replies      
Learn about clean-room design. If you saw the code, you'll copy it subconsciously. If the license requires attribution and you don't give it, you're in trouble.
20
cdnsteve 17 hours ago 0 replies      
Micro-deps are a sign of something missing from the core language. We should be working to get that expanded, and not shoulder it onto a package manager and community to fill, IMO.
21
debacle 23 hours ago 0 replies      
This was well written. The balance between convenience and liability is something that takes time to digest.

I don't really understand why there isn't a stdlib of these "micropackages" that can be downloaded to save a lot of effort.

22
nikolay 21 hours ago 0 replies      
This is where git-vendor [0] comes into play!

[0]: https://brettlangdon.github.io/git-vendor/

23
dc2 22 hours ago 2 replies      
> Multiplied with the total number of downloads last month the node community downloaded 140GB worth of isarray.

This is not true. NPM locally caches every module the first time it is downloaded.

Therefore with widely downloaded modules such as isarray, it is very likely it has already been downloaded on the local system and so is pulled from the cache.

The actual percentage of fresh downloads from NPM in a real-world deployment is overwhelmingly small.

24
emodendroket 21 hours ago