hacker news with inline top comments    13 Feb 2017 Best
1
The web sucks if you have a slow connection danluu.com
1255 points by philbo  4 days ago   595 comments top 96
1
ikeboy 4 days ago 9 replies      
>When I was at Google, someone told me a story about a time that they completed a big optimization push only to find that measured page load times increased. When they dug into the data, they found that the reason load times had increased was that they got a lot more traffic from Africa after doing the optimizations. The team's product went from being unusable for people with slow connections to usable, which caused so many users with slow connections to start using the product that load times actually increased.
2
gabemart 4 days ago 16 replies      
Something I have had at the back of my mind for a long time: in 2017, what's the correct way to present optional resources that will improve the experience of users on fast/uncapped connections, but that user agents on slow/capped connections can safely ignore? Like hi-res hero images, or video backgrounds, etc.

Every time a similar question is posed on HN, someone says "If the assets aren't needed, don't serve them in the first place", but this i) is unrealistic, and ii) ignores the fact that while the typical HN user may like sparsely designed, text-orientated pages with few images, this is not at all true of users in different demographics. And in those demos, it's often not acceptable to degrade the experience of users on fast connections to accommodate users on slow connections.

So -- if I write a web page, and I want to include a large asset, but I want to indicate to user agents on slow/capped connections that they don't _need_ to download it, what approach should I take?
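
One partial answer today, sketched as an illustration rather than a standard: treat the heavy asset as a progressive enhancement and consult the Network Information API where the browser exposes it (support varies). The asset path and class name here are made up:

    // Load a large, optional asset only when the connection looks decent.
    // navigator.connection is a hint, not a guarantee; if absent, default to loading.
    var conn = navigator.connection || navigator.mozConnection || navigator.webkitConnection;
    var constrained = conn && (conn.saveData || /(^|-)2g$/.test(conn.effectiveType || ''));

    if (!constrained) {
      var img = new Image();
      img.src = '/assets/hero.jpg'; // hypothetical hero image
      img.onload = function () {
        document.querySelector('.hero').appendChild(img);
      };
    }

Since the asset is never referenced in the HTML itself, user agents on slow or capped connections simply never fetch it.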

3
Someone1234 4 days ago 11 replies      
I found this out the hard way.

T-Mobile used to offer 2G internet speeds internationally in 100+ countries, included in Simple Choice subscriptions. 2G is limited to 50 kbit/s, which is slower than a 56K modem.

While this is absolutely fine for background processes (e.g. notifications) and even checking your email, most websites never loaded at these speeds. Resources would time out, and the adverts alone could easily exceed a few megabytes. I even had a few websites block me because of my "ad blocker," because the adverts didn't load quickly enough.

Makes me feel for people in, say, rural India or other places still stuck at 2G or similar speeds. It is great for some things, but no longer really usable for general-purpose web browsing.

PS - T-Mobile now offers 3G speeds internationally; this was just the freebie at the time.

4
geforce 4 days ago 4 replies      
Sad thing is that most of the web sucks on rather fast connections too. Pages weighing almost 5 MB, making multiple dozens of requests for libraries and ads. Ads updating in the background, consuming ever more data.

I don't notice it much on my PC, since I've got an FTTH connection, but on LTE and 3G, it's very noticeable. Enough that I avoid certain websites. And that's nowhere near slow by his standards.

I do agree that everyone would benefit from slimmer websites.

5
etatoby 4 days ago 3 replies      
I design and write my company's framework, which other devs use to write websites and webapps.

I base my work on existing technologies (lately Laravel, which means Symfony, Gulp, and hundreds of other great libraries) but I always strive to:

1. Reduce the number of requests per page, ideally down to 1 combined and compressed CSS, 1 JS that contains all dependencies, 1 custom font with all the icons. Everything except HTML and AJAX should be cacheable forever and use versioned file naming.

2. Make the JS as optional as possible. I will go out of my way to make interface elements work with CSS only (including the button to slide the mobile menu, various kinds of tooltips, form widget styling, and so on.) Whenever something needs JS to work (such as picture cropping or JS popups) I'll make sure the website is usable and pretty, maybe with reduced functionality or a higher number of page loads, even if the JS fails to load or is turned off. Also, the single JS file should be loaded at the end of the body.

2b. As a corollary, the website should be usable and look good both when JS is turned off, and when it's turned on but still being loaded. This can be achieved with careful use of inline styles, short inline scripts, noscript tags, and so on.

3. Make the CSS dependency somewhat optional too. As a basic rule, the site should work in w3m, as pointed out above. Sections of HTML that make sense only when positioned by CSS should be placed at the end of the body.

I consider all of this common sense, but unfortunately not all devs seem to have the knowledge, skill, and/or time allowance to care for these things, because admittedly they only matter for < 1% of most websites' viewers. (A sketch of the versioned-naming idea from point 1 follows below.)
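
As a minimal sketch of the versioned file naming in point 1 (not the author's actual tooling), a Node script can stamp a content hash into the bundle name; the paths are illustrative, and fs.copyFileSync assumes Node 8.5+:

    // Hash the built bundle so it can be cached forever: any content change
    // produces a new file name, which busts the cache by itself.
    const crypto = require('crypto');
    const fs = require('fs');

    const src = 'dist/app.js'; // assumed build output
    const hash = crypto.createHash('md5')
      .update(fs.readFileSync(src))
      .digest('hex')
      .slice(0, 8);

    fs.copyFileSync(src, 'dist/app.' + hash + '.js'); // e.g. dist/app.3f2a9c1b.js
    console.log('reference dist/app.' + hash + '.js and serve with far-future caching');

The HTML then points at the hashed name, so everything except the HTML itself can be cached indefinitely.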

6
SwellJoe 4 days ago 2 replies      
I travel fulltime and my primary internet is 4G LTE. But, even though I spend $250 per month on data, I still run out, and end up throttled to 128kbps for the last couple days of the data cycle. The internet is pretty much unusable at that rate. I can leave my email downloading in Thunderbird for a couple of hours and that's usable (Gmail, however, is not very usable), and I can read Hacker News (but not the articles linked, in most cases). Reddit kinda works at those speeds. But nearly everything else on the web is too slow to even bother with. When I hit that rate cap, I usually consider it a forced break and take a walk, cook something elaborate, and watch a movie (on DVD) or play a game.

So, yeah, the internet has gotten really fat. A lot of it seems gratuitous...but, I'm guilty of it, too. If I need graphs or something, I reach for whatever library does everything I need and drop it in. Likewise, I start with a framework like Bootstrap, and some JavaScript stuff, and by the time all is said and done, I'm pulling a couple MB down just to draw the page. Even as browsers bring more stuff into core (making things we used to need libs for unnecessary) folks keep pushing forward and we keep throwing more libraries at the problem. And, well, that's probably necessary growing pains.

Maybe someday the bandwidth will catch up with the apps. I do wish more people building the web tested at slower speeds, though. Could probably save users on mobile networks a lot of time, even if we accept that dial-up just can't meaningfully participate in the modern web.

7
nommm-nommm 4 days ago 1 reply      
What has really baffled me lately is Chase's new website. They did a redesign maybe 6 months ago, to make it "more modern" or something, I guess.

Now the thing just loads and loads and loads and loads. And all I want to do is either view my statement/transactions or pay my bill! Or sometimes update my address or use rewards points. That's not complicated stuff. I open it up in a background tab and do other stuff in-between clicks to avoid excessively staring at a loading screen.

I just tried it out, going to chase.com with an empty cache took a full 16 seconds to load on my work computer and issued 96 requests to load 11MB. Why!?

I then log in. The next page (account overview) takes a full 32 seconds to load. Yep, half a minute to see my recent transactions and account balances. And I have two credit cards with zero recent transactions.

I am just baffled as to who signed off on it!! "This takes 30 seconds to load on a high speed connection, looks good, ship it."

8
iLoch 4 days ago 13 replies      
> Why shouldn't the web work with dialup or a dialup-like connection?

Because we have the capability to work beyond that capacity now in most cases. That's like asking "why shouldn't we allow horses on our highways?"

> Pretty much everything I consume online is plain text, even if it happens to be styled with images and fancy javascript.

No doubt, pretty much everyone who works on web apps for long enough understands that it's total madness. The cost, however, of supporting people so far behind that you can only serve them text is quite frankly unmanageable. The web has grown dramatically over the past 20 years, both in terms of physical scale and supported media types.

The web is becoming a platform delivery service for complex applications. Some people like to think of the web as just hypertext, and that everything on it should be human-parseable. For me, as someone who has come late to the game, it has never seemed that way. The web is where I go to do things: work, learn, consume, watch, play. It's a tool that allows me to access the interfaces I use in my daily life. I think there's a ton of value in this, perhaps more than as a platform for simply reading news and blogs.

I look forward to WebAssembly and other advancements that allow us to treat the web as we once treated desktop environments, at the expense of human readability. It doesn't mean we need to abandon older + simpler protocols, because they too serve a purpose. But to stop technological advancement in order to appease the lowest common denominator seems silly to me.

9
diggan 4 days ago 8 replies      
Something sticks out looking at the table: how can some sites simply FAIL to load? There is something inherently wrong with our web today when my internet is very slow and _could_ load a page in 80 seconds if I just leave it alone, but the server has configured its timeout to be 60 seconds. So I can never load the page?!

The assumption here is that both ends of the connection are based on Earth. With these hard timeout limits, how will anything even remotely work when we are an interplanetary species, or even just connecting from orbit around Earth?

10
whiddershins 4 days ago 2 replies      
After spending a month in Mexico, including regions with spotty/inconsistent service from one minute to the next, I think the problem goes deeper.

Browsers are IMO terrible at mitigating intermittent and very slow connections. Nothing I browse seems to be effectively cached other than Hacker News. Browsers just give up when a connection disappears, rather than holding what they have and trying again in a little bit.

The only thing I used which kept working was Dropbox. Dropbox never gives up; it just keeps trying to sync, and eventually it will succeed if there is any possibility of doing so.

I understand the assumptions of the web are different than an app like Dropbox, but I think it might be a good idea to reexamine those assumptions.

11
20years 4 days ago 1 reply      
Most of the web really sucks on fast internet connections too. Thanks to so many web developers thinking every dang thing needs to be a single page app using a heavy JavaScript framework. Add animation, badly optimized images and of course ads and it becomes really unbearable.

We keep repeating our same mistakes but just in a different way.

12
E6300 4 days ago 1 reply      
> The main table in this post is almost 50kB of HTML

Just for fun, I just took a screenshot of that table and made a PNG with indexed colors: 21243 bytes.

13
Filligree 4 days ago 3 replies      
Not related to the contents of the article, but please add a max-width styling to your paragraphs. 40em or so is good.
14
schoen 4 days ago 1 reply      
Joey Hess (joeyh) has been writing about this for a long time (because he uses dial-up at his home). Here is a recent thread about a 2016 blog post on this:

https://news.ycombinator.com/item?id=13397282

15
fenwick67 4 days ago 3 replies      
By far the worst site I regularly use, from a page loading perspective, is my local newspaper.

It takes about 10 seconds before it loads to a usable state on a T1 connection.

If I pop open an inspector, requests go on for about 30 seconds before they die down. It's about 8MB.

http://www.telegraphherald.com/

16
tetha 4 days ago 0 replies      
I might need a reality check here because this is feeling weird.

I'm currently building a web-based application to store JVM threaddumps. This includes a JS-based frontend to efficiently sort and filter sets of JVM threads (for example, based on thread names, or classes included in thread traces). Or the ability to visualize locking structures with d3, so you can see that a specific class is a bottleneck because it has many locks and many threads are waiting for it.

I'm doing that in a Ruby/Vue application because those choices make the app easy. You can upload a threaddump via curl, and share it with everyone via links. You can share sorted and filtered thread sets, and you can share visualizations with a mostly readable link. This is good because it's easy to - automatically - collect and upload threaddumps, and it's easy to collaborate on a problematic locking situation.

So, I'd call that a fairly heavy web-based application. I'm relying on JS, because JS makes my user experience better. JS can fetch a threaddump, cache it in the browser, and execute filters based on the cached data pretty much as fast as a native application would. Except you can share and link it easily, so it's better than visualvm or TDA.

But with all that heavyweight, fast-moving web bollocks... isn't it natural to think about web latency? To me it's the only sensible thing to webpack/gulp-concat/whatever my entire app so all that heavy JS is one big GET. It's the only sensible thing to fetch all information about a threaddump in one GET, just to cache it and have it available. It's the only right thing to do, or else network latency eats you alive.

Am I that estranged by now from having worked on a low-latency, high-throughput application? To avoid confusion, the threaddump storage is neither low-latency nor high-throughput. Talking Java with 100k+ events/s and < 1ms in-server latency there.
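
For what it's worth, the "one big GET" described above can be as small as a concat task. A hedged sketch using gulp 3.x with the gulp-concat plugin, with illustrative paths (webpack or any other bundler gets the same effect):

    // Bundle every script into a single file: one request instead of dozens.
    var gulp = require('gulp');
    var concat = require('gulp-concat');

    gulp.task('bundle', function () {
      return gulp.src('src/js/**/*.js') // all app scripts
        .pipe(concat('app.js'))         // single output file
        .pipe(gulp.dest('dist'));
    });

On a high-latency link, collapsing N requests into one saves roughly N-1 round trips, which usually matters more than the bytes themselves.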

17
Entangled 3 days ago 0 replies      
Can't browsers provide a service like

txt://example.com

that shows web content in plain text (no images, no JavaScript, nothing), something like Readability but directly, without loading the whole page first?

It would also be good for mobile connections.

* Wikipedia should be the first site to offer that txt: protocol, Google second.

* Btw, Hacker News is the perfect example of a text-only site.

18
AngeloAnolin 4 days ago 0 replies      
"Googles AMP currently has > 100kB of blocking JavaScript that has to load before the page loads"

Wasn't Google claiming that by using AMP you can actually make web pages load faster, as it is a stripped-down form of HTML [1]?

From what I am hearing from the author (Dan), bare HTML with minimal JS and CSS should (in theory/reality?) load pages faster.

https://moz.com/blog/accelerated-mobile-pages-whiteboard-fri...

19
jaclaz 4 days ago 0 replies      
A unit of measure I find appropriate is the "Doom"; see this 2015 prediction:

https://twitter.com/xbs/status/626781529054834688

20
Tade0 4 days ago 3 replies      
Kudos to the author for making the post readable using a 32kbps connection.

My apartment does not have a landline, not to mention any other form of wired communication, so my internet connection is relegated to a Wi-Fi router that's separated by two walls (friendly neighbour) and a GSM modem that, after using up the paltry 14GB of transfer it provides, falls back to a 32kbps connection.

Things that work in these circumstances:

- Mobile Facebook (Can't say I'm not surprised here).

- Google Hangouts.

- HN (obviously).

- A few other videoconferencing solutions (naturally, in audio-only mode).

Things that don't work, or barely work:

- Gmail.

- Slack (OK, this one sort of works, but it's not consistent).

- Most Android apps.

- Github.

EDIT: added newlines.

21
jlardinois 3 days ago 0 replies      
> In the U.S., AOL alone had over 2 million dialup users in 2015.

I've seen this figure a few times before, and I wonder every time who these users are. Specifically I'm curious what the breakdown is between people who

- Really don't have a better option available (infrastructure in this country is unbelievably bad in some places, so I wouldn't be surprised at a large size for this group)

- Are perfectly happy with the dialup experience so they don't switch to something better

- Don't know there are better options so they stay with dialup

- Don't even realize they never cancelled AOL and are still having it auto-debited every month

- Some other option I didn't think of

22
nedt 4 days ago 0 replies      
One thing that isn't mentioned is webfonts. On 2G I can load the whole page, CSS, JS, and some images, but can't read anything because the fonts aren't loaded yet. Here is a gallery of a couple of examples: https://imgur.com/gallery/wfjoT
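
One common mitigation, sketched here without claiming it fits every stack: render in a system fallback font immediately and opt into the webfont only once it has actually arrived, using the CSS Font Loading API where available (the class name is illustrative):

    // Flag the document once webfonts are usable; CSS rules scoped under
    // .fonts-loaded then switch from the fallback font to the webfont.
    if (document.fonts && document.fonts.ready) {
      document.fonts.ready.then(function () {
        document.documentElement.classList.add('fonts-loaded');
      });
    }

On a 2G link this makes the text readable right away instead of invisible until the font request finishes.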
23
stevoski 4 days ago 1 reply      
My team has just started work on a new SaaS product. We are taking articles like this to heart and aiming to keep pages light and fast. We are using very little JavaScript.

Let's see if the market rewards us or punishes us for this approach...

24
smacktoward 4 days ago 2 replies      
Looking at that first table, one question jumps out at me: what the heck is Jeff Atwood doing on pages at Coding Horror that makes them weigh 23MB?

I mean, I'm all for avoiding premature optimizations, but 23MB for one page is just... wow.

EDIT: As a sanity check, I just tried loading the CH home page from a cold cache myself. Total weight: 31.26MB. Yowch.

25
gwu78 4 days ago 0 replies      
"Pretty much everything I consume online is plain text..."

Yes.

My kernel, userland, third party software and configuration choices, the entire way in which I use the computer, are optimized for consuming plain text.@1

As a consequence, the web is very fast for me compared to a user with a graphical browser. This is why every time some ad-supported company claims they are offering a means to "make the web faster" it makes them appear to me as even more dishonest. They are, at least indirectly, the ones who are responsible for slowing it down. They are promising to fix a problem they created, but will never really deliver on that promise. Conflict of interest.

@1 I find there is no better way to optimize for fast, plain text web consumption than to work with a slow connection. It is like when a batsman warms up with weights on the bat: when he takes the weights off, the bat feels weightless, and the velocity increases. When I spend a year or so on a slow connection and adjust everything I do to be as bandwidth-efficient as possible, then when I get on a "fast" connection, the speed is incredible.

I also use the same technique with hardware, working with a small, resource-constrained computer. When I switch to a larger, more powerful one, such as a laptop, the experience is that I instantly have an enormous quantity of extra memory and screen space, for free. I do not need an HDD/SSD to work. My entire system and storage fits easily in memory.

Now if I do the opposite, if every day I only worked on a large, powerful computer with GBs of RAM and a fast connection, then switching to anything less is going to be an adjustment that will require some time. I would spend significant time making the necessary adjustments before I could get anything else done.

26
franciscop 4 days ago 1 reply      
I totally agree. I used to have a really bad mobile connection up until a few years ago (Spain), and still, when I use up all my mobile internet it reverts to 2G.

So I know the pain and decided I wouldn't do the same to my users as a web developer. I created these projects from that:

- Picnic CSS: http://picnicss.com/

- Umbrella JS (right now website in maintenance): http://github.com/franciscop/umbrella

Also I wrote an article on the topic:

- https://medium.com/@fpresencia/understanding-gzip-size-836c7...

Finally, I also have the domain http://100kb.org/ and intended to do something with it, but then I moved out of the country, and after returning things got much better and now I have decent internet, so I lost interest. If you want to do anything with that domain, like a small website competition, just drop me a line and I'll give you access.

27
Pica_soO 4 days ago 1 reply      
Wasn't this "backwards" compatibility the reason Blizzard was always so successful? Using old but sturdy tech that would work on the slowest of machines.

Actually, one could make a whole slowMo WebStandard from this. No pictures, just SVGs; no constant elaborate JavaScript chatter; no advertising. No videos, no music, no GIFs, just animated SVGs. Actually, that would be something lovely. Necessity begets ingenuity.

28
andrewstuart2 4 days ago 4 replies      
Did most of the web suck when we were on 28k or 56k modems? I'd argue that it didn't, and yet even with the light weight of pages back then, they loaded far slower than today's pages (even heavy ones) load over our much faster connections.

So really, I think what the author is observing is that having experienced high-speed reliable connections, it is very disappointing to move to a much slower connection. For the emerging tech markets, I can imagine the experience would not be great if the load was long enough to cause timeouts and connection failures, but at the same time, the 99% experience, as it probably was when the web was born, is "holy crap look at everything I have access to now!"

Yes, there are some really terribly optimized and redirect-happy sites out there and yes, you should do everything you can to make your page speedy. Everybody benefits when you do. I think, though, that this is more of a case of "let's be thankful for and aware of what we have," and "if you suddenly have a slower connection you might find yourself annoyed" more than "most sites suck on slow connections."

29
zeveb 4 days ago 0 replies      
> Pages are often designed so that they're hard or impossible to read if some dependency fails to load. On a slow connection, it's quite common for at least one dependency to fail. After refreshing the page twice, the page loaded as it was supposed to and I was able to read the blog post, a fairly compelling post on eliminating dependencies.

slow clap

His data on steve-yegge.blogspot.com is particularly unfortunate: Steve's (excellent) posts are almost completely pure text, and there's no reason for them to fail to download or display, except that Google demands that one execute JavaScript in order to get a readable page.

> if you're browsing from Mauritania, Madagascar, or Vanuatu, loading codinghorror once will cost you more than 10% of the daily per capita GNI.

Maybe the social-justice angle can convince some people to shed their megabytes of JavaScript and embrace clean, simple, static pages? There's probably some kid in rural Ethiopia who might have been inspired to create great things, if only he'd been able to read Steve Yegge's blog.

> The ludicrously fast guide fails to display properly on dialup or slow mobile connections because the images time out.

slow clap

> Since its publication, the ludicrously fast guide was updated with some javascript that only loads images if you scroll down far enough.

Incidentally, is there any way we can enforce the death penalty against people who load images with JavaScript? HTML already has a way to load images in a page: it's the <img> element. I shouldn't be required to hand code execution privileges over to any random site on the Internet in order to view text or images.

30
amelius 4 days ago 0 replies      
But HN almost never sucks, even on slow connections. That's why, when I'm on mobile, I only read the comments and not the articles :)

By the way, here's how we can collectively make the web faster, safer and more fun to use: [1]

[1] https://news.ycombinator.com/item?id=13584980

31
roadbeats 4 days ago 0 replies      
Not just the web; mobile apps also suck when you have a slow connection. For example, you can't open iTunes when you're on GPRS. It tries to connect to Apple Music and locks you into a screen with a big Apple logo. Same with Spotify. Just try your apps with GPRS :) I camp every weekend, so I noticed how much they suck a long time ago.
32
anigbrowl 4 days ago 0 replies      
I really wonder how much time designers and developers actually spend on thoughtful testing vs. A/B or automated testing. Sometimes the problems on websites just seem so... clueless.

My current pet hate is news sites that float up a modal window asking me to turn off my ad blocker because bidness. OK, I turn off AdBlock Pro for that domain, turn off HTTP switchboard, and it still won't load. Why? I dunno, try again, still won't load. OK, guess I'm never coming back. Obviously it must be some other extension, but without any technical details how can I tell?

For that matter why did anyone think it was ever a good idea to float dialogs over web pages to get people to share (not submit) their email address? Has anyone ever looked at how poorly these display on mobile devices? Or how making it hard to close floating dialogs is a really good way to annoy people?

33
gimagon 4 days ago 1 reply      
Could a lighter-weight website serve more users for the same dollar of bandwidth as a bloated website?

It seems to me there's a business strategy, where rather than pushing for more ads, a website pushes for lighter weight and promises its few advertisers a wider audience.

34
atbentley 4 days ago 1 reply      
> if we just look at the three top 35 sites tested in this post, two send uncompressed javascript over the wire, two redirect the bare domain to the www subdomain, and two send a lot of extraneous information by not compressing images

So uncompressed JavaScript and images are bad, but I thought apex-domain-to-www-subdomain redirection was an optimisation, as the apex domain can often only point to a single server, but the subdomain can point to a range of geographically well-distributed CDNs. So rather than going to North America for every request, the browser only needs to do it once, and the rest can come from a regional CDN. Am I misunderstanding something, or does this also break down on a slow connection?

35
jmcdiesel 4 days ago 0 replies      
As someone who had fiber internet and then had to spend a year and a half on 1.5 Mbps DSL... (hell)... I can say I agree that it sucks...

I can also say that at no point did I feel entitled for it to work better for me. I don't understand this level of entitlement (I don't like your ads, I don't like your layout, I don't like your visual effects...)... just leave the site.

The modern web isn't simple static pages... it's not going to revert to that, either. We're developing actual applications in the browser now... those aren't easily translated to static, simple pages...

This is today's "grumpy old engineer" argument...

36
mmagin 4 days ago 0 replies      
The other issue I have with web page bloat: memory-constrained mobile devices are able to cache far fewer pages than a desktop computer, and navigating among multiple tabs, etc. gets slowed down to internet connection speed.
37
warcher 4 days ago 1 reply      
I'm gonna read the article, I promise, but is the title really "If your internet is bad, the internet is bad"?
38
Swizec 4 days ago 1 reply      
Slow connection is okay, it's just slow. Now spotty connection, or high latency, that's the killer.

Webapps that make 50 requests to download all the JavaScript and CSS and talk to the API and get 3 images really really really don't behave well when 12 of those 50 requests fail or take 30 seconds to complete. Honestly, I'd rather have slow internet than packet lossy internet.

Still don't know why, but my Xfinity router routinely gets into a state where it drops the first 10 or so packets of any request. The first `ping 8.8.8.8` takes 3 seconds, the rest are the usual 0.1 second. Terrible.

39
gkya 4 days ago 0 replies      
It sucks if you have a fast connection too, because then your CPU and RAM suffer instead. And as you add addons to rectify the many offending web pages, their performance penalty quickly equals that of crappy JS. I was so happy with Xombrero as my browser, but it's stagnant and insecure now. I do like my Firefox, but with all the blocking addons it's slow, and without them it's slower (not that it's its fault).
40
markplindsay 4 days ago 0 replies      
See also: The Website Obesity Crisis[0] by Maciej Ceglowski

[0] http://idlewords.com/talks/website_obesity.htm

41
meriobrudar 4 days ago 6 replies      
Wow, really? Who knew overuse of JS and fancy graphical effects where they're not needed could negatively impact user experience? Could it be that all the web devs using 20 CDNs, cramming 900 frameworks, 100 externally provided analytics, advertisement providers and fancy layout eye-candy were wrong all along? What a surprise!

I'm already sick of visiting a webpage and having it not load ANYTHING if I don't enable scripts on it. At least load the goddamn text; I don't care if it'll look like trash, just don't show me a blank page...

The irony is that everyone calls for people not to use Flash, and then they go out of their way to recreate the abysmal experience without it, so really nothing has changed as far as UX goes. Remember when pages didn't load at all unless you had Flash installed? Well, here's some nostalgia for you: now they won't load unless you run all the JS on the page, and then you get to "enjoy" a bloated joke of a website, but Jesus does it have eye-candy!!!

42
mathgenius 4 days ago 0 replies      
I wonder if there is room for a product, a kind of browser-in-a-website, that would eat those big-ass webpages (server-side) and spit out just the text and (heavily compressed) jpegs. With a little layout to match the original website. Something like how streaming services adaptively subsample data, or like how NX tries to compress the X window protocol. Obviously this would be patchy, but it could be much better than "FAIL".
43
hnarn 4 days ago 0 replies      
A lot of people here are talking about how 2G connections are "almost unusable" and how this should be optimized server-side and so on. I'd just like to point out that there are browsers that cater to this specific demographic (slow connections).

Ever since the days of running Java applications on my old Sony Ericsson phone, Opera Mini has been my favorite. As far as the browser is concerned, the website can be as heavy as it wishes -- it will pass through Opera's proxy and be compressed according to user preferences. This could include not loading any images (nothing new), or loading all images at very low quality. You can also select whether you want things like external fonts and JS to load, or if you want to block those too. When I moved to a new country, my first SIM card had one of those "unlimited but incredibly slow" plans. Opera Mini was a life saver.

I guess my point is that we shouldn't get stuck in optimization paralysis if there is no sound and standardized server-side way to solve this issue (and there doesn't seem to be). It would be nice if browsers had a way to tell web servers that they're operating under low bandwidth, like the do-not-track flag, but AFAIK this does not exist.

Until that exists, and I don't mean to suggest we go back to the days of "Made for IE9" here, maybe some responsibility needs to be shifted to the client side. As long as you design your websites in a sane way, they will pass through these low-bandwidth proxies with flying colors. Maybe you don't need to spend hundreds or thousands of man-hours optimizing your page when you could insert a discreet indicator at the top of the screen, for anyone taking longer than X seconds to load, pointing out that there are many browsers available for low-bandwidth connections and that they might want to try them out.

44
mueslix 4 days ago 1 reply      
Instead of making sites that try to predict the unpredictable, I'd rather ask whether TCP is still the right tool to use.

There shouldn't be a reason for a big page with many resources not to load - it should just be slower. Yet I can make the same observations as soon as my mobile signal drops to EDGE: the internet is essentially unusable as soon as there's packet loss involved and the round-trip times increase. Interestingly, mosh often still works beautifully in such scenarios. So instead of focusing on HTTP2 or AMP (and other hacks) to make the net faster for the best-case scenario, I'd rather see improvements to make it work much more reliably in less-than-perfect conditions. Maybe it's time for TCP2 with sane(r) defaults for our current needs.

45
gumby 4 days ago 0 replies      
I was exasperated by his mobile example. Why? This is my life with Comcast (the faster of the two "choices"!) in Palo Alto. I also have Comcast in my ski house in the sticks and it's faster than Palo Alto. But my wired connection is so slow that I sometimes use my phone on LTE to read a page that hangs on Comcast.
46
jayajay 4 days ago 0 replies      
Lately my Pixel has been achieving sub-KB/s speeds on a very good Wi-Fi connection (a laptop in the same room gets 100 Mbps), and it reminded me of the old days with dial-up on Win 98 -- but worse. The estimated download time for the LinkedIn app (70MB... gg) was a whopping 6 months! What a great way to get me to guzzle up my mobile data.
47
ge96 3 days ago 0 replies      
Yeah, I take my 100Mbps connection for granted; I'm developing an image-oriented web app for the Philippines, and holy crap, the one guy was lucky to get 0.3Mbps.

So... I had to severely redo the code to pull 50px-wide images, blur them in, and only load the visible ones (depending on screen dimensions), then a 2-second max refresh thingy (yeah, I'm just making this loader-interrupter thing). It's been a mess; I feel pretty stupid sometimes. Why can't I get this... JavaScript. Yep, I am lucky to have Google Fiber (and I have the cheaper plan too).

48
tlow 3 days ago 0 replies      
Quora is unusable on a slow connection. It literally shows a popup that obscures content if you lose high speed connectivity or drop packets.

However, the web is even worse if you have no connection at all. This is important because if we provide internet access at a municipal level, we can reach 100% adoption among our pluralistic educational system and progress to primary learning materials that are web based (CA 60119 for example prohibits any primary educational materials not available to all students both in the classroom AND AT HOME).

49
dotchev 9 hours ago 0 replies      
This is exactly one of the problems IPFS will solve by serving content from local peers.
50
bsukn 4 days ago 1 reply      
It only sucks if you've experienced a fast connection.

We generally don't target hardware from '98, so why should we target bandwidth from '98? Current smartphones and computers are really powerful, and most applications are targeted towards those devices. Native apps don't have this insane requirement to support hardware from 2 decades ago.

The web is so much more than text in 2017. And before you whine about the ads and useless stuff, go read a tabloid and whine about the waste of paper, or try to watch tv and whine about the electricity and time you're wasting watching advertisements.

Media has and always will be like that.

The time spent on backwards compatibility and optimizations is usually not worth it anyway.

Do I think mostly-text sites should be 5 MB? Obviously not.

51
vegabook 4 days ago 1 reply      
You don't need to travel from Wisconsin to Washington to experience a slow internet connection.

Try any mainstream commute on the South West Trains Wimbledon to Waterloo (London) and you'll a) still get blackouts for about 1/4 of the 25 minute trip (this is one of the most densely populated areas in Europe - no excuses) and b) at 3 of the 4 stations you'll stop at, your vaunted 4g connection will drop to 1998 speeds due to contention. I generally curse the complex sites in these situations because you'll easily be waiting 30-90 seconds (firmly in your heatmap's red zone) for full load at least once per commute.

Incidentally, kudos on a perfectly communicative yet lightweight web page (50 kB).

52
oregontechninja 4 days ago 0 replies      
My main source of clients is people suffering from website bloat because they have no idea how to build a website. They jump on every shiny JavaScript library they think they need and load 8 different versions of Bootstrap and then 5 fonts from various sources, all from CDNs. I wish I were exaggerating, but it's such a mess. In every single case, 90% was garbage, and all they really needed was a nice semantic CSS sheet. Unless you are developing a web app, or 100% need your AJAX calls, you don't need JavaScript. Is this the same for others, or am I just in a less technically inclined area?
53
kemps4 3 days ago 0 replies      
I live in a rural area. There are three options for internet - satellite (limited data allowance - but decent speed), dial-up or a local ISP with a Motorola canopy system. I chose the last option. I get 100 KB/sec max download speed (on a good day). Divide by the 4-5 people in the house regularly using the Internet and it gets really slow, really quick. Many times I just give up and shut the computer off or I browse using Lynx.

And nope - no cell phone signal here either.

54
crispyambulance 4 days ago 0 replies      
> Let's load some websites that programmers might frequent...

> All tests were run assuming a first page load...

Ehh, but is that really a good test for sites people "frequent"?

What happens to the heatmap when we're talking about subsequent page loads?

55
EGreg 4 days ago 0 replies      
I have a different suggestion.

Build software that can work on a distributed architecture. So people in Ethiopia can run their stuff on intranets and mesh networks and only occasionally send stuff around the world.

What broadband has really caused is this assumption that the computer is "always online". Apps often break when not online, when in reality there shouldn't even be "online/offline" but rather "server reachable/unreachable". And you should be building offline-first apps, with sync across instances.
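
A bare-bones sketch of that offline-first idea using service workers (a real browser API, though support still varies; the cache name and asset list are illustrative):

    // In page code: register the worker.
    if ('serviceWorker' in navigator) {
      navigator.serviceWorker.register('/sw.js');
    }

    // In /sw.js: pre-cache an app shell, then answer requests cache-first.
    self.addEventListener('install', function (e) {
      e.waitUntil(
        caches.open('v1').then(function (cache) {
          return cache.addAll(['/', '/app.js', '/style.css']);
        })
      );
    });

    self.addEventListener('fetch', function (e) {
      e.respondWith(
        caches.match(e.request).then(function (hit) {
          return hit || fetch(e.request); // cached copy first, network as fallback
        })
      );
    });

With this shape, "server unreachable" degrades to serving the last good copy instead of a blank page.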

56
samuell 4 days ago 1 reply      
> The flaw in the "page weight doesn't matter because average speed is fast" is that if you average the connection of someone in my apartment building (which is wired for 1Gbps internet) and someone on 56k dialup, you get an average speed of 500 Mbps. That doesn't mean the person on dialup is actually going to be able to load a 5MB website.

As someone mentioned below too, the median value would make much more sense in this case (which it often does, it seems).

57
Aoyagi 4 days ago 0 replies      
But like, if you don't fill your website with megabytes of useless bloat, you'll get called out, because "it's 2017".
58
LyalinDotCom 4 days ago 0 replies      
YESSS!! Ever have your 4G connection drop to shit? Well, imagine that, but 24/7, on your wired connection; that's what many people live with today :(
59
bambax 4 days ago 0 replies      
> A pure HTML minifier can't change the class names because it doesn't know that some external CSS or JS doesn't depend on the class name.

After everything has been parsed, it would know (the browser knows).

Couldn't a proxy service produce super lightweight, compiled web pages? I seem to remember Opera used to offer something along those lines, but I may be wrong.

Would there be commercial value in building such a tool?

60
joeyh 4 days ago 0 replies      
I'm very impressed with Dan's methodology here, and it matches my own experiences with dialup.

One thing I wonder about: it seems many dialup ISPs these days provide some kind of "accelerator", probably a web proxy that avoids some of the issues with timeouts, perhaps compresses some content, etc. So it might be that many of the remaining dialup users don't experience quite as many problems as Dan found.

61
dangoldin 4 days ago 0 replies      
Shameless plug but I did something similar in 2014 and used PhantomJS to analyze the content of the top 1000 Alexa sites: http://dangoldin.com/2014/03/09/examining-the-requests-made-...
62
0xc001 4 days ago 0 replies      
I think about this a lot. And I think it's really easy for a page weight argument to fall into an "old man yells at cloud" tone. But I also want the industry to move towards simpler HTML and such, so, I've been thinking up an argument that companies will buy. I'm really bad at it though. Maybe the extra African market will open up new ad revenue?
63
kyleblarson 4 days ago 0 replies      
I live in a very remote town in the North Cascades in Washington state and work remotely in development. I'm on a 1.5 Mbps DSL connection, and while it's slow, it's consistent, and I rarely have issues with Skype / Hangouts / Slack / Git / normal work. Downloading large data dumps is another story, but you learn to plan ahead.
64
tmaly 4 days ago 0 replies      
I remember dial-up on a really slow modem back in the BBS days.

I was reminded of the slow connection with T-Mobile 2 years ago while in the Philippines. They give you free data in 120 countries, but it's throttled.

This was my main motivation for rewriting my side project using highly optimized css and not a large framework that uses web fonts and bloated libraries.

65
ddebernardy 4 days ago 0 replies      
Try it with a bad connection and a 1st-gen iPad. :-)

You basically need to disable JS altogether to have a chance to even view many websites. And some, well, just crash the browser regardless.

It's amazing how much the web evolved in the past few years...

There used to be a time when supporting 10+ year old browsers was a matter of course. No longer.

66
georgehaake 4 days ago 0 replies      
I'm far enough out in the country that I have 3 Mb area Wi-Fi, with a wife who enjoys streaming and Facebook and two boys who enjoy online gaming and streaming. Not much is left for me. At least it's all-you-can-eat, and it avoids satellite.

Oh, we find Amazon, IMDb and Facebook are the biggest pigs on a slow connection.

67
0mp 4 days ago 0 replies      
There is a project called txti which provides free hosting for simple websites edited in Markdown: http://txti.es

The idea is to make the content available to all web users, as fast connections are not as common as we might think.

68
Sir_Cmpwn 4 days ago 0 replies      
The website I'm currently working on has no JS and weighs an average of 15 KiB per page. It loads in <20ms.
69
johnnydoe9 3 days ago 0 replies      
Can confirm; am using horrible internet right now. Googleweblight is a lifesaver for reading articles. Not sure why it hasn't been mentioned, but I recommend that everyone facing speed issues try it.
70
Shivetya 4 days ago 0 replies      
I will be blunt: you would be amazed at the sites that suck even when you have 1G. I used to think "damn, my DSL is slow" until I was at 1G and some sites did not improve. And many of the applications I have which can update are throttled.
71
dguillot 4 days ago 0 replies      
The worst of all, I think, is NHL.com. It appears to me that they have been asked to be "responsive" in terms of viewability instead of functionality. Good luck using this site.
72
SnowingXIV 4 days ago 0 replies      
I feel a good solution to this problem, or at least one that covers a fair number of users, is having your website work well with Safari Reader. Even on fast connections, I often find myself loading up a page with Reader instead.
73
beautifulfreak 2 days ago 0 replies      
Why not make a site that proxies other sites, but retransmits them as fast-loading? Isn't traffic=dollars?
74
logicallee 4 days ago 0 replies      
Tangentially related:

As affects web apps, some of this is a conscious choice by network designers. First, click on your profile on Hacker News and turn on Showdead. You can then read this thread and my comment in it:

https://news.ycombinator.com/item?id=13597673

While the poster wasn't a web engineer specifically (or didn't say so), much of the web architecture isn't built for front-loading payloads, but instead for eventually getting there, through the magic of TCP/IP, letting users wait a few dozen seconds as pages load.

I disagree with it and think these engineers are wrong and make the wrong decisions (optimizing for the wrong things), and that this makes everyone poorer off.

Thanks for listening. (Happy to discuss any replies here.)

76
kordless 4 days ago 0 replies      
I just realized this is why network speed is increasing at a lower rate than compute. Even though they both continue to grow in capacity, the accelerations are different.
77
julianj 4 days ago 0 replies      
Why hasn't someone implemented a kind of low-bandwidth accessibility option? (Or is there one?) I would imagine it would be akin to multipart text-only email.
78
kzrdude 4 days ago 0 replies      
Posting from wifi on a plane over the UK: this is apparently not slow internet, I can read the usual bloated news and blog pages.
79
tlanc 4 days ago 0 replies      
It does. I'm on 2G, and HN and the article site are the only usable things I've encountered today [on T-Mobile's intl roaming thing].
80
coin 4 days ago 0 replies      
> or one of the thirteen javascript requests timed out

There's the root cause. Why do I need to download executables just to read static content?

81
noway421 4 days ago 0 replies      
This post can be seen as exceptional simply because the page loads instantaneously. Nothing extra. Bravo.
82
dsfyu404ed 4 days ago 0 replies      
This applies server side too. Note what sort of sites do and don't go down when they make the HN or Reddit front page.
83
realPubkey 4 days ago 0 replies      
And that's why we need to adopt offline-first.
84
uvince 4 days ago 0 replies      
Regardless of connection speed, it also sucks if you try using LinkedIn's new website. Nothin' but progress bars.
85
k__ 4 days ago 0 replies      
What is the per-page size budget one should not exceed?

I mean, yes, as small as possible. But are there established size budgets?

For 3G, 2G, etc.? (A rough worked example follows below.)
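
There is no standard budget, but a back-of-the-envelope calculation gives a starting point. This sketch ignores latency, TCP slow start, and parse time, so real budgets should be tighter:

    // bytes you can afford = target seconds * link speed / 8
    function budgetKB(targetSeconds, linkKbps) {
      return (targetSeconds * linkKbps) / 8; // kilobits to kilobytes
    }

    console.log(budgetKB(5, 50));   // 2G (~50 kbit/s): ~31 kB for a 5-second load
    console.log(budgetKB(5, 400));  // slow 3G (~400 kbit/s): 250 kB
    console.log(budgetKB(5, 4000)); // decent 4G (~4 Mbit/s): 2500 kB

By this rough math, a page meant to be usable on 2G within a few seconds has only a few tens of kilobytes to spend.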

86
Esau 4 days ago 0 replies      
Bloat: it's not just for operating systems.
87
amazon_not 4 days ago 0 replies      
Do you know how to make the web not suck on a slow connection?

Ssh into a shell account and use a text based browser :)

88
zump 4 days ago 0 replies      
This guy's posts are insufferable for constantly namedropping where he works. Ugh.
89
ainiriand 4 days ago 0 replies      
The web sucks, but it sucks less if you have a fast connection.
90
Lxr 4 days ago 0 replies      
We need more sites like this! Absolutely no bloat, so nice to use.
91
dwighttk 4 days ago 0 replies      
How much weight would a little CSS to make the text not full-width add to that page?
92
songco 4 days ago 0 replies      
And the GFW (Great Firewall)...
93
andrewclunn 4 days ago 0 replies      
reddit.com takes 7.5 seconds to load on FIOS? I must be reading this table wrong.
94
lordCarbonFiber 4 days ago 3 replies      
I'm torn here, in a way. On the one hand light page weights and other such optimizations make the internet better for everyone, on the other, there's a certain point where designing your product to target 3 decades ago (we forget 1990 was 27 years ago) gets a little absurd.

I think the greater tragedy is not that the web is bloated (an issue for sure), but that so much of America has internet worse than 3rd world mobile 2G.

95
dsfyu404ed 4 days ago 1 reply      
"What do we care? The vast majority of our target audience lives in a city with fast internet."

(I'm not putting /s because there are actually people who think this is a reasonable opinion in the general case.)

96
ldev 4 days ago 2 replies      
Well 3G is as low as you can get somewhere deep in the woods, not really a problem...
2
Looking for Work After 25 Years of Octave gnu.org
1320 points by dhuramas  4 days ago   473 comments top 46
1
sandGorgon 4 days ago 23 replies      
This post today is what is wrong with open source software. If someone knows jwe, then they should tell him that lots of corporates and startups WANT to support stuff like this.

But you cannot give a PayPal link and expect donations. As a company, I can't do that. I need an invoice. Hell, if you can get a business account, I daresay you will get subscriptions.

I like to call this "gratitude-ware".

Check out Sidekiq Pro and their experience making 80k USD per month. https://news.ycombinator.com/item?id=12925449

At first I built Sidekiq as an LGPL project and sold commercial licenses for $50. Revenue was laughably small, but the response I got was encouraging: people told me they were saving $thousands/mo over previous solutions and wanted to buy the license just to give me something as thanks.

Octave currently offers support packages for which you have to write in to the maintainer and have an email discussion. Compare that with Sidekiq: http://sidekiq.org/products/pro

It's one of the best-designed gratitude-ware pages... it even works amazingly well on a mobile phone.

We personally also buy pfsense licenses https://www.pfsense.org/our-services/gold-membership.html

TL;DR: Donations won't work. Engineers can't give an excuse to corporate accounting. Make a pro subscription with ANY "pro-level" feature. I can get my accounting to sign off on that. And no "contact us to find out about a support contract".

2
vcistan-inmate 4 days ago 8 replies      
I don't understand how YC can fund non-profits like VotePlz and the ACLU while not funding stuff like this. All of the startups funded by VCs use free software, often exclusively, but these VCs continue to refuse to adequately fund its development. Marc Andreessen even publicly boasted about how much OSS his companies use[1], which he of course doesn't pay for.

I also shudder to think of how Eaton will fare on the job market should he actually be forced to seek regular employment. Will he be whiteboarded? Will his work on Octave--which by all rights should be able to serve as a strong-enough resume by itself to justify his hiring--even be looked at by potential employers? And what about his age? 25 years on Octave could mean he's pushing 50. I could easily see him getting a "no hire" from plenty of trendy tech companies.

[1] http://www.businessinsider.com/boxnet-2011-9

3
yazaddaruvala 4 days ago 6 replies      
This is not a scalable model of software funding. People may send you money today, because it's on their minds, but bills will have to be paid again next month, and next year. Please get yourself a Patreon account, or something similar.

Meanwhile, I'm sure MathWorks is looking for people with exactly your domain knowledge[0].

https://www.mathworks.com/company/jobs/opportunities/?s_tid=...

4
plinkplonk 4 days ago 1 reply      
You might want to check with the Julia folks. They now have a company (Julia Computing) backing the project. I don't know if they are hiring right now, and if so, whether you'd be a good fit, but it couldn't hurt to ask.

Might be a better alternative to working for MathWorks! Julia is an open source project with many brilliant developers and mathematicians contributing.

http://juliacomputing.com/

I don't see a 'jobs' page, but CEO Viral Shah (and everyone else, but Viral is who I know) is on twitter, and is a great guy.

https://twitter.com/Viral_B_Shah

5
anigbrowl 4 days ago 8 replies      
There's something fundamentally broken about the open-source model when you can invest so much time and not get any sort of economic return. This seems like a huge limiting factor on open-source development. As I've pointed out in discussions on copyright issues, artists value copyright because the patronage model sucked - you're essentially dependent on people's charity, and having to beg just to maintain basic economic security is inefficient, demoralizing, and unreliable.

I think services like Patreon etc. are quite worthy but they're also dysfunctional. Nobody has solved the micro-payment problem yet and it seems like people have just given up trying. In a saner world this person would be rewarded for the enormous technical contribution with a reliable pension of some sort to remove the distraction of financial anxiety.

6
JoelJacobson 4 days ago 3 replies      
$5,000 (will be) donated.

Tried to use PayPal. A Top Up to my PayPal account via Instant Bank Transfer worked fine, but when I tried to use the money by making a transfer to The Octave Guy, PayPal said my account had been frozen :-(

I have emailed the author and requested bank details in order to make a regular bank transfer instead.

7
hakcermani 4 days ago 3 replies      
I beseech every student of Andrew Ng's ML course to contribute a little. I just did.
8
antirez 4 days ago 1 reply      
Is it just my feeling/bias, or do projects under the GNU umbrella suffer from not putting the developers at the center of the project enough? One of the things you should get back from doing something like Octave is being recognized, at least in certain parts of the software community. When an OSS project is a GNU project, it may be less likely to get the deserved credit, which later may lead to positions, donations, or whatever, compared to having a project on GitHub, regularly writing a blog, and so forth. So, without trying to ignore the fundamental problem, that a lot of work important to society does not compensate the developers as it should, maybe OSS developers need to get smart and try to put themselves at the center of their projects in order to get the visibility that may later save their careers.
9
brilliantcode 4 days ago 3 replies      
> I would love to continue as the Octave BDFL but I also need to find a way to pay the bills.

Breaks my heart to see such devoted developers having so much trouble paying bills for the work they've done.

I truly believe that this problem can be solved: by crowdfunding open source development that rivals the commercial status quo, we can decouple ourselves from restrictive licensing structures while paying the bills of the people who contribute to the development.

Imagine if Octave just got $3000 USD / month; that should help with the basic costs of living (not knowing where the original author is) and also incentivize continued development. 25 years of unpaid work is a tragedy.

I still have not figured out all angles but this is my dream. To help open source developers get paid and the work that becomes BSD or MIT licensed will offer a strong alternative to commercial softwares.

I'm wondering if anybody else shares a similar vision; please subscribe at http://letsopensource.com or feel free to reach out at the email in my profile.

10
petercooper 4 days ago 0 replies      
At the risk of comparing apples to boulder-sized oranges, it's a shame considering Wolfram supports hundreds of employees off of Mathematica alone (something that wows my mind - in a good way - every time I'm reminded of it). I hope this drive works, and that the long-term prospects improve too.
11
saganus 4 days ago 2 replies      
Sometimes I dream that if I ever become very wealthy (either because of something I created or because I won the lottery) I would go around and donate some nice money to these under-appreciated developers/organizations/projects.

I mean, if I had billions, wouldn't it be feasible to give, say, $1M or even $0.5M to, say, 100 projects? 200 maybe?

Does this not happen or do we just not hear about it if it ever happens?

12
dhuramas 4 days ago 1 reply      
If you think GNU Octave has made a difference in your life or other's, please take a moment to continue funding his efforts. http://jweaton.org/?page_id=48
13
vijayr 4 days ago 3 replies      
This is probably just wishful thinking, but I wish I had enough money to buy a decently large property and just let people stay there (everything free, including food/stay/healthcare etc) and work on projects like these, for as long as they want. Tech has so many rich people, I dunno why this can't be done. Even if it is just 100 high caliber people staying/working in such a community, they can generate immense value for all of humanity.
14
hughw 4 days ago 1 reply      
While recruiting at UT Austin's petroleum engineering school recently, I was impressed by how embedded Matlab has become there. The students use Matlab for big semester projects and everyday scratchpad computation. I believe it's a required skill. I don't know the licensing arrangement MathWorks made with them. It's common for software companies to donate licenses to engineering and science departments. It's a loss leader, and presumably lots of the students will pull Matlab licenses into their employers later. But Matlab is so ubiquitous there, and so integrated into the curriculum, it could be that money did change hands.

I wonder if there's an opening along those lines for a revenue stream for Octave?

15
andyjohnson0 4 days ago 1 reply      
It is interesting to contrast this with esr's recent announcement "In which, alas, I must rattle a tin cup" [1].

[1] http://esr.ibiblio.org/?p=7348#more-7348

16
sytelus 4 days ago 2 replies      
Some ideas for OP:

- Evangelize Octave in education, especially K-12. The community for Octave is rather small, and K-12 education has lots of potential users and money.

- Apply for NSF grants

- Create specialized plugins that people might want to pay for in commercial usage. Think of it as your consulting gig.

- Write a book, not just on using Octave, but on something more generic, like fun with math, that can reach a larger audience.

- Create an edition like Octave Gold for $5 which has zero feature differences but has some cool logo or chrome or fun cosmetic thing. You will be surprised how many people want to pay for it.

17
mooreds 4 days ago 1 reply      
For those of us not in the know, GNU Octave is, according to Wikipedia, "software featuring a high-level programming language, primarily intended for numerical computations."
18
n00b101 4 days ago 1 reply      
Meanwhile, MathWorks' annual revenue is over $800 million.
19
baki 3 hours ago 0 replies      
Hmmm...Decrypt it with waterboarding!
20
samfisher83 4 days ago 2 replies      
Why can't a company like Google just hire him and let him work on Octave, like, 20% of the time? They can afford it.
21
supahfly_remix 4 days ago 7 replies      
It seems that octave has been overtaken (in mindshare) by python/numpy. Why is that the case?
22
EdiX 4 days ago 1 reply      
Free startup idea: Patreon for Open Source

* A user signs up to OSPatreon and downloads the OSPatreon app, which creates an ospatreon directory in $HOME/.local/share/ (or another operating-system-appropriate location), then creates a cronjob (or whatever) to re-run after one month

* Open Source projects that want in add a snippet of code that, when the application closes, writes the amount of time the application ran to $HOME/.local/share/ospatreon/

* At the end of the month the OSPatreon window pops up, shows the user a list of ospatreon projects the user has used, sorted by decreasing total time, and lets the user select how much to donate and the share each application gets (default: proportional to time the app was used; see the sketch below)

* You collect money from the user, pocket some percentage, distribute the rest to OS projects.

Startup name: Fosstreon.

23
williamstein 4 days ago 1 reply      
I added a slide about this to the talk I'm giving tomorrow on funding for open source math software at University of Rochester: http://wstein.org/talks/2017-02-09-wing-sage/slides.pdf
24
dostoevsky 4 days ago 0 replies      
As a soon-to-be engineering graduate, I am very thankful for Octave. I've used it (instead of MATLAB) for projects involving numerical methods, control systems, image processing, and all kinds of data manipulation. While it may not be as powerful as MATLAB for certain use cases, it is an amazing piece of software.
25
bordercases 4 days ago 1 reply      
Can you get academia to give you funding to continue the project? It's a common model in the bioinformatics world.
26
Mikeb85 1 day ago 0 replies      
Then make a website, register a company, and make it official. Maybe take a page from RStudio's model and adapt an IDE to use Octave. Offer a product, any product, and people will buy it.

I have no doubt people will pay for the author to continue to develop Octave, but simply sending money to a PayPal account isn't something most people are comfortable with. Plus how do you invoice that?

27
bradneuberg 4 days ago 0 replies      
Just donated 25 bucks. Octave is a fabulous tool.
28
fooledrand 4 days ago 1 reply      
So xoctave[1] is not a commercial version of Octave?

[1] http://xoctave.com/blog/

29
superquest 4 days ago 1 reply      
Could some kind of micropayment scheme help these projects make some money?

For web servers you might charge per request served. For text editors you might charge per unit time the editor is being used.

It'd be interesting if the maintainer could define an acceptable salary, and all "consumers" would just split the bill by proportion of their usage according to the kind of metrics mentioned above.

I would gladly pay such a bill if nearly all the money went straight to the developer, I could cap my contributions, and it was extremely easy to opt in. Ideally the package manager would read some dotfile or ask me! The problem with projects like Bountysource is that people will never track down all the OSS projects they depend on and figure out what would be a reasonable donation to give them. Too much agency is required of the user to achieve meaningful adoption.

Has anything like this been attempted?

Edit: clarifications ...

30
ireadzalot 4 days ago 0 replies      
Just donated to J. W. Eaton Consulting. Thank you for Octave. I used it throughout Andrew Ng's ML course.
31
crb002 4 days ago 2 replies      
I'd apply to Mathworks as BDFL of Octave and work on Matlab support for open source packages.

So many areas to work on. Integrating D3.js, integrating with Python machine learning/dataframe libraries, FPGAs, SMT solvers like Z3 and CVC4, ...

32
vanattab 4 days ago 2 replies      
I have not tried the Octave GUI in a couple of years, but back then it was awful. In my opinion this has always been Octave's biggest weakness. If they fixed this, I am sure they would gain significant market share from Matlab.
33
rdstone2 4 days ago 0 replies      
This is worthwhile. Wish I could give more.
34
johnmarcus 3 days ago 0 replies      
Octave seems to have some great mentions on indeed.com: https://www.indeed.com/jobs?q=octave&l= Not sure if any of those jobs are near you, but give it a look. The position titles might help you expand your search in your local area.
35
kazinator 3 days ago 0 replies      
Most people with an open source side project P will never be in the position of "oh, I need money; I guess I will have to find a non-P job".

That's the norm.

36
viraptor 4 days ago 0 replies      
Does anyone know if the PayPal page is doing it right? With other projects I usually see something about a donation. This one says "Purchase details" and has a quantity. I'd hate for it to be closed down for not complying with some PayPal rules.
37
hebbarp 4 days ago 0 replies      
Yes, Octave the saviour. Sad that such projects aren't supported and that there is no foundation that can come forward to help the BDFL. Made my contribution; hope it adds a drop to the ocean. All the best.
38
jgord 4 days ago 0 replies      
Perhaps offer tiered packages of sponsorship for companies to advertise at OctaveConf?

e.g. "A special thank-you to our Gold sponsor XYZ Corp, who donated 10k to support Octave core developers and this conference," etc.

39
tigroferoce 4 days ago 0 replies      
I've never used Octave that much (except for a couple of exams at university), but 25 years of life and work devoted to the community deserve my $10.

Thank you for all the work and good luck!

40
shitgoose 3 days ago 0 replies      
This is sad. I just rediscovered Octave not long ago. I had been using R, but was never comfortable with it - yukky syntax, clunky RStudio. Octave has a clean and solid UI, a nice syntax that sticks, and reasonable base packages.

I wish you guys could develop a practice around Octave, selling services, while keeping the base product free.

41
tempire 4 days ago 0 replies      
Contribution made. Octave is on point, and easily accessible for those wanting to learn Matlab-like syntax without having to mortgage their homes.
42
amelius 4 days ago 0 replies      
It's a sad truth that it is really difficult to make decent money writing scientific software.
43
geff82 4 days ago 0 replies      
I like Octave so I just sent some bucks. Doesn't hurt me, but helps the cause.
44
Chobicus 4 days ago 0 replies      
Made a small donation since I've used it in Andrew Ng's ML class
45
stevehiehn 3 days ago 0 replies      
Jeez, this is heartbreaking.
46
david38 4 days ago 0 replies      
I'm sure he can get a job at Google.
3
A US-born NASA scientist was detained at the border until he unlocked his phone theverge.com
911 points by smb06  22 hours ago   420 comments top 62
1
Jonnax 20 hours ago 18 replies      
Their point about how other countries will take the US's stance as a cue is somewhat scary.

If you try to cross any border, it will mean relinquishing access to all your accounts.

I'm assuming email also comes along with 'social media', since communication is by definition social.

So how do you protect yourself? I think just going with "Don't have any social media" isn't a good answer, because the relationship that children growing up today have with the internet is almost completely different from that of even people 10-15 years older than them.

Someone having carte blanche access to a person's phone will find something if they want to.

Imagine you're in a few group chats, someone mentions doing some drug. And you've just entered a country where that's an instant prison sentence.

Maybe some off-colour jokes about politicians? Proof enough to kick you out, or at least detain you.

I imagine we're at the cusp of something much more unsettling. The technology to reverse image search a face is available today. It's pretty easy to make you appear associated with anything, anyone, etc.

2
krab 19 hours ago 1 reply      
As an EU citizen, I see these events from a bit of a different angle. I have visited the US several times, and the atmosphere and behaviour of both the customs officers and the TSA personnel get more and more overbearing.

I have never gone through such an extended search, but going through a US airport feels really uncomfortable, to an extent I haven't seen in any other country (the UK comes close, though). The thing is, Trump only added a little bit. This is a process that has been evolving for some time already.

I wonder if anything would change if all US travellers to Europe were given a leaflet explaining:

"As a reciprocal measure for ESTA or Visa process, you are obliged to pay $14 entry fee. Moreover, we will perform an extended search to every fifth American passport holder. During the search, we may seize your devices and ask for your passwords. Not complying may result in a detention up to 24 hours and/or denied entry."

3
caminante 20 hours ago 5 replies      
Here's the Customs and Border Protection policy in question [0] (see page 31).

The EFF has a nice write-up on this topic [1]. It sounds like there's a "border search exemption" that bypasses the Fourth Amendment. The rationale was to ensure duties were paid and screen for "bad guys," drugs, weapons, diseased fruit, etc.

[0] https://www.dhs.gov/sites/default/files/publications/privacy...

[1] https://www.eff.org/deeplinks/2016/12/law-enforcement-uses-b...

4
suprgeek 20 hours ago 1 reply      
The crux of the matter is here:

More importantly, travelers are not legally required to unlock their devices, although agents can detain them for significant periods of time if they do not....

and here: The document given to Bikkannavar listed a series of consequences for failure to offer information that would allow CBP to copy the contents of the device. "I didn't really want to explore all those consequences," he says. "It mentioned detention and seizure."

It sounds like CBP is trying to circumvent the need to legally compel PIN revealing by basically illegally detaining citizens until they comply.

This is grounds for a habeas corpus lawsuit - should a citizen really dig their heels in.

5
TomMarius 19 hours ago 2 replies      
When I was a child (in central Europe), the USA was seen as a heaven where everyone would like to live - and I wanted to live there too. Nowadays I'm very happy I live in a republic in the middle of the old continent that is "poor" by the numbers, but much more free.
6
Abishek_Muthian 17 minutes ago 0 replies      
Supposedly the reason being analysed is his South Indian name, 'Sidd Bikkannavar'; being a South Indian myself, I'm curious to know the story behind his name. The part of the name 'annavar' is generally found in interior villages, associated with village gods, but I have never come across anyone named this way; perhaps it was 'Siddarth' that got shortened to Sidd.
7
mindslight 20 hours ago 1 reply      
Especially with a USG-owned device, this seems like it would have been a ripe time to assert one's citizenship for entry and just let them steal the device.

The last time I traveled internationally, I purposely brought only an old laptop. To return, I zeroed the hard drive and physically removed it from the machine so the scum would have pretext to steal less of my property.

For my preparation I was rewarded with absolutely no thuggery, which is how the vast majority of border crossings actually go. That's the insidious thing about the inverted-totalitarian threat model - these specific situations are inherently rare. If they were common, change would easily happen through democratic means. It is only because the majority of people believe that it cannot happen to them that the injustices are allowed to persist.

We really need a reboot for a modern OS model which puts cryptographic access control front and center, with support for secret splitting and the appropriate bottom-up foundation that allows for steganographic-secure machines. I can actually see this plausibly happening for proper personal computers, eventually. Unfortunately the average person's computing device has become a "cell phone" which, even ignoring the inherent pwntivity of Qualcomm integrated chips, is a software ecosystem funded primarily through commercial surveillance.

8
makecheck 19 hours ago 2 replies      
There are at least four facts that should cause stuff like this to be discontinued immediately:

It is not only possible to acquire electronic data after crossing a checkpoint but there are many ways of doing so.

There is no possible way for the contents of a phone to be a threat to TRANSPORTATION security, which is theoretically the only reason someone should care when you're crossing a border, boarding a plane, etc.

Even if it were possible for data itself to be a threat (and it's not), there are many ways to carry data. Someone could hide the data in encrypted form, or even hide it in plain sight by being clever. Also, the information crossing a border doesn't have to be electronic at all; it could be a page in a book.

Even if something suspicious is found, that is not guilt and no charges can be laid so what is the point!?

It's long past time to shut down all of these ridiculous things. There should be a very tiny list of things that border security needs to do, and it should all fit on one hand.

9
swalsh 19 hours ago 1 reply      
Another terrifying part of this is that, due to the nature of networks, when one of your "friends" becomes compromised, your private messages do too. This goes way past national security and 7 countries.

I have former coworkers from Syria, Iran, and Iraq. They're great people, and great programmers. I friended them on Facebook many years ago, and now when one of them is caught at a border it's not just their private messages being raided... it's my own anti-Trump messages too.

This needs to stop here.

10
maaaats 20 hours ago 1 reply      
The former prime minister of Norway was recently detained at the border for previously having visited Iran. Think about that: the former prime minister of a NATO-allied country. The border rules (even before Trump) are whack.
11
safeaim 17 hours ago 2 replies      
For all of you recommending fake accounts, do remember that right before Christmas, the Obama administration signed in new rules[1] giving the NSA leeway to share its collected data with 16 other agencies, including DHS, which CBP falls under. So you may get caught if you try to pull these shenanigans off. US agencies are no strangers to mission creep when it comes to sharing data, as seen recently in this article from The Intercept on how the FBI is building a national watchlist for companies that want realtime updates on whether their employees have committed any crimes while employed. [2]

Two quotes from the NYT article that I feel are important to have in the back of your head when you plan your fake accounts:

Now, other intelligence agencies will be able to search directly through raw repositories of communications intercepted by the N.S.A. and then apply such rules for minimizing privacy intrusions.

But Patrick Toomey, a lawyer for the American Civil Liberties Union, called the move "an erosion of rules intended to protect the privacy of Americans when their messages are caught by the N.S.A.'s powerful global collection methods." He noted that domestic internet data was often routed or stored abroad, where it may get vacuumed up without court oversight.

Let's say CBP gets a tool in a couple of months that lets their border agents search any passenger through the NSA raw data. That search may then turn up your real accounts. Let's say they do this before questioning you, and you then provide them with your fake accounts; that will not look good.

[1] https://www.nytimes.com/2017/01/12/us/politics/nsa-gets-more...

[2] https://theintercept.com/2017/02/04/the-fbi-is-building-a-na...

EDIT: Removed the part about felony, as that was blatantly wrong.

12
joshuaheard 20 hours ago 3 replies      
I don't see what this has to do with Trump's travel restrictions, other than it coincidentally happened at the same time. If the author is trying to imply this correlation is causation, there is no evidence in the article. That being said, no American should have to have his phone searched at the border, even with the stated border exceptions to the Fourth Amendment.
13
coldcode 19 hours ago 0 replies      
No, the US should not be allowed to request or demand access to a US citizen's electronic devices for any reason whatsoever, no matter what Homeland Security says. The whole point of customs was to ensure that goods were not brought into the country to avoid paying duties. The contents of an electronic device cannot be charged a duty. Anything else is beyond their authority. Of course none of this will stop a guy in a fancy uniform from demanding an illegal search anyway and making your life hell. Given the current government this will only get worse.
14
ianderf 19 hours ago 0 replies      
> just over a week into the Trump Administration.

It actually started a long time before Trump. http://travel.stackexchange.com/questions/3363/laptop-search...

15
MR4D 1 hour ago 0 replies      
Interesting thought... since this appears to be the law of the land, and the border is controlled by the Executive Branch, what happens if somebody who works in one of the other two branches is stopped? It seems they would have a good claim that it's unconstitutional, given the Separation of Powers that the constitution provides.

I would think that once a lawyer frames the possibility of this in front of a judge, the law will be struck down.

16
bogomipz 17 hours ago 0 replies      
>"Not only is he a natural-born US citizen, but hes also enrolled in Global Entry a program through CBP that allows individuals who have undergone background checks to have expedited entry into the country."

Incredible. The TSA and DHS are basically "theater of the absurd". Every other disturbing detail aside, this individual actually paid good money to enroll in the Global Entry program, only to be detained and humiliated by this agency.

17
fny 15 hours ago 1 reply      
All the people suggesting a "duress mode" would solve this issue need to wake up.

For as long as it's not illegal to force people to open up their phones at the border, you are not under duress. In fact, the government could even warp the situation to where you'd be committing perjury by showing a fake screen.

Unfortunately, we can't solve this problem through technology: we need to convert the broader public and fight to make the representatives we elect work for the people.

18
bmc7505 20 hours ago 1 reply      
This is why we need plausibly deniable encryption. Does anyone know of an Android ROM, or jailbreak app, that is visually indistinguishable from the lock screen but can be unlocked to an innocuous home screen?
19
rl3 20 hours ago 1 reply      
Using detention as a tool to extort access from people is underhanded at best. Rifling through people's digital devices should not be acceptable.

Why don't Apple/Google/Microsoft/Facebook et al. coordinate with organizations like the EFF or ACLU and throw their weight behind a campaign to stop this bullshit?

If not companies, there's still plenty of extremely wealthy individuals in SV that one would think might care.

20
Havoc 20 hours ago 1 reply      
Note to self - bring burner phone for dodgy countries...like the US.
21
throwaway2439 18 hours ago 1 reply      
I work at a military base and received a background check and everything, not for anything classified, but for a scientific research group stationed on the base. Now, I think I'm afraid to leave the country because I'm not white.

Here I was planning to get Global Entry, but it's clear it doesn't matter a lick.

22
blintz 17 hours ago 1 reply      
> More importantly, travelers are not legally required to unlock their devices, although agents can detain them for significant periods of time if they do not.

What kind of ridiculous technicality is that? Detainment isn't supposed to be a tool to coerce cooperation.

23
BJanecke 17 hours ago 2 replies      
Obviously I appreciate that NASA has some sensitive data, but just for a second also try to appreciate how this would play out for someone who works in the financial industry, where unauthorised sharing of sensitive data is not only a breach of contract with your employer/clients but a crime (insider trading) in almost all countries.

Or imagine how the people who enforce these new regulations can exploit this.

Ask the traveller at the border what their job is; if they're in the financial industry, request that they relinquish their email, find something that could tip you off, and go buy stock. If they don't comply, you lose nothing: you deport them, you do your job.

I understand that you might want to tear this apart, but keep in mind the person that requests your data will often not be the person viewing it, so you are in no position to just "Take some names" and then ensure that your data remains confidential.

This is terrifying.

24
kentbrew 10 hours ago 0 replies      
Some tweets from @Pinboard feel like they want to go here:

"This is a small point but important: don't specify people are US-born; either you're a US citizen or you're not."

"Emphasizing that someone was born in the US as a kind of super-citizenship plays into the hands of people you don't want to be helping."

"The proper term for someone born abroad who doesn't speak English and has a brand-new US passport is: American"

25
bitwizzle 19 hours ago 1 reply      
Now is the time for vendors to consider implementing a duress password. Upon entering your duress password, the user is presented with a fake profile, or perhaps everything could just be wiped. I'm not sure how well this would play out in the real world, but it's one of the best protections I could imagine if you want to carry sensitive data across borders.
26
Mikeb85 20 hours ago 3 replies      
This happens in Canada too, unfortunately (was in the news quite a bit not too long ago, and can be seen in action on that reality TV show about border security).

Best to wipe your phone, and not bring any sensitive documents across borders period.

27
biafra 17 hours ago 3 replies      
What is the worst that can happen to a non-US citizen who does not produce the passwords they demand? Being sent back immediately and all belongings seized? Or only the devices they do not get the decryption passwords for? Detainment for how long? Being charged with what?
28
Esau 14 hours ago 0 replies      
This is bullshit. The U.S. should be required to treat its citizens in a constitutional manner regardless of where they are located.
29
virmundi 21 hours ago 3 replies      
As a white man, I'm a bit concerned about coming back into my own country. The only social media accounts I have are here and reddit. Will the guards at the gate accept that I don't have a Facebook or twitter account?
30
heckubadu 17 hours ago 0 replies      
Here's gov data on device searches, from the ACLU https://www.aclu.org/government-data-about-searches-internat...
31
helpfulanon 17 hours ago 0 replies      
So, for a casual traveler who may have a phone with conversations peppered with unfavorable political views throughout - what is the best security hygiene in this situation?

Anyone have tips or tricks that average people can use, things that maybe don't involve having a separate phone etc?

32
ianderf 19 hours ago 1 reply      
Nobody has mentioned the "Scroogled" short story yet? I really hope this is not the future that expects us. http://www.crimeflare.com/doctorow.html
33
droithomme 20 hours ago 4 replies      
US citizens cannot be prohibited re-entry into the US.
34
xexers 15 hours ago 0 replies      
"You will not share your password (or in the case of developers, your secret key), let anyone else access your account, or do anything else that might jeopardize the security of your account."

https://www.facebook.com/terms

It would be a violation of my facebook terms and conditions to share my facebook password!

35
clamprecht 20 hours ago 1 reply      
> "I asked a question, 'Why was I chosen?' And [the CBP agent] wouldn't tell me," he says.

He should file a Freedom of Information / Privacy Act request to get the reason they chose him.

36
pedalpete 13 hours ago 0 replies      
The Customs agent insisted he had the authority to search the device.

I was thinking about this the other day, as a non-American, and I don't see how that is possible.

I work for the Australian gov't and of course cannot give out passwords or access to any of the devices I carry which belong to the Australian gov't. How do US border controls deal with that situation? They definitely do not have the authority to search a device owned by a foreign gov't, though it also seems they don't have the authority to search the device of an American either.

Thoughts?

37
donquichotte 7 hours ago 1 reply      
This reminds me of the guards on the border between Kazakhstan and Uzbekistan that wanted to check the photos on my phone, presumably because the import of pornographic material into Uzbekistan is strictly outlawed.
38
otaviokz 18 hours ago 0 replies      
Welcome to the USSR as we read about in school...

:(

39
JammyDodger 7 hours ago 2 replies      
Why is this even a story? I've had the same thing happen to me multiple times coming into the UK. I'm also white and British, if that makes any difference.
40
bayesian_horse 19 hours ago 0 replies      
I feel discouraged from visiting the US for the next few years at least.
41
Frogolocalypse 14 hours ago 0 replies      
While I have visited the states many times over the years, and enjoyed the time I spent there, I will never go there again.
42
sova 17 hours ago 0 replies      
Telecommunications devices are so strongly regulated and the laws regarding privacy so systemically ignored that it's a wonder we even petition our rhetorical "ownership" of said stuff. Jokes aside, just because an airport is an "effective border" and also a big police station at the same time, that does not mean you waive your rights.
43
bradneuberg 15 hours ago 0 replies      
If this becomes commonplace across borders, phones should come with a "fake" unlock access code - if you enter it, it drops you into a plain-vanilla setup with some fake contacts, perhaps. It might make sense to create fake email and social media accounts too, then...
44
6nf 16 hours ago 1 reply      
This guy is a US citizen. What are they going to do if he refuses? He has the right to enter the USA. At worst they can confiscate the phone but without a court order I don't see how they can detain him indefinitely. Can they?
45
99_00 9 hours ago 0 replies      
This happens every day in western countries. It happened before Trump and it will happen after.
46
JustSomeNobody 16 hours ago 1 reply      
So, if it was NASAs phone, why not call their legal department before turning over the phone? I carry a work phone and would definitely seek legal counsel before turning it over to anyone.
47
nnd 15 hours ago 0 replies      
Looks like one would need to wipe their phone before traveling to the US from now on. What about laptops though? :/
48
whalesalad 17 hours ago 0 replies      
This happens in other countries too. It's happened to friends of mine entering Canada from the United States.
49
SN76477 18 hours ago 0 replies      
I guess I will start packing away my phone and using my iPad when I travel, since it has less personal data attached.
50
Pica_soO 18 hours ago 0 replies      
Could I have a dual-boot John Doe OS on my phone, presenting the most boring person ever?
51
tn13 18 hours ago 0 replies      
What if you are judged by the password you have chosen? You have a 24-character password? Clearly you want to hide something more sinister.

Why is your password "ResidentEvil3040"? Do you intend not to return after visiting the USA?

52
br_smartass 15 hours ago 0 replies      
Great freedom, huh?
53
sneak 15 hours ago 0 replies      
Use a password manager. Use long random passwords for every site.

Set your phone PIN to something 20 chars and random and text it to your friend. Write your friend's number on a slip of paper but add 1 to each non-area-code digit.

Disable biometrics. Power off phone.

You now no longer have the ability to provide the information they seek at the border.

Call your friend when you get through (from someone else's phone) and change your PIN back.

54
ommunist 18 hours ago 0 replies      
Looks like the US gov is a self-eating snake. Reminds me of the grave incident that happened to Dr Stephen Mann, who was brutally deprived of his reality-augmentation devices at the US border.
55
tn13 19 hours ago 1 reply      
How does this apply when my phone belongs to my employer and has sensitive data on it?
56
ArenaSource 20 hours ago 0 replies      
where did I put my old Nokia 8210...
57
yarou 18 hours ago 3 replies      
Let it happen to Thiel or Musk, see how quickly the procedure will be reversed.
58
jjawssd 21 hours ago 4 replies      
59
kareldonk 19 hours ago 0 replies      
This is what happens in statism. Time to wake up, slaves. Google my name and statism. Read.
60
tn13 19 hours ago 3 replies      
This is really sad. As an immigrant I always thought this was coming. I never fly on Muslim airlines like Etihad or Turkish, even though they are more convenient, and I do not accept friend requests from Muslim people on LinkedIn and Facebook.
61
kevin_thibedeau 20 hours ago 2 replies      
> Since the phone was issued by NASA...

So it was already government property. I don't see the issue here.

62
batbomb 21 hours ago 4 replies      
Conceivably, he had information subject to ITAR regulations, including data about sensors and mirrors. At the very least, he probably had sensor vendor specifications, which are trade secrets and often covered under NDAs.
4
What Vizio was doing behind the TV screen ftc.gov
886 points by Deinos  6 days ago   332 comments top 53
1
mikeryan 6 days ago 5 replies      
So I have a bit of intimate knowledge of this.

Not sure what I can answer but for years my company worked on an Automatic Content Recognition project using tools from a team called Cognitive Networks who were bought by Vizio and makes up the tech that did this. If I understand correctly the founder of Vizio kept this tech for himself in the sale of Vizio.

When developing this we would work directly with Cognitive checking sync'd apps. We knew for a long time that they could see our content in their office while we tested.

Note LG got caught on this about 2-3 years ago and made ACR apps opt-in which pretty much killed it for LG.

AFAIK Samsung never did the exact same thing; a bunch of providers saw the writing on the wall and dumped this sort of technology a few years back. It had some really cool applications for interactive sync-to-broadcast apps, but the privacy concerns killed it for a lot of manufacturers.

2
jasonwilk 6 days ago 12 replies      
It's not worth buying any of these 'Smart' TVs. I don't know whether it is the shoddy developer experience provided by the likes of Samsung / Vizio etc. or the developers themselves (Hulu, I'm looking at you) who do not maintain their apps, which are constantly bug-filled.

I much prefer my old dumb TV that has a Roku plugged into it. Oh yeah, and I know it's not WATCHING ME.

3
jaimex2 6 days ago 9 replies      
I caught my TV doing this and went to war.

For the last two years I have had a service running that floods garbage data back to the collection point from several addresses throughout the Internet.

You're welcome.

4
passivepinetree 6 days ago 6 replies      
The amount of money they made from that data is probably orders of magnitude more than the paltry $2.2 million penalty.

I hate to get all paranoid, but it seems like every day there's news of a company's data being hacked, and what information isn't being hacked is being actively sold.

What can an average citizen do (short of living Ron Swanson-style in a cabin in the woods) to protect their privacy?

5
awfgylbcxhrey 6 days ago 3 replies      
Vizio collected a selection of pixels on the screen that it matched to a database of TV, movie, and commercial content.

I would like to know more about that process. I find it ethically abhorrent, but technically very interesting.

Like, is it grabbing, say, three pixels in constant locations across the screen and matching their color change over time? Is it examining a whole block? Is it averaging a block at some proportional location on the screen?

6
JohnBooty 6 days ago 5 replies      
If nobody's started one yet, I think there would be an audience for a blog/vlog/whatever that reviews non-smart TVs. And/or a place that evaluates which "smart" TVs function acceptably as "dumb" when they are not connected to a network.

Realistically, this would have to include evaluating things beside consumer TVs for use as living room devices, since "smart" features in consumer TVs are nearly unavoidable at this point.

Because I'm going to have to start looking into the world of commercial displays for my next TV, I guess. At least I think those don't have "smart" features. Yet?

7
pdimitar 6 days ago 2 replies      
"Vizio has agreed to stop unauthorized tracking".

As if there's any human-measurable way of confirming this. Yes, they can be forced by a court. And no, the court can't know if they stopped all of the software copies on all TVs, and no, the court can't know whether they re-activate them again in the future.

What actual proof do we have that LG actually stopped? What actual proof can we have that Vizio will stop doing this?

8
criley2 5 days ago 2 replies      
Just further confirms that "Smart" TV's are a ripoff at best and a scam at worst.

Never, ever, ever buy a television described as smart. For any reason at all. All of the solutions are miserably pathetic. All of the solutions are riddled with bugs, design omissions and potentially nasty security zero days. All implementations have little to no update support from major third parties.

And, in many cases from many companies, the units spy on you as aggressively as could be to sell data for marketing purposes.

"Smart" tv's are lose lose lose lose. You pay more, you get inferior software, inferior hardware and ultimately have your privacy abused.

EDIT: To be fair, I love the Vizio dumb TV I just got: a 40" 1080p dumb TV for $167 including taxes this past Black Friday. I got an HDR/4K Roku for an additional $70; this TV is beautiful, and the Roku is impossibly much better in hardware, software and third-party support than any "smart" solution ever could be, and costs far less than the "smart" upgrade!

9
ComputerGuru 6 days ago 2 replies      
A 2.2M settlement is absolute peanuts compared to the mountains of cash they likely made.
10
neotek 6 days ago 1 reply      
"Smart" TVs are the worst TVs I've ever used, I really don't understand the appeal whatsoever.

They're almost universally clunky and slow with horrific UI / UX choices and painfully high latency on simple things like browsing a list of files or even just registering button presses, provide fuck all useful benefit over and above the regular TV experience, are usually running some long-deprecated version of Android which is riddled with security holes that will never get patched - why does anyone actually want this?

A Raspberry Pi running OSMC is everything you could ever want out of a home media setup, it'll work with good old regular "dumb" TVs that can't invade your privacy, with an interface so simple your grandparents can use it, and can be put together for well under $50.

11
fencepost 6 days ago 2 replies      
This sounds like an excellent reason to simply never connect the TV to the Internet and instead connect your own system to it, whether a stick PC or something with a little more oomph.
12
abandonliberty 6 days ago 2 replies      
This is promising and is a good start towards IOT precedent, and perhaps even operating systems of our devices (Windows 10).

- Explain your data collection practices up front.

- Get consumers consent before you collect and share highly specific information about their entertainment preferences.

- Make it easy for consumers to exercise options.

- Established consumer protection principles apply to new technology.

I wonder how many technical teams are scrambling to undo their spying now - though this is a fairly insubstantial fine. I could see the data being potentially worth more than $2.2m

13
diamondlovesyou 6 days ago 6 replies      
What I'm about to say may go against what many of the HN community believes. This isn't an attack on anyone's beliefs; I'm merely expressing my thoughts in an attempt to solicit constructive discourse.

I'mma be honest. I don't understand the repulsion at the possibility of corporation X knowing my personal info (excluding the usual things like bank account info, SSNs, etc.), like my location, search history, etc. To be clear, I'm 10000000% against warrantless (FISA court "warrants" excluded) government access to this information. Here's my reasoning:

* Governments

Have the power to arrest and detain on a whim. Not to mention, use drone strikes.

* Corporations

... Don't. These entities have self-interested incentives to provide tools which are economically productive for users. For example, a smarter smartphone, whatever that may be.

Regarding Vizio, my gripe is that Vizio's goal (for this product at least) is to make a profit producing TVs. So, after the TV is sold, the product is individually "finished" (not considering support stuff). So, then, what other product is the data collection for, and what does this product give me in return for my data? The answer to both is nothing, and not just for Vizio.

Maybe I'm naive.

14
jeanvaljean2463 6 days ago 3 replies      
Huge shocker /s

Pretty sure that Samsung does very similar things. I've been interested in actually capturing outgoing pcap data for this purpose. Looks like I have a new project to add to the pile.

15
silveira 6 days ago 1 reply      
> Consumers have bought more than 11 million internet-connected Vizio televisions since 2010.

> The order also includes a $1.5 million payment to the FTC and an additional civil penalty to New Jersey for a total of $2.2 million.

> Vizio then turned that mountain of data into cash by selling consumers viewing histories to advertisers and others.

$2.2 million / 11 million tvs = $0.20 per tv

16
kevin_b_er 6 days ago 1 reply      
This is why you do not use a smart TV: nefarious data collection on what you watch, and Samsungs are known to demand to show ads or else.

https://news.ycombinator.com/item?id=13585132

https://www.extremetech.com/electronics/241500-samsung-smart...

http://www.techtimes.com/articles/190222/20161227/samsung-sm...

I'm also, for political reasons, suspicious of the FTC's willingness to pursue such cases in the future.

17
zeropoint46 5 days ago 0 replies      
So I actually worked at cognitive networks up until the end of 2014. I've read this thread and thought I would address some things here that didn't seem to get fully concrete answers (in no particular order).

The ACR technology that Cognitive used was/is in Vizio and LG TVs. During the time I worked there we only had a deal to use it actively on Vizio TVs; I guess LG was just testing the waters to see how it'd work. The ACR technology that Cognitive used is based on RGB values from sampled patches on regions of the image. There was no audio fingerprinting used. There were a number of things that would mess up the "recognition". Some of those included the aspect ratio of the content, watermarks from different providers, overlays, and basically anything that modified the size of the original image or obstructed it. For the server infrastructure, we ingested live feeds from the major network providers; these feeds had to be ahead of what TVs were watching by at least 5-10 seconds so we actually had the fingerprint data in our database to be recognized. We would pair the ingested fingerprints with TV scheduling data and voila, we "knew" what you were watching. Now clearly, if we didn't have the content in our database, we had no idea what was being shown on your screen.

What did we use the ACR data for? Well, there were 2 "deals" going on while I was there. One was ratings, something to compete with the likes of Nielsen. Different content providers, distributors, marketing agencies, etc. would want ratings info. Additionally, there were other "data mining" companies that built profiles based off public IP addresses and would want to use our data to enhance and augment theirs. The other application, the one that everybody was after, was "interactive advertising". This would allow us to pop up an HTML5 app/page based on the ACR. So, for example, you're selling a car: your ad comes up, you pop up your app and allow the user to schedule a test drive or look at the car in more detail. The use cases were endless, though.

The ACR technology ONLY worked on content that was viewed through the HDMI ports. For any built-in apps like Netflix or Hulu, ACR was force-disabled. One thing I remember about that is that Netflix is huge about NOBODY getting viewing data/ratings information about Netflix and its users; only Netflix has that data, apparently. One somewhat reassuring thing about disabling the technology: at one point Vizio did notice a bug on one of its TVs where ACR was not being disabled when the user opted out of "interactivity". This was a big deal and we were required to solve it ASAP.

AMA if I missed something.

18
troydavis 6 days ago 2 replies      
It's amazing this was settled for a few million dollars. It's easy to imagine an alternative press release where the settlement was 10x or even 100x larger.
19
csours 6 days ago 0 replies      
>On a second-by-second basis, Vizio collected a selection of pixels on the screen that it matched to a database of TV, movie, and commercial content. What's more, Vizio identified viewing data from cable or broadband service providers, set-top boxes, streaming devices, DVD players, and over-the-air broadcasts. Add it all up and Vizio captured as many as 100 billion data points each day from millions of TVs.

> The order also includes a $1.5 million payment to the FTC and an additional civil penalty to New Jersey for a total of $2.2 million.

20
myrandomcomment 6 days ago 0 replies      
1. Press the Menu button or open the HDTV Settings.
2. Select System.
3. Select Reset & Admin.
4. Select Smart Interactivity.
5. Right arrow to Off.
21
sitkack 6 days ago 1 reply      
How is this not an illegal wiretap? Shouldn't executives and employees at all the involved companies go to jail?
22
segmondy 6 days ago 0 replies      
This is ridiculous. I wish someone with money would create an absolute shitstorm by buying this kind of data, buying data from Facebook, Google, Twitter, internet cable companies, state departments, combining them, deanonymizing millions of users, and dumping them. Until something crazy like this happens, nothing will happen; it needs to be brought to light. Until then, no regulation of data collection on users will ever happen, and we will all be sheep and the product. The crazy thing is, it wouldn't cost that much money. Folks need to wake up and be scared shitless. Everything spies on you: your pacemaker, your fitbit, your car, your TV, your fridge, your watch. 1984 ain't got shit on this! :-(
23
bitmapbrother 6 days ago 0 replies      
Now that we know what they did, the class action lawsuits should follow. If you're concerned about privacy, don't connect your TV to the Internet. Treat it like the dumb screen it's supposed to be and just cast or route content to it.
24
a3n 6 days ago 0 replies      
I wonder how many of these Vizio TVs are in government offices, recording and selling their IPs, pixels, preferences and schedules.

Remember, it's not just broadcast, it's also from DVD players. Anything displayed.

And I wonder who's buying, and then correlating IPs and devices, besides the obvious advertisers. The potential for espionage and extortion is interesting.

"That's an interesting fetish you got there, Mr third or fourth down on the org chart who does the actual day to day running of the agency. It'd be a shame if it was to be ... exposed."

25
rasz_pl 6 days ago 0 replies      
Do you watch cable? Every single set-top box is designed in a way that makes tracking viewing habits trivial, and every cable company does this.
26
tps5 6 days ago 0 replies      
> Consumers have bought more than 11 million internet-connected Vizio televisions since 2010

11 million televisions. 2.2 million penalty. 20 cents per television.

27
guscost 6 days ago 0 replies      
I got a supposedly "smart" TV at a ludicrous price the other day, maybe because there are already surplus units that nobody wants? It's a Roku/Sharp combo thing so there are no numbers on the remote either, but the UI is actually pretty darn good.

And no, I would never connect my cheapo TV to the Internet. Come on.

28
busted 5 days ago 0 replies      
There is a comment on this article basically saying, "I bought a Vizio TV, later my email, bank account, and facebook got hacked, and now I know why." That shows roughly how well some people understand these issues.
29
msmith10101 6 days ago 0 replies      
How did Vizio get caught? Was it a whistleblower? https://www.propublica.org/article/own-a-vizio-smart-tv-its-...
30
gesman 6 days ago 0 replies      
>>The order also includes a $1.5 million payment to the FTC

>>and an additional civil penalty to New Jersey

Read: the FTC and New Jersey decided to make money off consumers too, by charging Vizio a little tax. "Protected by law" consumers got: $0.

31
werber 6 days ago 0 replies      
Are they shielded from a class action suit now?
32
codedokode 6 days ago 0 replies      
The law is not strict enough. Not a single byte should be sent outside without the user's consent, no matter whether it contains personal data or not.

And that would make proving a company's guilt much easier.

33
nikanj 6 days ago 0 replies      
34
hueving 6 days ago 0 replies      
Wow, those punishments are pathetic for sampling private movies you watched (e.g. porn) on your TV and funneling that information off with IP address to advertisers.
35
nojvek 6 days ago 0 replies      
Consumers want the best service at the cheapest price. Producers want to maximize the profit on products and services. Advertisers want the best return on investment for their ad dollars so they also maximise profit.

These are fundamental truths of the market. It's why Google and Facebook are behemoths.

The only way to win the game is precision tracking, addictive services and building good models of customer behavior for advertising.

36
whalesalad 6 days ago 0 replies      
On a similar note, can anyone here speak to the hidden audio signal that is broadcast over the air with things like sporting events?

I noticed it once when Google Now knew instantaneously that I was watching a specific NFL football game and began displaying the score. It felt magical, but after a little research I learned there are hidden frequencies that reveal this information.

37
agotterer 6 days ago 1 reply      
How are fines as little as this supposed to deter future companies from sketchy collection practices? One can only assume they made more than $3.7M selling illegally collected data.

There's no incentive for companies to do better and not be shady. It pays to roll the dice and see if you get caught. If you do just say sorry and pay a small fine.

38
scarface74 6 days ago 0 replies      
If they were aggregating and selling this information to television networks, as a better way of measuring how many viewers a show had, I would be okay with that. It may help keep some of my favorite shows on air. But to sell my individual viewing habits with my IP address? Not okay.
39
calvinbhai 6 days ago 1 reply      
With Vizio and other Dolby HDR-compatible TVs, you'll have to keep them connected if you want firmware updates. I wonder which TV is ideal for purchase, now that Samsung and Vizio have been caught hoodwinking their customers.
40
knodi 6 days ago 1 reply      
$3.7M is not enough of a fine. The fine should have been $50M plus.
41
mathgeek 6 days ago 0 replies      
I wonder how many meetings were called at other manufacturers when this went public, both to check on what they themselves were doing, and to make plans to stop doing it where relevant.
42
chinathrow 6 days ago 0 replies      
I wonder how you can work on such a setup as an engineer with morals, collecting 100B data points _daily_ without telling your customers...
43
noonespecial 6 days ago 0 replies      
Well, looks like old Orwell got the Telescreen just about right. "Facecrime" turned out to be something a little different though...
44
myrandomcomment 6 days ago 1 reply      
So there was a setting to turn their tracking off if you dug into the menus. I turned it off on my set. I hope it covered that feature.
45
usgroup 6 days ago 0 replies      
May be worth noting that AV companies and privacy guards also sometimes operate on this model, e.g. Avast and Ghostery.
46
dewiz 6 days ago 0 replies      
I wonder if Comcast does the same; they could even cache user interests locally, waiting for a connection to be available.
47
kelvin0 6 days ago 0 replies      
Vizio: Clash of the Titans, when capitalism is at odds with individual privacy
48
skc 6 days ago 0 replies      
The cynic in me believes that Vizio are probably just a few years too early.
49
amq 6 days ago 0 replies      
Want a dumb TV? Disconnect it from the internet and use your Roku etc.
50
jlebrech 6 days ago 0 replies      
This sounds like reverse DRM: they can figure out if you're watching pirated content, then send you a bill.

This cheap unknown brand doesn't look so cheap now, does it?

51
daveheq 6 days ago 0 replies      
Yah but Trump's going to get rid of the FTC and regulations that get in the way of business, such as spying on you and selling your watching habits and personal info to a bunch of other companies.
52
firefoxd 6 days ago 2 replies      
I have a Vizio TV; can I disable tracking?
53
hoodoof 6 days ago 0 replies      
Ironic that the government should be so concerned about spying.
5
Introducing Keybase Chat keybase.io
982 points by aston  4 days ago   191 comments top 48
1
malgorithms 4 days ago 17 replies      
OP here! I had to trim the post down for brevity, but I thought the HN community in particular might be interested in the API side of things.

Undocumented in the post: you can invent channels for app-to-app communication from the JSON API. For example, it's possible with Keybase chat to have a program posting encrypted messages for another person or program, without cluttering up the visual chat interface.

Also - to test chat we've cut the invitation requirement. You should be able to try the app without anyone inviting you.

2
cgijoe 4 days ago 8 replies      
Warning to all OS X users: The Keybase Chat desktop app does a number of shady things that ultimately led me to delete it from my system. I am writing this purely as a public service announcement, to those who worry about installing unknown apps on their Macs. The Keybase Chat app:

(1) Requires administrator privileges to launch on first run, to install a "Helper Tool". The app does not explain what this tool does or where it lives, and neither does the Keybase website.

(2) Installs a login (startup) item without asking permission, so Keybase will auto-launch on every boot.

(3) Installs a Finder Favorite in your Finder sidebar, without asking permission.

(4) Installs /usr/local/bin/keybase without asking permission.

(5) Installs /Library/PrivilegedHelperTools/keybase.Helper without asking permission.

(6) Installs /Library/LaunchDaemons/keybase.Helper.plist without asking permission.

(7) Installs ~/Library/LaunchAgents/keybase.* (3 files) without asking permission.

(8) Runs permanently in your menu bar, even if you quit the main app.

These things may all have good reasons and be benign, but they are too shady for me, so I deleted the app and all the files listed above. Apologies to the devs.

3
bgentry 4 days ago 2 replies      
This really does look great.

Edit: since I haven't been running Keybase for the past 2 weeks, I missed the fact that they disabled continuous background proof verification due to my concerns: https://github.com/keybase/keybase-issues/issues/2782#issuec...

Good on them! The rest of this comment is not actually applicable anymore and you should give Keybase Chat a try :)

Original comment:

-----------------

My biggest concern with it, however, is that the Keybase client is now frequently verifying all my contacts' proofs. Many of these verifications are for personal websites and are done over port 80 or involve DNS lookups that my contacts control.

This leaks a great deal of metadata over the network about who my contacts are, and makes it easy for a hostile network to determine who I am if I'm running the Keybase app.

I reported this on GitHub when I noticed it and have unfortunately not been regularly running the Keybase app since: https://github.com/keybase/keybase-issues/issues/2782

I hope they decide on some sort of fix for this. They could at least not do verifications over insecure connections and arbitrary 3rd party DNS lookups without my explicit approval.

4
x1798DE 4 days ago 1 reply      
The continued fragmentation of chat into walled gardens is really annoying. I feel like Matrix has done a good job not only designing their protocol to be open and federated from the start, but also in that they are actively working to provide bridges to other services. It would be really nice if keybase would work to federate with Matrix servers.

(Link to Matrix service, since they have an un-googleable name: https://matrix.org. The only working client that I know of at the moment is https://riot.im)

5
Meph504 4 days ago 4 replies      
Umm, has anyone read the license for this application?

https://keybase.io/docs/terms

When providing Keybase or the Service with content, such as your name, username, photos, social media names, data or files, or causing content to be posted, stored or transmitted using or through the Service ("Your Content"), including but not limited to the Registration Data and any other personal identification information that you provide, you hereby grant to us a non-exclusive, worldwide, perpetual, irrevocable, royalty-free, transferable (in whole or in part), fully-paid and sublicensable right, subject to the Privacy Policy, to use, reproduce, modify, transmit, display and distribute Your Content in any media known now or developed in the future, in connection with our provision of the Service. Further, to the fullest extent permitted under applicable law, you waive your moral rights and promise not to assert such rights or any other intellectual property or publicity rights against us, our sublicensees, or our assignees.

That's a bridge too far, and someone needs to dial this back.

6
Jaruzel 4 days ago 2 replies      
Why do all new chat clients look like Slack? We're rapidly moving towards a monoculture of chat UIs.

I'd like to see a return to less intrusive chat apps, with more minimal UIs that don't take up most of the desktop real estate. The most common screen resolution out there? 1366x768. I kid you not. IRC has its many flaws, but the clients still understood the meaning of good information density.

People seem to forget that chat is a communication medium first and foremost, and not a multimedia based experience.

7
problems 4 days ago 2 replies      
I disagree with the idea that allowing backup/restore of conversations defeats forward secrecy. There's a big difference between decrypting past conversations and decrypting chat logs. I have full control over my chat logs: I can choose to delete them, not store them with some people, encrypt them with a different password and rotate them monthly, etc.

Even Signal and other apps store all your messages on your device, optionally locally encrypted.

Forward secrecy is so that you can't just steal the key and network traffic and get _all_ past messages, regardless of whether or not I wanted to archive them. And getting my live key doesn't mean getting all my archived logs.

8
fiatjaf 4 days ago 0 replies      
> What if we're living in a simulation?
>
> Keybase offers no guarantees against sophisticated side-channel attacks by higher-level entities.

ahahah, that's great!

9
primigenus 4 days ago 3 replies      
Hey malgorithms, this is great! I check the Keybase website every month or so for updates and discovered yesterday that there's a new logo, replacing the old thieving dog/ferret/raccoon with what appears to be a person's head with their hair in a bun holding a key. Can you give some background on the thinking behind this logo redesign? (Sorry it's not a question about chat, per se)
10
alexkadis 4 days ago 2 replies      
Is it technically possible for Signal/Whatsapp to use Keybase keys in lieu of phone numbers? If so, how practical would it be to add this as an option?
11
coffeemug 4 days ago 0 replies      
That looks spectacular, can't wait to try it tonight. Hope this software can overcome the network effects of existing systems. End-to-end encryption is really, really important, but I feel like the real game changer is being able to instantly chat with anybody online by just typing in their username.
12
chias 4 days ago 0 replies      
This is fantastic! I've been playing with it for a bit, and I'm loving it.

Question: since (encrypted) chat history is stored on keybase servers, does my chat history count against my KBFS quota? If so, how do I clear it out? If not, how do you mitigate against someone building a pseudo-FS on top of chat messages for free unlimited storage?

13
adrianpike 4 days ago 0 replies      
Wow, this is awesome! A colleague and I were just recently discussing how badly we need "encryption-first" chat software - not tools that sell encryption as a feature, but tools that make it _the_ feature.

Great work KB team!

14
hollander 4 days ago 1 reply      
This looks great, but if you want this to work, you need Android and iOS support. When is that going to happen? Is that going to happen?
15
pfraze 4 days ago 0 replies      
Thoughts from skimming the post:

Using all of the associated accounts across services to do user lookup is really quite cool, and the CLI integration and public broadcasts look very fun. Nice work there.

Multi-device key management is one of the hardest tasks for end-to-end, but that's been taken seriously from the beginning by keybase, and I'm leaning toward optimism. The UX decisions for forward secrecy seem pretty reasonable as well.

16
Nadya 4 days ago 0 replies      
It'll be interesting to see if I ever receive messages from my fellow HN users now that it's a bit easier to do so without navigating my website to find my email address. I doubt it, but still.

I'll give it a run when I get home today. Since few of my contacts use Keybase, or would have any interest in Keybase, this is less "Wow! Awesome!" for me than the release of KBFS was - but it's still pretty cool.

I love how Keybase is expanding to be more than just a collection of "internet personas verified by a PGP signature" and am interested in what else you guys may have in the works.

E: Updated my profile info to make mention of Keybase Chat. And I don't even have it yet. ;)

17
Walkman 4 days ago 4 replies      
This is the last time I spam a Keybase thread with invite codes :)

https://keybase.io/inv/6953921e2f

https://keybase.io/inv/637bfd5d42

https://keybase.io/inv/20be67f672

18
exabrial 4 days ago 1 reply      
I love keybase. I am waiting for a password manager solution from them
19
ryanmarsh 4 days ago 0 replies      
The "forgot your password" flow on keybase.io explicitly tells you whether or not the email address you enter has a valid account. Is this ok?
20
philip1209 4 days ago 0 replies      
This could be a great way to securely alert Github project maintainers about security vulnerabilities.
21
bballard1337 3 days ago 1 reply      
This is the reason I am so excited about Keybase. I can't comment on the integrity of the software but the vision is there. All encrypted everything is where I see the future of the internet.

Does anybody know if they are working on a mobile app for at least the chat system? I don't necessarily need the whole desktop app on the phone but encrypted chat would be fantastic. (Currently using Signal but would be open to using everything keybase in the future)

22
homakov 4 days ago 0 replies      
How did you manage to make Keybase.dmg 72MB when any Electron app is 120MB+?
23
johnflan 4 days ago 0 replies      
It seems that this app and Slack are hugely influenced by the iPad style of app design. Why can't we have a window per chat session on the desktop, and why do desktop users get wrapped apps? Is this an indication of the lack of perceived importance of the desktop?
24
martyvis 4 days ago 1 reply      
111MB for the setup download (at least on Windows)?! What's in it apart from a chat app and encryption library?
25
mxuribe 4 days ago 1 reply      
Sorry, I'm a little confused: is this a chat app client that still requires a central server to route messages around?
26
SamPatt 4 days ago 0 replies      
I don't use Keybase on a regular basis yet but every time they announce something new I check it out again, and every time I'm impressed. I'm not sure what it will take for me to make the switch and use it regularly but if they keep this up I have no doubt it'll happen.
27
rabidrat 4 days ago 3 replies      
I would love to have a linux curses client for encrypted chat. Something that irssi can connect to, perhaps?
28
zokier 4 days ago 1 reply      
I'm not really sure about Keybase accumulating more and more services instead of focusing on integrating to existing ones. One of the initial attractions of Keybase (to me at least) was how the system was very simple, transparent, and not really dependent on keybase.io.
29
EGreg 4 days ago 0 replies      
Hey Keybase, I have a question for you guys:

What if we launch our own apps and websites that would allow users to claim they are X on website Y? Do you have a way for them to use the public/private key pair from their Keybase clients to sign these claims?

I do not necessarily want these claims to be publicly available to everyone on website Y. I want them to be privately transmitted between website A and B, so people can't be tracked between domains.

30
kseistrup 4 days ago 0 replies      
Shameless plug: Before the Keybase [GUI] Chat was invented I hacked together this simple text-based client that uses twtxt formatted files to store private chats between two keybase users:

https://github.com/kseistrup/kbmsgr

PS: It doesn't use the Keybase chat API, and it never will.

31
IanCal 4 days ago 0 replies      
Argh! Please remove the typing animation! It's flipping between one, two and three lines, jerking the whole screen around on my phone.
32
woodruffw 4 days ago 0 replies      
Awesome! I've been using this for the past few weeks on and off, and the user experience is very pleasant.

Now that I know about the JSON API for chatting, I'll have to add it to my unofficial Ruby interface[1].

[1]: https://github.com/woodruffw/keybase-unofficial

33
daurnimator 4 days ago 1 reply      
Why doesn't this seem to be in a release? The last release of the client was back in October: https://github.com/keybase/client/releases/tag/v1.0.18
34
Splendor 4 days ago 1 reply      
So how does this compare to Slack's free tier? Is there a user limit, channel limit, message history limit, etc.?
35
amingilani 4 days ago 0 replies      
If you need an invite, hit me up on Twitter! If you're trying to find a random person on the internet to chat with and test this out, hit me up on Keybase! :)

Use my HN username.

36
kristianp 4 days ago 1 reply      
Why does the page have 112px of top padding? Seems like a waste of space.

 body { overflow-x: hidden; padding: 112px 0 50px; }

37
perrohunter 4 days ago 0 replies      
Do you think this could end up the same way OpenID did?
38
Dangeranger 4 days ago 0 replies      
Saw this yesterday in the app, tried to use it and it failed.

Works like a charm today.

This should be very nice for ad-hoc secret exchange.

39
james_pm 4 days ago 0 replies      
The --public broadcast messages are interesting. Is a Twitter-style service part of the plans?
40
mikaelf 4 days ago 0 replies      
Played around with the chat in beta and it's super neat! Keybase really is keybae.
41
brett40324 4 days ago 0 replies      
Key gen took less than two minutes from my phone - all around great UI signing up!
42
warcode 4 days ago 0 replies      
I tried to set the proxy setting, but it still does not work.
43
wslh 4 days ago 1 reply      
If I don't have a keybase account, can I use this app?
44
lightning1141 4 days ago 0 replies      
I think this tool is very cool.
45
fiatjaf 4 days ago 1 reply      
What is this paper key? I don't want a paper key! Now I have to write this and keep it in my pocket? No!
46
rbcgerard 4 days ago 0 replies      
iPhone app please! Until I can use it on my iPhone it's not that useful...
47
misiti3780 4 days ago 0 replies      
This looks like a great codebase! Thanks so much for open-sourcing this.
48
lewisl9029 4 days ago 1 reply      
This looks absolutely amazing!

Any plans for a web client for chat?

6
What programming languages are used most on weekends? stackoverflow.blog
781 points by minaandrawos  5 days ago   288 comments top 57
1
slg 5 days ago 7 replies      
I'm surprised there was no mention in the blog post or the comments so far of the homework factor. It isn't just personal side projects that people are working on over the weekend. I am betting the relative percentage of CS students on the site is also much higher on the weekend. Tags like assembly, pointers, algorithm, recursion, class, and math are all rather vague. Those topics are all discussed at length in CS classes, but if you are working on a real-world project in those fields, odds are you will tag it with the more specific technology you are using rather than the abstract theory behind it.

EDIT: On second look, Python, C, and C++ are also the go-to languages for CS classes (along with Java, but that is also a big enterprise language, unlike the other three). Almost this whole list seems to be schoolwork-related.

2
wcbeard10 5 days ago 8 replies      
The funnel shape of the scatter plot immediately reminded me of an article on the insensitivity to sample size pitfall [0], which points out that you'll expect entities with smaller sample sizes to show up more often in the extremes because of the higher variance.

Looks like the tags with the biggest differences exemplify this pretty well.

[0]- http://dataremixed.com/2015/01/avoiding-data-pitfalls-part-2...
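
(A quick way to see the pitfall: simulate tags that all share one true weekend fraction and differ only in size. A sketch; every number here is invented:)

  import numpy as np

  rng = np.random.default_rng(42)
  p = 0.25  # identical "true" weekend fraction for every simulated tag

  for n in (100, 10_000, 1_000_000):  # questions per tag
      # 1000 hypothetical tags of this size; count their weekend questions
      fractions = rng.binomial(n, p, size=1000) / n
      print(f"n={n:>9}: observed weekend fraction ranges "
            f"{fractions.min():.3f} .. {fractions.max():.3f}")
  # Small tags span a much wider range despite identical behavior, so they
  # dominate the extremes of any "biggest difference" list.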

3
wimagguc 5 days ago 2 replies      
One way I use Stack Overflow's dev stats is to make educated guesses about how easy it will be to find developers in 2-3 years' time to maintain now-greenfield projects. Does Ruby seem to go down while Python is in steady growth? Let's move away from Rails. Swift is picking up steam? It's safe to switch from Objective-C. This dataset seems to be just fantastic for that.
4
netinstructions 5 days ago 0 replies      
Somewhat related, if you're looking to compare tags from StackOverflow, I made this site[1] a couple years ago to quickly visualize how many questions and answers are out there for given tags.

I use StackOverflow tag count as well as Google Trends and GitHub star count to get a rough feel for how much people are using certain things, such as version control software[2], databases, or view engines in Express[3].

[1] - http://www.arepeopletalkingaboutit.com/
[2] - http://www.arepeopletalkingaboutit.com/tags/cvs,svn,git,perf...
[3] - http://www.arepeopletalkingaboutit.com/tags/ejs,pug

5
Impossible 5 days ago 0 replies      
The answer might be as simple as "people tend to work on games on the weekend", either as hobby projects or that professional game developers work weekends more often, skewing the weekend results away from serious enterprise apps. This would explain both the rise in low level languages but also things like OpenGL, Unity3D and Actionscript 3. It doesn't explain Haskell, of course, but I think the Haskell explanation in the article is accurate.
6
brink 5 days ago 4 replies      
I don't think the number of questions asked correlates with which languages are used the most. My weekends are mainly Java, but I don't need to post on Stack Overflow because all of my questions have already been addressed.
7
thomasfoster96 5 days ago 1 reply      
> We defined weekends using UTC dates...

...which means quite a few Saturday mornings in Asia have been counted as weekdays and many late Friday nights in the Americas have been counted as weekends.

It would be great if StackOverflow had information on the local timezone that the question was asked in. Seeing Mon-Fri 9-5 vs other times would be interesting.
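
(A concrete example of the boundary effect, using an invented timestamp:)

  from datetime import datetime, timezone, timedelta

  # Friday 21:00 in San Francisco (UTC-8) is already Saturday 05:00 UTC,
  # so a UTC-based cutoff files this Friday-night question under "weekend".
  pacific = timezone(timedelta(hours=-8))
  asked_local = datetime(2017, 2, 10, 21, 0, tzinfo=pacific)
  asked_utc = asked_local.astimezone(timezone.utc)

  print(asked_local.strftime("%A"))                 # Friday
  print(asked_utc.strftime("%A"))                   # Saturday
  print("UTC weekend:", asked_utc.weekday() >= 5)   # True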

8
Xeoncross 5 days ago 1 reply      
I can see room for lots of false assumptions when reading this data.

What if Haskell never changes the rate at which it is discussed - but all the entry-level programmers doing the 9-5 job go away on the weekends, helping Haskell to be "louder"? What if the people with homework ask more on the weekend than during the week?

What if certain developers don't post questions tagging a language - but rather tagging an algorithm knowing they can implement it in whatever language they need?

What if Haskell only works on the weekend?

9
ThePhysicist 5 days ago 3 replies      
Not many Sharepoint enthusiasts out there, it seems.
10
tempestn 5 days ago 0 replies      
One thing that caught my eye is that at least of the tags included in their scatter plot, there appear to be more weekend searches than weekday searches on average overall, especially for the most popular tags. (And note that the X axis is logarithmic, so those will have a much larger effect on total searches.) I wouldn't have expected that. Perhaps weekdays are more geared toward 'getting things done', so weekends are when people have time to learn.
11
harry8 5 days ago 4 replies      
Microsoft should really take note of that. That's a huge tick on their woefully uncool meter. Developers, developers, developers, developers don't want to use Microsoft gear unless they're being paid, it would seem.
12
tedmiston 5 days ago 1 reply      
It might also be interesting to see which tags are used more in the early mornings or evenings vs during the workday.

Edit: I hadn't seen Kaggle before today, but it looks very easy to hack on the SO data set [1] with a Jupyter notebook.

[1]: https://www.kaggle.com/stackoverflow/stacklite

13
nirv 5 days ago 2 replies      
Happy to see Python-3.x taking over old Python.
14
AnimalMuppet 5 days ago 5 replies      
No big surprise that nobody works on sharepoint or XSLT as a weekend hobby.
15
jlas 5 days ago 1 reply      
Also interesting to see the weekend dips in google trends, e.g. Java: https://www.google.com/trends/explore?date=today%201-m&q=jav...
16
coretx 5 days ago 2 replies      
Not even a single Rust mention. Hmmm. Not sure if I'm a weekend-idiot or simply ahead of the crowd. Let's hope it's because of Rust developers both strongly disliking and never experiencing _stackoverflows_. ;+)
17
dmozzy 5 days ago 0 replies      
Also somewhat related. I made this site to show you the popularity of programming languages on Stack Overflow by countries and US states: http://soversus.com
18
espeed 5 days ago 1 reply      
Exploring StackOverflow Data - Evelina Gabasova (2016) [video] https://www.youtube.com/watch?v=qlKZKN7il7c
19
lngnmn 5 days ago 0 replies      
Python? If not, it should be, because there is no better prototyping/bootstrapping language, and definitely there is no better culture than the one that emerged around this language.

Only the Scheme of old days could be compared as having a similar balance of features and a culture of careful attention to detail, which, basically, defines a craft approaching (turned into) an art.

20
quadcore 5 days ago 0 replies      
Oh that's a wonderful idea. Someone should - continue to - study the differences between weekday and weekend hacking. Programming languages, but also methods, productivity, value for customers, etc.

The way I would like the world to be is one where weekend hacking has way better methods, languages, productivity and value for customers.

21
wslh 5 days ago 0 replies      
Sidenote: I always find it limiting that you cannot use Haskell on the weekend on your Android or iOS mobile phone or tablet. I think this is a natural environment for learning.

There are some Haskell apps in the app store, but they are not official, or they use tricks like executing the code on a remote server.

22
monokrome 5 days ago 1 reply      
Seriously, though, if you are going to post the following thing in your article then just reconsider:

"Warning: the following section involves googling usernames and reading the first page of results for the people involved. This may be unethical. I apologize in advance."

Obviously your apology means nothing if you are doing it anyway.

23
anotheryou 5 days ago 7 replies      
"actionscript 3" what? o_O

I thought this was over

24
hellofunk 5 days ago 0 replies      
My weekends are usually pretty rough and unstable, so I went with this one several months ago and it fits well into my life style: https://en.wikipedia.org/wiki/Brainfuck
25
dvnguyen 5 days ago 1 reply      
Coincidentally, I've just read several chapters from the Learn You a Haskell book. I couldn't write any serious Haskell project in the near future, but learning it has been so much fun. Not surprising that many other programmers are touching it on weekends.
26
BinaryIdiot 5 days ago 1 reply      
Wow, I'm surprised to see ExtJS on a list of "most used" anything. I mean, don't get me wrong, it's great if you want to quickly prototype something that uses data, but for a great UX / real application it's dreadful to use, IMO.
27
problems 5 days ago 1 reply      
The curve for Selenium on this graph is the weirdest thing:

https://i.stack.imgur.com/LUQei.png

Anyone want to speculate why this may be?

28
c3534l 5 days ago 1 reply      
Everyone is pointing out potential problems with the methodology, but really this matches up pretty well with experience. SQL, MS Office stuff, boring things like logging, and testing all show up as being about work. Haskell is actually infamous for having an evangelical following and limited real-world uses. And, yeah, your C-like languages work well for both work and side-projects. This all makes perfect sense. The only thing I'm surprised about is assembly being for pleasure (maybe hardware people?), and web stuff being as versatile.
29
rodionos 5 days ago 0 replies      
The number of new questions tagged by mainstream language has been relatively stable (python) or decreasing (java, js, php) in 2016.

http://apps.axibase.com/chartlab/c1acecc0/3/#fullscreen

If anything, it might suggest that the knowledge base coverage is reaching a plateau. It would be interesting to watch how many questions are tagged as duplicates. The ratio is probably increasing.

30
kaghaffa 5 days ago 0 replies      
Visualizations that are more accessible would be great. I have a red-green deficiency, so I can't differentiate the lines on the charts for the life of me...
31
grandalf 5 days ago 1 reply      
Wouldn't StackOverflow questions equate to confusion about the language rather than use, or at least use heavily weighted by confusion?
32
doggydogs94 5 days ago 0 replies      
In general, the weekday questions are dominated by enterprise products that cost money. The weekend questions are mostly about stuff that is free.
33
kc10 5 days ago 0 replies      
This doesn't necessarily reflect the most used languages. Probably these are the languages that developers are trying to learn, prompting them to post questions. Other languages such as C# and Java have reached a certain maturity where people may not have a lot of questions, so naturally the activity decreases; it doesn't mean a lot of people don't use these languages.
34
kriro 5 days ago 1 reply      
I'd be more interested in GitHub commits on weekends vs. weekdays, as that is likely to be a better indicator of side projects (due to the "homework factor"). Or maybe GitLab commits or private GitHub commits, since public GitHub commits are FLOSS and thus likely to include more side-project commits than other data sets.
35
dlandis 5 days ago 1 reply      
I didn't see the word "legacy" used in the article, but I think a lot of the stuff in the weekday column is exactly what people associate with that word. I mean, SOAP and XSLT!? That is straight from the darkest era of bloated J2EE apps.
36
yazinsai 5 days ago 0 replies      
Is the number of questions asked a good determinant of language popularity?

One would think the "ideal" programming language would be so intuitive that it would have a much lower questions asked to usage ratio.

The two might not be that strongly correlated.

37
wtvanhest 5 days ago 0 replies      
It may be better to group like languages/frameworks and compare them over time:

Django vs rails for example.

Comparing languages heavily used by academics may skew things, since academics often work on the weekends. The same goes for game development languages vs. webapp languages.

38
ziikutv 5 days ago 1 reply      
Assembly is too vague of a tag
39
alkonaut 5 days ago 0 replies      
TL;DR: On weekends people either do homework, if they are students, or play with the languages they would like to work with, if they work in some SharePoint salt mine during the week.
40
tzury 5 days ago 0 replies      
Just quoting a comment from the page itself:

 According to the infographic, most people spend their week struggling to get a document out of SharePoint; whereas on weekends, they write cool algorithms in Haskell, C, C++11 or assembler. This is a surprisingly accurate reflection of the situation on the ground, from what I hear from people around me. Now the question is: how can we swap the weekend for the week, so that more people can do more of the cool stuff?
http://disq.us/p/1fzpzr5

41
sAbakumoff 5 days ago 0 replies      
Relevant research : StackOverflow questions referenced in the source code hosted on Github http://sociting.biz
42
sytelus 5 days ago 0 replies      
The surprising thing is actually that a significant number of people seem to be spending weekends preparing for interviews [1].

[1] For example, the "recursion" and "algorithm" tags.

43
minaandrawos 5 days ago 2 replies      
I was kinda surprised that Go (golang) wasn't up in the list
44
Musaab 5 days ago 0 replies      
C# doesn't get the love it deserves because everyone loves to rag on Microsoft. I think Bill Gates should be in prison, but C# deserves better :)
45
luckystartup 4 days ago 0 replies      
This just makes me so glad that I don't have to spend my weekdays working with SharePoint, SOAP, Excel, VBA, and Internet Explorer.
46
poorman 5 days ago 0 replies      
I'm going to assume Haskell has the most asked questions because it's one of the most confusing languages.
47
vonnik 4 days ago 0 replies      
nothing against haskell, but this is funny:

http://classicprogrammerpaintings.com/post/143847262458/hask...

48
whatever_dude 5 days ago 0 replies      
As someone who just started using Assembly on my weekends, I find these results shocking.
49
cdnsteve 5 days ago 0 replies      
C'mon, nobody is doing SOAP work on the weekends!? ;)
50
joelthelion 5 days ago 0 replies      
Not looking good for Microsoft...
51
officialjunk 4 days ago 0 replies      
Without factoring in time zones, this contains some Friday and Monday usage, no?
52
calibas 5 days ago 0 replies      
Javascript, all day every day.
53
deepnotderp 5 days ago 0 replies      
Python.
54
codesushi42 5 days ago 7 replies      
55
legostormtroopr 5 days ago 2 replies      
Given the current user revolt over there regarding recent political shenanigans by the mods (and CEO), I'd be keen to see how their trends track over the next few years.

http://meta.stackoverflow.com/questions/342903/well-always-e...

56
taylorh140 5 days ago 1 reply      
I am tired of people confusing programming languages and domain-specific languages. SQL is not Turing complete.
57
meerita 5 days ago 0 replies      
I do a lot of HTML/CSS(Sass) using Middleman. Sometimes, I do Ruby.
7
RethinkDB versus PostgreSQL: my personal experience sagemath.com
811 points by williamstein  3 days ago   325 comments top 34
1
mglukhovsky 3 days ago 1 reply      
I appreciate the detailed analysis. A few comments:

> This post is probably going to make some people involved with RethinkDB very angry at me.

Actually, our community has always felt the opposite. Performance and scalability issues are considered bugs worth solving. That may have been the reaction of one or two community members, but that doesn't represent our values at all.

> A RethinkDB employee told me he thought I was their biggest user in terms of how hard I was pushing RethinkDB.

This may have been true (at the time) in terms of how SMC was using changefeeds, but RethinkDB is used in far more aggressive contexts. Here's a talk from Fidelity about how they used RethinkDB (for 25M customers across 25 nodes): https://www.youtube.com/watch?v=rm2zerSz6aE

SMC did seem to uncover a number of surprising bugs along the way: I would describe it as one of the more forward-thinking use cases that pushed the envelope of some of RethinkDB's newest features. This definitely came with lots of performance issues to solve along the way. I appreciate William's tenacity and patience in helping us track down and fix these along the way.

> In particular, he pointed out this 2015 blog post, in which RethinkDB is consistently 5x-10x slower than MongoDB.

It's worth pointing out that this particular blog post raised serious questions about its methodology, and recent versions of RethinkDB included very significant performance improvements: https://github.com/rethinkdb/rethinkdb/issues/4282

> Even then, the proxy nodes would often run at relatively high cpu usage. I never understood why.

I'd have to double-check with those who are far more familiar with RethinkDB's proxy mode, but it's because the nodes are parsing and processing queries as well, which can be CPU-intensive. They don't store any data, but if you use ReQL queries in a complex fashion (especially paired with changefeeds) it's going to require more CPU usage. We generally recommend that you run nodes with a lot of cores to take advantage of the parallelized architecture that RethinkDB has. This can get expensive if you aren't running dedicated hardware.

> The total disk space usage was an order of magnitude less (800GB versus 80GB).

RethinkDB doesn't yet have compression (https://github.com/rethinkdb/rethinkdb/issues/1396). Between this fact and running 1/3 the number of replicas, the reduced disk usage is not surprising.

> I imagine databases are similar. Using 10x more disk space means 10x more reading and writing to disk, and disk is (way more than) 10x slower than RAM

This isn't necessarily true, especially with SSDs. RethinkDB's storage engine neatly divides its storage into extents that can be logically accessed in an efficient fashion. This is particularly valuable when running on SSDs, which are fundamentally parallelized devices. RethinkDB also caches data in memory as much as possible to avoid going to disk, but using more disk space doesn't immediately translate to lower performance.

One other interesting detail: since RethinkDB doesn't have schemas, it stores the field names of each document individually. This is one of the trade-offs of not having a schema: even with compression, RethinkDB would use more space than Postgres for this reason. (This also impacts performance, since schemaless data is more complicated to parse and process.)

> Not listening to users is perhaps not the best approach to building quality software. [referring to microbenchmarks]

I think William may have misinterpreted the quote he describes from Slava's post-mortem. Slava was referring to benchmarks that don't affect the core performance of the database or production quality of the system, but may look better when you run micro-benchmarks: https://rethinkdb.com/blog/the-benchmark-youre-reading-is-pr...

We have always had an open development process on GitHub to collaboratively decide what features to build, and what their implementation should look like. I'm not certain what design choices William is suggesting we rejected. One only has to look at the proposal for dates and times in RethinkDB to see how this process and open conversation unfolds with our users: https://github.com/rethinkdb/rethinkdb/issues/977

> Really, what I love is the problems that RethinkDB solved, and where I believed RethinkDB could be 2-3 years from now if brilliant engineers like Daniel Mewes continued to work fulltime on the project.

RethinkDB development is proceeding after joining The Linux Foundation, despite the company shutdown. We believe that with a few years of work, RethinkDB will continue to mature as a database and reach Postgres-level stability and performance. We're exploring options for funding dedicated developers long-term as an open-source project.

My thoughts: whatever technology you end up picking is going to have tradeoffs depending on your use case (and the maturity of the technology) and it's going to come with baggage. That's true of Postgres, MongoDB, RethinkDB, any programming language you choose, any tools you pick. If you're willing to carry that baggage it can be worth it: especially if it gives you developer velocity or if the problem you're solving is particularly well-suited to the tool.

Pick the technology that will have the least baggage for your problem. I often recommend Postgres to people, despite being one of the RethinkDB founders. Pragmatism wins over idealism, every time.

2
dhd415 3 days ago 6 replies      
I think there are several useful points here:

1) It's rare to have enough insight into the internals of a particular datastore to accurately predict how it will perform on a particular workload. Whenever possible, early testing on production-scale workloads is essential for planning and proofs of concept.

2) Database capabilities are a moving target. E.g., the performance improvements to pgsql's LISTEN/NOTIFY are essential to its ability to handle this particular workload. In previous jobs, I've had coworkers cite failed experiences with 15-20 year-old databases as reasons for not considering them for new projects. Database tech has come a long way in that time.

3) Carefully-tuned RDBMSs are more capable than many tend to admit.

3
eikenberry 3 days ago 5 replies      
IMO the main advantage for RethinkDB is its HA story. Last time I had to manage a PostgreSQL cluster (2012-2013) its HA story was pretty bad. It was limited to a master-slave(s) setup with manual failover and manual cluster rebuilding all dependent on incomplete 3rd party tools. Has PostgreSQL improved on this? A quick googling leads me to believe it hasn't and I'd only even consider it again if it were managed by a 3rd party (eg. aws rds).
4
jwr 3 days ago 4 replies      
Do I understand correctly that the author went from a distributed database to a single-master scenario? That's a valid tradeoff, but I'd clearly describe it as such.

My experiences with RethinkDB have been rather positive, but my load is nowhere near that of what the article describes. I agree that ReQL could be improved, I found that there are too many limitations in chaining once you start using it for more complex things.

But the two most important advantages remain for me:

* changefeeds (they work for me),

* a distributed database that I can run on multiple nodes.

I do agree that PostgreSQL is fantastic and that SQL is a fine tool. In my case the above points were the only reasons why I did not use PostgreSQL.

EDIT: after thinking about this for a while, I wonder if the RethinkDB changefeed scenario is doable with the tools in PostgreSQL: get initial user data, then get all subsequent changes to that data, with no race conditions. Many workloads seem to concentrate on twitter-like functionality, where there is no clear concept of a change stream beginning and races do not matter.
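
(One common answer to the race-condition worry: LISTEN before taking the snapshot, then apply notifications idempotently - you can see a change twice, but you can never miss one. A minimal psycopg2 sketch; the user_changes channel and user_data table are invented, and this is not necessarily how SMC does it:)

  import json, select
  import psycopg2

  conn = psycopg2.connect("dbname=app")   # hypothetical DSN
  conn.autocommit = True                  # notifications arrive on commit
  cur = conn.cursor()

  # 1. Subscribe first, so any change committed after the snapshot below
  #    is guaranteed to arrive as a notification (no gap, only overlap).
  cur.execute("LISTEN user_changes;")

  # 2. Read the initial state.
  cur.execute("SELECT id, data FROM user_data WHERE user_id = %s", (42,))
  snapshot = cur.fetchall()

  # 3. Apply subsequent changes idempotently; a row can legitimately show
  #    up in both the snapshot and an early notification.
  while True:
      if select.select([conn], [], [], 60) == ([], [], []):
          continue                        # timeout; poll again
      conn.poll()
      while conn.notifies:
          note = conn.notifies.pop(0)
          print("change:", json.loads(note.payload))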

5
living-fossil 3 days ago 13 replies      
>Everything is an order of magnitude more efficient using PostgreSQL than it was with RethinkDB.

A large part of the sales pitch of "NoSQL" was that traditional RDBMSs couldn't handle "webscale" loads, whatever that meant.

Yet somehow, we continue to see PostgreSQL beating Mongo, Rethink, and other trendy "NoSQL" upstarts at performance, one of the primary advantages they're supposed to have over it.

Let's be frank. The only reason "NoSQL" exists at all is 20-something hipster programmers being too lazy to learn SQL (let alone relational theory), and ageism--not just against older programmers, but against older technology itself, no matter how fast, powerful, stable, and well-tested it may be.

After all, PostgreSQL is "old," having its roots in the Berkeley Ingres project three decades ago. Clearly, something hacked together by a cadre of OSX-using, JSON-slinging hipster programmers MUST be better, right? Nevermind that "NoSQL" itself is technically even older, with "NoSQL" systems like IBM's IMS dating back to the 1960s: https://en.wikipedia.org/wiki/IBM_Information_Management_Sys...

6
cpr 3 days ago 0 replies      
Thanks for taking the time to write this up, William.

This is a great read, even if only as a helpful "this is how I did something hard, and how it turned out" kind of hacker story.

And I could see this being quite relevant to some ideas I have for a multi-user semi-real-time cooperative-editing web app.

7
bryogenic 3 days ago 2 replies      
How far has postgres come w.r.t. setting up a cluster with automatic fail-over and recovery? I didn't see the author address this aspect of Rethink that has a lot going for it.
8
iamleppert 3 days ago 8 replies      
It doesn't make sense to me to architect these kinds of applications at the database level.

What's wrong with using something like Redis pub/sub? I don't get the obsession with evented databases, or with implementing this kind of thing at the database level. I suppose it's attractive to "listen" to a table for changes, but the pattern can be implemented elsewhere and with better tools.

Databases should be used for persistence, organization and schema of data, have flexible querying, and not much else.

9
StreamBright 3 days ago 0 replies      
Should be titled: Finding out how awesome Postgres is (the hard way)
10
api 3 days ago 1 reply      
What I didn't see mentioned here is clustering. One of the things that sold us on RethinkDB was how easy it was to cluster compared to PostgreSQL. The latter has poor documentation and it's very hard to know you've done things right... and if you don't the results can be catastrophic failures or mysterious replication problems with cryptic error messages.

Edit: also I was led to believe by PG documentation that LISTEN/NOTIFY is impossible across a cluster, which means that code depending on LISTEN/NOTIFY is impossible to cluster. If that's the case you're stuck with master/slave and manual or (scary) automatic failover now.

We wanted a system that is masterless (or all-master) in the sense that any node can fail at any time and the system doesn't care. RethinkDB delivers that, at least within the bounds of sane failure scenarios, and it delivers it without requiring a full time DBA to set up and maintain. That's worth a certain amount of CPU, disk, and RAM in exchange for stability and personnel costs, especially when a bare metal 32GB RAM SSD Xeon on OVH is <$200/month fully loaded with monitoring and SLA. So far we've been unable to throw a real world work load at those things that makes them do anything but yawn, and OVH has three data centers in France with private fiber between them allowing for a multi-DC Raft failover cluster. It's pretty sweet.

The only thing that would make me reconsider is if the use patterns of our data were really aggressively relational. In that case PGSQL would be a clear winner in terms of the performance of advanced relational operations and the expressivity of SQL for those operations. ReQL gives you some relational features on top of a document DB but it has limitations and is really designed for simpler relational use cases like basic joins.

11
acidity 3 days ago 2 replies      
>>> Weird. OK, I tried it with some other parameters, and it suddenly took 15 seconds at 100% CPU, with PostgreSQL doing some linear scan through data. Using EXPLAIN I found that with full production data the query planner was doing something idiotic in some cases. I learned how to impact the query planner, and then this query went back to taking only a few milliseconds for any input. With this one change to influence the query planner (to actually always use an index I had properly made), things became dramatically faster. Basically the load on the database server went from 100% to well under 5%.

I am actually interested in this part. Figuring out issues with EXPLAIN is one of my favorite things.
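
(The diagnostic loop the quote describes is essentially just EXPLAIN ANALYZE before and after the change; a sketch with invented table and column names:)

  import psycopg2

  conn = psycopg2.connect("dbname=app")   # hypothetical DSN
  cur = conn.cursor()

  # EXPLAIN ANALYZE executes the query and prints the plan the planner
  # actually chose; a "Seq Scan" on a big table where you expected an
  # "Index Scan" is the usual culprit behind 15-second queries.
  cur.execute(
      "EXPLAIN ANALYZE SELECT * FROM projects WHERE account_id = %s",
      ("deadbeef",))
  for (line,) in cur.fetchall():
      print(line)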

12
skc 3 days ago 3 replies      
So are there any actual success stories that the NoSQL movement can point at? Because it's bizarre to me how NoSQL can still be all the rage when, time after time, all I read is post-mortems detailing painful experience after painful experience.

I'm at a stage where I haven't built enough of my current project to make moving back to an RDBMS painful yet, so all this stuff scares me.

13
bartread 2 days ago 0 replies      
Good grief. Well this is a genuinely fascinating post containing a couple of absolutely terrifying insights:

"A RethinkDB employee told me he thought I was their biggest user in terms of how hard I was pushing RethinkDB."

Erm. If I were working for/a founder of a relatively small company using a product or service, especially one that's so critical to my own business, that is not the sort of thing I'd want to hear from the provider.

"Everything was a battle; even trying to do backups was really painful, and eventually we gave up on making proper full consistent backups (instead, backing up only the really important tables via complete JSON dumps)."

Holy crap.

Well, the story has a happy ending, and I think the point about the fundamental expressiveness of SQL is something that a lot of people miss in the mad dash to adopt "simpler" NoSQL solutions. I personally find SQL verbose and a bit ugly, but I still sort of love it because it's hugely powerful and expressive. I was perhaps 6 or 7 years into my career before I became comfortable with it, but I wish I'd thrown myself into learning it properly sooner because it is so incredibly useful.

14
brilliantcode 3 days ago 3 replies      
This gives me hope that maybe we could see something similar with Datomic. Perhaps it is possible to implement the same append-only (this is the best way I understand it), immutable audit trail on top of PostgreSQL and still walk away with SQL (datalog is neat-o but has a learning curve).

but I dream the dream...

15
h1d 3 days ago 0 replies      
ReQL never looked intuitive to me in this comparison, so I stayed away, and I guess that wasn't a wrong assumption.

"Definitely, the act of writing queries in SQL was much faster for me than writing ReQL, despite me having used ReQL seriusly for over a year. Theres something really natural and powerful about SQL."

https://www.rethinkdb.com/docs/sql-to-reql/python/

16
crad 3 days ago 3 replies      
(Also posted to the comments section of the blog)

Great writeup! One of the issues I've run into with LISTEN/NOTIFY is the fact that it's not transaction safe, i.e. if you call NOTIFY and then encounter an error causing a rollback, you can't undo the NOTIFY.

I ended up building a system on top of PgQ (https://wiki.postgresql.org/wiki/SkyTools#PgQ) called Mikkoo (https://github.com/gmr/mikkoo#mikkoo) that uses RabbitMQ to talk with the distributed apps that needed to know about the transaction log. Might be helpful if you end up running into transactional issues with your use of LISTEN/NOTIFY.

17
thewhitetulip 3 days ago 0 replies      
> I care about solutions, not glorifying a particular piece of code for its own sake

I wish everyone were like this!
18
rattray 3 days ago 1 reply      
> I wrote code that automates creation of all triggers to do listen/notify

Care to elaborate and/or open-source? Sounds potentially enticing.
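
(Not the author's actual code, but a minimal sketch of what automated trigger creation for LISTEN/NOTIFY can look like; every table and channel name is invented. Note that pg_notify payloads are capped at roughly 8000 bytes, so large rows need indirection:)

  import psycopg2

  DDL = """
  CREATE OR REPLACE FUNCTION notify_{t}() RETURNS trigger AS $$
  BEGIN
    PERFORM pg_notify(
      '{t}_changes',
      json_build_object('op', TG_OP, 'row', row_to_json(NEW))::text);
    RETURN NEW;
  END;
  $$ LANGUAGE plpgsql;

  DROP TRIGGER IF EXISTS {t}_notify ON {t};
  CREATE TRIGGER {t}_notify
    AFTER INSERT OR UPDATE ON {t}
    FOR EACH ROW EXECUTE PROCEDURE notify_{t}();
  """

  conn = psycopg2.connect("dbname=app")        # hypothetical DSN
  with conn, conn.cursor() as cur:
      for t in ("accounts", "projects"):       # hypothetical tables
          cur.execute(DDL.format(t=t))         # DELETE would need OLD, not NEW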

19
grizzles 3 days ago 5 replies      
A really nice thing about code written in the 90s and earlier is that it was designed from the get-go to be performant, because it had to be. No one in conventional software writes code like that anymore, not really. E.g., when was the last time you used a profiler?

I recently updated from Fedora 24 to 25. I noticed a big performance drop until I shoved more ram into my desktop, and now it's fine again. I can't be certain but I'd wager that this might be because F25 is the first Fedora to use Wayland (over X) by default. X might be old and fugly but it was certainly written in an era where it had to achieve a certain baseline level of performance.

20
postila 1 day ago 0 replies      
I was curious when LISTEN/NOTIFY were implemented in Postgres.

Seems like 6.4 version already had it: https://www.postgresql.org/docs/6.4/static/sql-notify.html

6.4 was released in 1998...

https://github.com/postgres/postgres/blob/REL6_4/doc/src/sgm...

21
ssfak 2 days ago 0 replies      
I am a bit puzzled about the scalability of LISTEN/NOTIFY in Postgres and its use in the article. Each "listener" in the code requires a connection to the database, so it's not a good design to have one LISTEN "query" for each user. You will probably need a dedicated connection in a thread (or a limited number of connections) for the database listening functionality. You can possibly use some async PG driver, but I am still not sure how efficient and scalable this solution will be on the database end.

I can assume that this is a good solution if you don't have (need) a high rate of "notify" statements and a high number of subscribers waiting on "listen". Any comments on these limits of PostgreSQL?
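
(The usual mitigation: one dedicated connection LISTENs on behalf of the whole process and fans notifications out in memory, so N users does not mean N database connections. A sketch, assuming nothing about the article's code:)

  import queue, select, threading
  import psycopg2

  subscribers = {}          # channel -> [queue.Queue, ...]
  lock = threading.Lock()

  def listener(dsn, channels):
      conn = psycopg2.connect(dsn)
      conn.autocommit = True
      cur = conn.cursor()
      for ch in channels:   # trusted constants, not user input
          cur.execute("LISTEN " + ch + ";")
      while True:
          if select.select([conn], [], [], 30) == ([], [], []):
              continue      # timeout; poll again
          conn.poll()
          while conn.notifies:
              note = conn.notifies.pop(0)
              with lock:
                  for q in subscribers.get(note.channel, []):
                      q.put(note.payload)      # in-process fan-out

  threading.Thread(target=listener,
                   args=("dbname=app", ["user_changes"]),  # hypothetical
                   daemon=True).start()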

22
perlgeek 3 days ago 0 replies      
I don't understand the approach, really. They manage a 25+ node cluster of RethinkDB, but are reluctant to introduce a message broker?

In my experience, a message broker isn't such a big operational burden, even more so if it doesn't have to persist messages. And a message broker with pub/sub doesn't make the architecture necessarily more complicated.

Somehow that seems like optimizing along the wrong axis.

23
jbhatab 3 days ago 2 replies      
I would LOVE a layer built on top of postgres for reactive db events. +1

If it also integrated with GraphQL like you were mentioning at one point, even sexier.

24
Kiro 3 days ago 3 replies      
My use case: I run a couple of stateful multiplayer games where I have a Node.js server writing to MongoDB on each player update (e.g. "update player.x to 123"). I only read from the DB when a player logs in and is added to the game or when the server restarts (items etc). As long as a player is online the state is just kept in in memory (a big array of all the players) but is written to the DB every time it's updated.

This means the game could theoretically work without a DB at all, the time it takes to write to the DB etc doesn't matter as long as it happens in the correct order. Read speed is also not really relevant since it happens so seldom.

The MongoDB document is the same as the player object in Node.js.

I've been thinking of migrating to RethinkDB but I've also been looking at PostgreSQL. Would the JSON support cover this sufficiently and would it make sense? I don't need any schemas or anything like that. I just want to be able to add and update JSON objects.
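
(Postgres' jsonb covers exactly this add-and-patch pattern; a sketch with an invented table and player shape, needing PostgreSQL 9.5+ for ON CONFLICT and jsonb_set:)

  import psycopg2
  from psycopg2.extras import Json

  conn = psycopg2.connect("dbname=game")   # hypothetical DSN
  with conn, conn.cursor() as cur:
      cur.execute("""
          CREATE TABLE IF NOT EXISTS players (
              id  text PRIMARY KEY,
              doc jsonb NOT NULL)""")

      player = {"id": "p1", "x": 0, "y": 0, "items": []}

      # Full-document upsert on login / save:
      cur.execute("""
          INSERT INTO players (id, doc) VALUES (%s, %s)
          ON CONFLICT (id) DO UPDATE SET doc = EXCLUDED.doc""",
          (player["id"], Json(player)))

      # In-place patch for small updates, e.g. "update player.x to 123":
      cur.execute("""
          UPDATE players SET doc = jsonb_set(doc, '{x}', '123'::jsonb)
          WHERE id = %s""", ("p1",))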

25
mi100hael 3 days ago 1 reply      
> I didn't seriously consider MySQL since it doesn't have LISTEN/NOTIFY, and is also GPL licensed, whereas PostgreSQL has a very liberal license.

GPL/AGPL is perfectly permissive and doesn't require any sort of special disclosure if you're just running a vanilla distribution of a server without modifying its code...

26
jorblumesea 3 days ago 2 replies      
I'm confused, the situation clearly called for a relational database. Almost any RDBMS would have been better than any "schemaless" db in this case. Was it just to get the reactive architecture features? I'm confused why you would architect an application from the database up.
27
gamesbrainiac 3 days ago 4 replies      
Quick question, how does one impact PG's query planner other than creating indexes (partial or otherwise)?
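
(Beyond indexes, the standard levers are cost settings and statistics rather than hints; a sketch with invented table and column names:)

  import psycopg2

  conn = psycopg2.connect("dbname=app")   # hypothetical DSN
  cur = conn.cursor()

  # 1. Per-session cost toggles: make seq scans look prohibitively
  #    expensive so an existing index wins (a debugging aid, not a fix).
  cur.execute("SET enable_seqscan = off;")

  # 2. Cost constants tuned to the hardware (lower for SSDs).
  cur.execute("SET random_page_cost = 1.1;")

  # 3. Better statistics for a skewed column, then refresh them.
  cur.execute(
      "ALTER TABLE projects ALTER COLUMN account_id SET STATISTICS 1000;")
  cur.execute("ANALYZE projects;")
  conn.commit()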
28
ruslan_talpa 3 days ago 2 replies      
Good read, and yes, Postgres is king, no argument there, but I do think the comparison is a bit unfair to RethinkDB. Work on Postgres started in 1986; you can't compare performance of tools when one of them had 30 years to work on performance and the other is like 5 years old and has had only a handful of brains working on it. I would say it's remarkable what RethinkDB did in this timeframe.

In relation to Postgres and real time messages, another approach is to use a real messaging server instead of using only the simple listen/notify interface pg provides. It's possible to connect them using this https://github.com/gmr/pgsql-listen-exchange

I am just wrapping up the integration here (http://graphqlapi.com) and so far it looks good. Postgres provides the power and features we all know and rabbitmq gives you all the realtime capabilities you need, and you can route messages in complex ways and have them delivered to a whole bunch of clients.

29
rattray 3 days ago 1 reply      
The author mentioned they're running in Google Cloud Engine. I'm curious; why not use RDS, which has Postgres support? (Especially considering that they are moving from a 3x-redundant setup to one with no redundancy)
30
devmunchies 3 days ago 1 reply      
No mention of RethinkDB joining the Linux foundation?
31
9gunpi 3 days ago 0 replies      
The year people discover RDBMS are quick and reliable. Again.
32
fapjacks 3 days ago 3 replies      
I think any rewrite is going to be orders of magnitude more efficient anyway. You've got the benefit of hindsight.
33
wildchild 3 days ago 0 replies      
OMG what next? Redis vs PostgreSQL?
34
bandrami 3 days ago 0 replies      
Isn't that kind of like saying "jackhammers vs. crescent wrenches: my personal experience"?
8
Hans Rosling has died gapminder.org
843 points by anc84  6 days ago   135 comments top 47
1
CapTVK 5 days ago 1 reply      
Most readers only know him as a statistician, from Gapminder (which he founded) and the TED talks, but he also had a medical background and was prepared to go straight to work during the Ebola outbreak in Monrovia. He called and jumped right in.

http://www.sciencemag.org/news/2014/12/star-statistician-han...

"After he arrived in Monrovia, Rosling started by doing simple things, such as proofreading the ministry's epidemiological reports, which he says nobody had time for. He changed an important detail in the updates: Rather than listing "0 cases" for counties that had not reported any numberswhich could be misleadinghe left them blank. Next, he tackled the problem behind the missing data. Some health care workers couldn't afford to call in their reports, because they were paying the phone charges themselves; Rosling set up a small fund to pay for scratch cards that gave them airtime."

Rosling says he's tired of the portrayal of Africa as a continent of incompetence, superstition, and rampant corruption. "I am astonished how good people are that I work with here, how dedicated, how serious," he says. When The New York Times reported that governmental infighting was hampering the Ebola response, Rosling tweeted: "Don McNeil misrepresents Liberia's EBOLA-response to win the MOST INCORRECT ARTICLE ABOUT EBOLA AWARD." His self-assurance and impatience with opinions he disagrees with can grate on others. "I find him quite irritating," says one Western colleague. "Mostly because he turns out to be right about most things."

That last line is the ultimate compliment.

He will be missed.

2
xenadu02 5 days ago 4 replies      
My favorite video of his: A huge chunk of the women in the world spend a depressing amount of their time washing clothes. The washing machine has done more for women than anything else:http://www.ted.com/talks/hans_rosling_and_the_magic_washing_...
3
widforss 5 days ago 2 replies      
Professor Rosling is just the type of man we would need in today's political landscape: a character with a strong belief in verifiable facts, and in using those facts to change the world for the better.
4
radicalbyte 5 days ago 1 reply      
Sad day. RIP.

For some context: Hans is famous here for his fantastic series of TED talks which cover population growth, poverty and development.

Totally changed (well, confirmed) my world view.

Start here:

https://www.youtube.com/watch?v=fTznEIZRkLg

5
Entalpi 5 days ago 2 replies      
Sad day for all of us who value a fact-based worldview in these dark days of rising nationalism and euroscepticism.

Vila i frid, professor Rosling.

6
Karlozkiller 5 days ago 1 reply      
I feel that the last round of attention Rosling got in Sweden gave the impression of a man determined to see ONLY good. But I do think this feeling was amplified by everyone else uncritically parroting everything he said, taking it as the utter truth and proof that anyone not thinking the exact same way was a crazy idiot. I guess it also connects to my aversion to simplification and my fear of how easily some people seem to take anything at face value.

That being said, I do not think my thoughts above lessen his work. I have deep respect for his vision and what he strived to achieve.

7
melling 5 days ago 2 replies      
Very sad. Another victim of pancreatic cancer. A couple of months ago, astronaut Piers Sellers died from it.

Ever since I heard Randy Pausch's "The Last Lecture", I take notice when people die from pancreatic cancer, which a decade later, is still basically a death sentence.

http://www.cmu.edu/randyslecture/

8
afoot 5 days ago 3 replies      
A sad day indeed. One of his TED talks changed my career forever:

https://www.ted.com/talks/hans_rosling_shows_the_best_stats_...

9
botswana99 5 days ago 0 replies      
Very sad. We need more like him to help us understand the true state of the world we live in today. And if you look at the data, like he did, the world is trending upward very well:

A quick article: https://singularityhub.com/2016/06/27/why-the-world-is-bette...

Hans Rosling's Gapminder website: https://www.gapminder.org/videos/dont-panic-end-poverty/

The Website 'Our World In Data': https://ourworldindata.org/

Some books that go into the world facts in detail: https://twitter.com/sapinker/status/814855168793554944.

10
lentil_soup 5 days ago 0 replies      
So sad, this guy was amazing and very enlightening in an era of misinformation.

Check out his presentations: https://www.ted.com/talks/hans_rosling_on_global_population_... (to showcase just one)

11
dorfsmay 5 days ago 0 replies      
For me, he both made me discover TED and be disappointed with every other TED video!

Who's going to carry on his amazing work now...

12
johansch 5 days ago 1 reply      
Wow. This was so unexpected. This hit me surprisingly hard. I guess I expected him to teach us about important misunderstood things for like 20 more years or so.
13
milesf 5 days ago 0 replies      
Aw man. What a loss! Hans is the guy that gave me eyes to see statistics as something beautiful and exciting. I still remember the first time I saw his TED Talk: http://www.ted.com/talks/hans_rosling_shows_the_best_stats_y...
14
braymundo 5 days ago 0 replies      
A sad day, indeed. I will miss his creative and entertaining ways of showing how the world is getting better. Especially in these dark times.
15
awicz 5 days ago 0 replies      
Hans Rosling truly changed the way I view information, and the world. A great loss indeed.
16
btilly 5 days ago 1 reply      
My favorite video of his is https://www.youtube.com/watch?v=jbkSRLYSojo.
17
yesbabyyes 5 days ago 0 replies      
Me and a friend participated in the Node Knockout 2011, we had decided to build a rap lyrics analytics engine and we called it Rapminder. The day before the hackathon started I ran into Hans Rosling right outside of our office and got his blessing. Serendipity.

http://imgur.com/a/bATQn

Rest in Power big homie. May the facts be with us.

18
e40 5 days ago 0 replies      
Pancreatic cancer has taken the majority of people I know that have died from cancer. A horrible way to go.
19
porker 5 days ago 0 replies      
Sad news, his visualizations and approach to communication were the first to get me interested in this field.
20
dsjoerg 5 days ago 0 replies      
Thanks you, Hans, for your excellent & inspiring work. I salute you!
21
mildlyclassic 5 days ago 1 reply      
Goodbye Hans. You will be sorely missed.
22
sleepychu 5 days ago 0 replies      
black bar, please?
23
kayoone 5 days ago 0 replies      
Wow, I've had this tab open for weeks with an article about him that I wanted to read: http://www.nature.com/news/three-minutes-with-hans-rosling-w...

Now hearing that he passed in the meantime is very sad indeed. What a great man.

25
dandersh 5 days ago 0 replies      
Awful news. I just started getting into his work and will be sure to watch some of his TED talks this evening in his honor.
26
robert_foss 5 days ago 0 replies      
:F

A more inspiring and constructive individual I have never encountered. This is a loss with a larger impact than most.

27
wallzz 5 days ago 0 replies      
I remember meeting him in Algeria, where he gave a talk on various economic data for every country and the expected changes in the future, with some focus on Africa. It was such an inspiring speech; he has a way of making the data come alive.
28
tomjen3 5 days ago 0 replies      
This is on of his TED talks: https://www.youtube.com/watch?v=hVimVzgtD6w

Also fuck cancer.

29
ak39 5 days ago 0 replies      
What a loss to humanity. This man's lectures and explanations of population growth epitomized hope for me. Empathy in motion!

Heartfelt love and condolences to his family.

30
synicalx 5 days ago 0 replies      
Now what're the chances of that...

In all seriousness though, sorry to hear he passed. He's done a lot of good work and was still quite 'young'.

31
sixQuarks 5 days ago 0 replies      
Why do so many good people die from terrible ailments, while the evil ones like Dick Cheney keep having dozens of heart attacks and keep ticking on?
32
bostand 5 days ago 0 replies      
This is very sad.

He could explain very complex issues in a way everyone could understand. Something that is needed more than ever now in the age of fake news and alternative facts...

RIP

33
headconnect 5 days ago 0 replies      
Truly a great loss, but his style and enthusiasm will endure! I'll never forget the first time I watched him speed up the world...
34
mckoss 5 days ago 0 replies      
One of great humans that will be missed by millions. It's a shock that he is no longer a part of our world.
35
ekianjo 5 days ago 0 replies      
Wow, that was a surprise. I remember seeing him on TV not too long ago, no idea he was already ill at that time... A sad day.
36
diegorbaquero 5 days ago 0 replies      
I will always remember his advocacy to teach, share and contribute knowledge. Amazing talks too. Sad and shocking day
37
manuelbieh 5 days ago 0 replies      
Had the honor of seeing him speak live at the TEDSalon in Berlin in 2014. Very inspiring. Great loss. RIP Hans
38
cicloid 5 days ago 0 replies      
What a loss! Seeing his TED talks did make an impact on me. He truly was an inspiring person.
39
abc_lisper 5 days ago 0 replies      
Sad sad day! He seemed to have boundless energy in his talks, did not expect this...
40
markshuttle 5 days ago 0 replies      
A man with a wonderful mix of wit, intellect and humanity, he will be missed.
41
baxtr 5 days ago 0 replies      
What a sad day. I will miss his way of making facts really exciting
42
tigroferoce 5 days ago 0 replies      
Sad day. I will always remember his talks at TED. RIP.
43
ianai 5 days ago 0 replies      
I truly hate cancer.
44
dodysw 5 days ago 0 replies      
This person inspired me. A very sad day.
45
unixhero 5 days ago 0 replies      
I am filled with sadness about this.
46
bobowzki 5 days ago 0 replies      
Very sad news. He inspired me.
47
jpatel3 5 days ago 0 replies      
Sad day :(

His story is inspirational.

9
Python moved to GitHub github.com
712 points by c8g  22 hours ago   247 comments top 24
1
payne92 17 hours ago 13 replies      
Part of GitHub's secret sauce: web source-tree browsing that's front and center, relatively decent, and with OK search (versus making the log/history the central part of the web UI, as other tools seem to do).

There are SO many times I need a short peek at something, and am glad I don't have to clone/download, etc.

2
laurentdc 19 hours ago 5 replies      
Yes!

I quite like the idea of "centralizing" development on GitHub, or similar services. It makes it much easier for everyone to fork, test, make a pull request, merge, etc..

For example, one reason why I gave up contributing to OpenWrt was its thoroughly legacy contribution system [1], which required devs to submit code diff patches via email (good luck not messing up the formatting with a modern client) on a mailing list. It took me an hour to submit a patch for three lines of code. It seems like Python wasn't much different. [2]

[1] https://dev.openwrt.org/wiki/SubmittingPatches#a1.Creatingap...

[2] https://docs.python.org/devguide/patch.html

3
di 18 hours ago 1 reply      
4
misnome 16 hours ago 8 replies      
Why on earth have they done:

> Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017

Rather than just

> Copyright (c) 2001-2017

5
agentgt 15 hours ago 4 replies      
This is a little disappointing for several reasons. I understand the merits of GitHub, but I really wish Python had at least stuck with a Mercurial repository and some decentralization.

It's especially sad because Mercurial is just now starting to become incredibly powerful with changeset evolution.

I guess I'm an old fart but all the centralization has made me paranoid and I still absolutely prefer Mercurial (albeit with plugins) over git.

6
Fice 1 hour ago 2 replies      
This is scary. For increasingly many potential contributors, a project effectively does not exist if it is not on GitHub. And, being a huge centralized service, GitHub is very susceptible to censorship (e.g. repos being taken down via DMCA, or Russia blocking GitHub until it started to cooperate with the censors). I see this dependence as very bad and dangerous for the free software movement. Should we even consider the convenience of a service that has serious ethical issues?
7
tbrock 12 hours ago 0 replies      
Wow, if only we could get Gnome and Linux on there and give up this mailing-list-for-patches nonsense, we'd be golden.
8
anamoulous 13 hours ago 1 reply      
Wow the black bar settled it for them, huh?
9
EvgeniyZh 17 hours ago 1 reply      
I'd really like to see more big, important projects move their development to GitHub-style services. Maybe I'm just not hardcore enough, but I feel they make life easier for maintainers, contributors, and newcomers alike.

But it's probably too hard to switch, and core developers don't see the point in it (since they're totally OK with working the way they do). Maybe when a new generation of developers takes over the core positions...

11
hueving 12 hours ago 1 reply      
Sad day. Don't forget that GitHub is a closed-source tool. This is equivalent to them announcing they are switching to Jira.
12
imode 10 hours ago 1 reply      
I'd question why it wasn't GitLab, but after the recent outage that would be somewhat in bad taste. :P

what exactly was Python using before, and where was it hosted? All I can find are the source archives on python.org. I'm assuming this wasn't a hard transition, but I'm genuinely curious about their development strategy regarding distribution of source.

13
jaimebuelta 8 hours ago 0 replies      
Background info on why this migration happened and which alternatives were considered:

https://snarky.ca/the-history-behind-the-decision-to-move-py...

14
dbalan 16 hours ago 3 replies      
So Python eventually moved to git from Mercurial.
15
meneses 5 hours ago 0 replies      
FYI, Python was on GitHub before, but it was a read-only mirror. You still had to go to Mercurial to push updates.
16
lucidguppy 4 hours ago 0 replies      
I wish PRs on GitHub could be checked off per commit like in Bitbucket.
17
gigatexal 15 hours ago 1 reply      
Python 3.7 is already in the works. The effort to push Python 3.x is picking up steam, it seems.
18
faraggi 7 hours ago 0 replies      
Anybody know what happened to Guido's contributions between 2008-2012?

Maybe he had kids. :P

19
rectangletangle 14 hours ago 1 reply      
Where is the "issues" section? I wanna read about some gnarly low-level CPython bugs!

Other than that, this is a welcome change.

20
napolux 9 hours ago 0 replies      
Still waiting for WordPress to move. ;)
21
hayd 15 hours ago 0 replies      
and nearly 100k commits... time to start making some PRs!
22
dustinmoris 4 hours ago 1 reply      
23
echelon 11 hours ago 1 reply      
I can't help but think Gitlab would have been in contention for this move had they not had the recent outage. Can anyone from the Python org comment on what other choices were considered?
24
hexa- 15 hours ago 0 replies      
I was recently hit by an IPv4 routing outage and had only IPv6 available to connect to the internet.

I was therefore unable to connect to github.com, as there is no IPv6 support available:

  % host github.com
  github.com has address 192.30.253.112
  github.com has address 192.30.253.113

10
Man jailed 16 months, and counting, for refusing to decrypt hard drives arstechnica.com
616 points by doener  21 hours ago   391 comments top 41
1
realo 15 hours ago 9 replies      
I have a question...

Suppose the suspect Alice only has a portion of the key. Someone else (Bob...) has the remaining key bits.

Alice is busted, and 'compelled to give the key', and DOES provide her portion of the key.

Bob is never found.

Then Alice would be indefinitely imprisoned, even though she had actually complied with the court order.

It seems unethical, to me.

Bonus question: Alice pretends that Bob exists, but actually he does not, but police cannot prove that. What then?

A possible answer to the first question: Alice is not compelled to provide the key. She is compelled to decrypt the drive. Obviously she can't do that without Bob. Alice is screwed and will spend the rest of her life in prison.

Seems harsh.

2
externalreality 20 hours ago 5 replies      
Not sure what the man's crime is here. Does he even remember his keys after sixteen months in the slammer? I don't even remember my Gmail password after 16 days of vacation. Basically, like the article says, it's like not opening a safe for an inquisitor: you are damned if you do, you are damned if you don't. Encryption is nothing new, people; you are just putting your data in a safe.

We have a tendency to misconstrue, willfully misinterpret, or altogether ignore the law when it comes to prosecuting individuals who we believe to be standing on much lower moral ground. We do so because we want so badly to punish the accused that we are willing to reduce or eliminate the greater good that some privacy laws are aiming to provide (e.g. Trump's silly travel ban, which is based on his hatred of Muslims, built upon imaginary news stories and personal exaggerations of particular recent events -- all laws out the window).

3
AckSyn 20 hours ago 7 replies      
He shouldn't have to decrypt his hard drives, and I support his decisions.

The problems with this are numerous.

First of all, no one has any duty to provide the police with evidence; that's a 5th amendment protection. It's not a "right" the police have at all.

Imprisoning someone for refusing to waive their constitutional rights is absurd.

They have no evidence to hold him period.

4
godelski 19 hours ago 4 replies      
From what I understand our legal system was designed to fail "open". Or rather that we are willing to let a guilty person go free rather than an innocent person go to jail.

I know everyone wants to have a perfect justice system but we have to ALSO decide which direction we would like it to fail until that time comes (never). In essence cases like this are more about this question. When the system fails, which direction do we want it to fail in?

5
spaceboy 20 hours ago 2 replies      
I thought the U.S. had better key disclosure laws [1] than other countries? Personally I would rather not incriminate myself by revealing a key, no matter how draconian and lengthy the sentencing was. Why, you ask? Well, I consider all my personal data an extension of my own mind, and revealing a key is like slicing off a thin part of my brain and picking through its contents. Never a gentlemanly thing to do in any circumstance.

In terms of being stopped and searched when traveling, I just carry a TailsOS bootable live USB. My laptop doesn't have a hard drive and boots entirely from my TailsOS USB stick. I did not enable any persistent storage, and any bookmarks I need to remember, I simply remember by rote, like in that movie The Book of Eli [2]. My threat model is such that I don't want anybody knowing my business when traveling. The intrusiveness should only go so far as one question, like "Business or Pleasure?", and that's all.

[1] https://en.wikipedia.org/wiki/Key_disclosure_law#United_Stat...

[2] https://en.wikipedia.org/wiki/The_Book_of_Eli

6
INTPenis 8 hours ago 0 replies      
Speaking from the experience of a one-month vacation, I'm not sure I'd be able to decrypt after 16 months of not touching a keyboard.

My most important passwords (passphrases for gpg used by password managers and luks) are in my head and muscle memory.

When I update passwords I tend to have them written down until I've typed them enough times.

So after a month's vacation I often struggle to remember my work password, for example. While using phrases makes all this easier these days, 16 months is a long time to spend, presumably, without your keyboard.

7
hysan 8 hours ago 0 replies      
What happens if the drives develop bit rot over those 16 months, preventing them from ever being decrypted? Based on the wording of what he is in contempt of, it sounds like he would sit in jail until death. To me, it sounds like the prosecution is trying to play a word game to get around 5th amendment protections.
8
payne92 17 hours ago 1 reply      
If he were being forced to divulge the physical coordinates of a hidden thumb drive, it's highly likely he'd get Fifth Amendment protection.

But being forced to divulge the virtual coordinates of his hidden data is somehow different...

9
ust 20 hours ago 1 reply      
Professor Orin Kerr has written about this exact case extensively, and provides good insight into all the legal aspects. I think it is well worth a read, especially the part about the 'foregone conclusion'.

https://www.washingtonpost.com/news/volokh-conspiracy/wp/201...

10
keithnz 14 hours ago 0 replies      
Seems to me the 5th should protect him. Question is, is that a good thing?

Should law enforcement have a right to search through court orders? In a world of unbreakable locks it seems very hard to get justice unless the law can do proper searches. If we end up in a world of unbreakable encryption everywhere, it seems to me, criminal activity will benefit hugely. If we can't control crime, we can't have a just society. We can't protect an individual's rights if they are undermined by criminals. Of course, it's also hard to protect an individual's rights if the state has too much power. But somewhere we need pragmatic compromises.

11
downandout 10 hours ago 1 reply      
I'm curious why he doesn't just claim that he forgot the key. There's no way to prove conclusively that he's lying, and a judge cannot jail someone for disobeying a court order that they have no way of obeying. If he's told them he knows it and just won't give it to them, then this bridge is probably burned, but it's probably his only shot. Absent a ruling from the Supreme Court on this issue, they can and will hold him until he complies, dies, or has already served the maximum possible sentence for the crimes he is suspected of committing.
12
Zelmor 7 hours ago 0 replies      
Here we have an example of a forceful, totalitarian state. Where they cannot keep up their laws due to technological progress, they threaten you with violence and boredom (boredom as in sitting in a cage as long as you refuse to comply). In communist era Eastern Europe, secret police used to send in beautiful prostitutes and/or agents to deal with people held in captivity. Get a man to erect his penis. When that happened, strong men stormed the room, grabbed the victim, and inserted a glass rod into his penis. Then smashed it. I would not like to go into details on what they did to women. You can look that up.

Land of the Free - as long as you do not encrypt your shit, that is

13
danbruc 20 hours ago 3 replies      
This is probably a lot less black and white than it might seem at first. If there is sufficient evidence, one can obtain a search warrant, and this forces you to possibly act against your own best interest by allowing the police to search your home. On the other hand, you can usually not be forced to testify against yourself.

So this becomes the question of where decrypting a hard drive lies on this spectrum. Is it more like testifying against yourself, or is it more like allowing the police to search your home? Assuming one agrees with the way testifying against yourself and searching your home are currently handled by the law.

14
krick 11 hours ago 0 replies      
Just yesterday [0] someone (multiple people, actually) was claiming that fingerprint locking on phones is unsafe, based on the fact that the 5th amendment doesn't protect your fingerprints but does protect your right to not reveal your password.

[0] - https://news.ycombinator.com/item?id=13622684

15
ohstopitu 2 hours ago 1 reply      
We seriously need self-destructing, time-based keys.

If a device hasn't been unlocked with its password within a user-defined window, all of it is wiped.

It goes without saying that this should be off by default, but it would definitely be a good feature.
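A toy sketch of what such a switch could look like (the file names and the wipe window below are made up for illustration, and a real implementation would have to live in firmware or a secure enclave, not a userland script):

  import json
  import os
  import time

  STATE_FILE = "lock_state.json"   # hypothetical: records the last successful unlock
  MAX_IDLE = 30 * 24 * 3600        # user-defined window, e.g. 30 days in seconds

  def record_unlock():
      """Call on every successful password entry."""
      with open(STATE_FILE, "w") as f:
          json.dump({"last_unlock": time.time()}, f)

  def dead_mans_switch(wrapped_key_path):
      """Destroy the wrapped disk key if the device sat locked too long."""
      with open(STATE_FILE) as f:
          last_unlock = json.load(f)["last_unlock"]
      if time.time() - last_unlock > MAX_IDLE:
          # Overwrite then delete the wrapped key; without it the data is
          # unrecoverable. (On SSDs a real wipe needs hardware support.)
          size = os.path.getsize(wrapped_key_path)
          with open(wrapped_key_path, "r+b") as f:
              f.write(os.urandom(size))
          os.remove(wrapped_key_path)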

16
GTP 7 hours ago 0 replies      
I don't reside in the USA and I don't know US law. After reading a certain number of comments, it seems that the heart of the discussion is the Fifth Amendment. I agree that if the Fifth Amendment protects you from giving up the combination of a safe, it logically follows that the same applies to encryption passwords. But I don't agree with the root of this argument: if a court has a valid reason to think that a safe contains evidence that is relevant to the case, why should it make any difference whether the safe is locked with a physical key or with a combination? Could somebody who has a different opinion please explain their point to me?
17
planetjones 20 hours ago 8 replies      
As expected on HN, I am not surprised to see people defending one's right to privacy and encryption. However, what's the solution then? If all the "bad guys" who distribute illegal material do so on encrypted volumes and refuse to give up the decryption key, then what do we do? It's a different world now; the police can't just take a drill out and open the safe.
18
sologoub 18 hours ago 1 reply      
Modern encryption isn't that different in basic concept from a cipher, in that it takes data in readable form and makes it unreadable.

In Apple/Gov dispute on the San Bernardino iPhone case, Gov brought up the Burr case from 1807, arguing that a 3rd party could be compelled to decipher the contents, provided there was no self-incrimination (Apple argued Burr did not apply): http://www.macworld.com/article/3046095/legal/burrs-cipher-s...

Does anyone know if there has been an attempt to equate decryption to deciphering in US courts?

19
orbitingpluto 13 hours ago 0 replies      
I have problems remembering a password if I don't use it for a month or two. After 16 months, I'd probably have no idea.
20
dwaltrip 15 hours ago 1 reply      
I have a related, seemingly analogous situation that I'm curious about.

If a suspect might have physically buried important evidence hundreds of miles away in the middle of nowhere, such that it is effectively impossible to find, can the courts "compel" the suspect to give up any knowledge of this, in a similar fashion to the man in the article?

21
victor9000 11 hours ago 1 reply      
It seems like the state is holding the accused captive for failure to produce unencrypted hard drives. But how can the state prove that the accused has the ability to fulfill this request?
22
upofadown 13 hours ago 0 replies      
>"...Instead, the order requires only that [Rawls] produce his computer and hard drives in an unencrypted state."

This language makes it sound like the government is specifically asking the defendant to take an affirmative action to produce the evidence required to incriminate himself. That would be the same thing as issuing an order with the intent to compel an accused murderer to tell the police where the body is. I really don't understand how a court could issue an order based on such an argument.

23
Elijen 19 hours ago 4 replies      
"I forgot the password"

Problem solved.

24
ioquatix 15 hours ago 0 replies      
Good to know that File Vault actually works.
25
geuis 19 hours ago 0 replies      
Laws protecting the worst of us protect the rest and best of us.
26
Esau 15 hours ago 0 replies      
To me, this is incredibly perverse: no one should be forced to assist the government in their own prosecution.
27
jwatte 17 hours ago 1 reply      
"My hard drive contains only the finest random bytes. There is nothing to decrypt. Please prosecute me for lying if you can prove otherwise."
28
Zikes 15 hours ago 0 replies      
Is the ACLU not at all involved in this case? Shouldn't they be?
29
EGreg 20 hours ago 1 reply      
This is a good reason to have TrueCrypt hidden volumes and other forms of steganography. You can decrypt and still have more stuff that looks like random junk. No one can be sure you decrypted everything.

The best idea, however, is to have sensitive stuff stored encrypted on Freenet, and log in using incognito browser sessions.

30
megous 16 hours ago 0 replies      
Why don't they prosecute him based on what they already have? They have to have something strong enough to justify holding him for more than a year already, so wtf?
31
mtgx 6 hours ago 0 replies      
Remember when Obama passed the law that allows for indefinite detention without charge? And how his supporters said "Yeah, but it's not like he would use it"?

I think it's already used all the time across the country now. A law that "is not supposed to be used" should not exist. If it exists, then it will be used. I'm sure this is some kind of Murphy's law or something.

32
yellowapple 7 hours ago 0 replies      
"The authorities say it's a 'foregone conclusion' that illicit porn is on those drives. But they cannot know for sure unless Rawls hands them the alleged evidence that is encrypted with Apple's standard FileVault software."

Then it ain't a "foregone conclusion". If it was, they wouldn't need him to unlock the drives; they could prosecute him with the evidence they used to arrive at that "foregone conclusion".

33
ommunist 18 hours ago 0 replies      
Humanity is really close to producing tech that quite literally reads minds. Since minds are not easily encrypted, what prosecution awaits from judges so obsessed with illicit topics, like the one discussed in this article? Either Orwellian "thoughtcrime" will become reality, or we change the legal system. For the moment, it is obvious that the American legal system and basic information technology are quite different worlds.
34
dleslie 20 hours ago 3 replies      
Doesn't America have some rule against being compelled to provide evidence of your guilt?
35
cdevs 16 hours ago 0 replies      
We need Jon Hamm from that Black Mirror Christmas episode...
36
cdevs 16 hours ago 0 replies      
We should jail OJ until he tells us he did it?
37
TheBobinator 15 hours ago 2 replies      
There's a burden of proof here the court needs to meet before holding the defendant in contempt.

For starters, it's reasonable to assume the defendant owns the hard drives in question if they're in their possession, regardless of their testimony otherwise.

Given that piece of information:

1: The court has to prove the disks are actually encrypted. It is not enough for the cops to pick up the disks, see some garbled data, and determine it's encrypted. Now, if you're using a file-level encryption protocol that leaves enough unencrypted material on disk that you can identify the filesystem and the file encryption, then you've met the requirement. If you are using full disk encryption, especially something designed to hide the data and filesystem from anything but a forensics package, and even a forensics package sees garbage, then there's effectively no way to tell the disk is actually encrypted, or with what.

AND

2: They have to prove the defendant, at some point, had the encryption key. That requires proving the method of encryption and key generation. With a door lock, you know there's a key. With a safe combo, you know the combo could be 12 digits and broken up between a dozen people. With an encryption system, any combination of things you know, are, or have could be part of the key. Compelling the defendant to reveal all of that is absolutely a violation of their 5th amendment rights.

Example:

Let's assume we're using Windows EFS. Let's further assume analysis of EFS indicates a user named "YOU" owns the account. Furthermore, let's assume there are files with modification dates within the end user's folder indicating they had logged in the day prior to the search warrant being served.

You give the court the "I don't remember" line.

In that case, forgetting the password is destruction of evidence, not contempt. If a key escrow is used and they can prove it, same deal: destruction of evidence.
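On point 1, a quick illustration of why "prove the disks are actually encrypted" is so hard: well-encrypted data is statistically indistinguishable from random bytes. A small sketch, using OS randomness as a stand-in for ciphertext:

  import math
  import os
  from collections import Counter

  def entropy_bits_per_byte(data):
      """Shannon entropy of a byte string; 8.0 means perfectly uniform."""
      counts = Counter(data)
      n = len(data)
      return -sum(c / n * math.log2(c / n) for c in counts.values())

  blob = os.urandom(1 << 20)                      # 1 MiB of randomness
  print(round(entropy_bits_per_byte(blob), 4))    # ~7.9998, same as good ciphertext

A blob of full-disk-encrypted data scores essentially the same, so entropy alone cannot tell "encrypted" apart from "wiped with random data".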

38
guard-of-terra 21 hours ago 2 replies      
39
intrasight 20 hours ago 1 reply      
Why don't they let him out on bail until the court is able to hear the case?
40
BigChiefSmokem 20 hours ago 2 replies      
Let him go then.
41
m-j-fox 10 hours ago 0 replies      
My idea is to just hand over the password, which is 'asdf'. If the password fails it's because whoever is in possession of the computer has already logged on and changed the password -- no longer my responsibility.
11
Microsoft open-sources Graph Engine graphengine.io
608 points by dajoh  4 days ago   173 comments top 23
1
wyldfire 4 days ago 1 reply      
I find that very abstract software packages like this are difficult to visualize without an example. This page does not offer one, but TFM does -- see [1]. Also note that TFM describes the fact that this is Windows-only. That makes it substantially less interesting, IMO.

[1] https://www.graphengine.io/docs/manual/index.html#what-is-ge

2
bluejekyll 4 days ago 9 replies      
At this rate, are we going to see Windows open-sourced?

MS is on a roll. My bias since 1996 is being eroded with each OSS release they put out, and with their multi-platform support.

I started using VSCode regularly as my main Rust IDE, and I feel dirty for liking it. It's seamless across macOS and Linux.

3
crudbug 3 days ago 3 replies      
Interesting - the Linux Foundation, with IBM, Google and others, announced JanusGraph [0] last week. Janus also provides an optional persistent storage option.

What are the good use cases for these?

[0] http://janusgraph.org/

4
sidcool 3 days ago 0 replies      
For a noob like me, what are the applications of this framework? What are the use cases? What current tools do this?
5
rch 3 days ago 0 replies      
This era of open source from MS is great, but my reaction is always "OK, here's the Microsoft version of something I've been using for a couple of years already".

I'd like to see the Spanner+Cyc GIS-capable global distributed real-time graph engine with FPGA accelerated OLAP support MS Research is probably sitting on, because we can almost / pretty-much hack that together with OSS now.

6
brilliantcode 3 days ago 3 replies      
Microsoft of 2017 makes me forget about Microsoft of 1997. It's insane how a CEO shakeup and a new cultural shift can seemingly add another major boost to its brand.

I welcome open source; eventually, I believe, it will eat commercial software if the right economic incentives are in place. Microsoft may be signaling this to the market.

7
mlmlmasd 4 days ago 1 reply      
Seems to be just a marketing page with no link to source code or even mention of open source?

Here is the GitHub page:

https://github.com/Microsoft/GraphEngine

8
cobookman 4 days ago 6 replies      
Am I the only one getting an "SSL cert authority invalid" error?
9
adamnemecek 4 days ago 4 replies      
I've been running into the idea of computational graphs a lot recently. It's at the core of TensorFlow (and NNs in general), but it also comes up, for example, in Apple's AVFoundation, where all audio processing happens in a graph of audio units. Does anyone know what the theoretical foundation of computational graphs is?

EDIT: I've created a wiki page for computational graphs. https://en.wikipedia.org/wiki/Computational_Graph. Add your input.
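For what it's worth, the core mechanics are small enough to sketch. A toy computational graph with reverse-mode differentiation (this is an illustration of the idea TensorFlow and friends build on, not how any of them is actually implemented):

  class Node:
      """One vertex in the graph: a value plus how it was produced."""
      def __init__(self, value, parents=(), grad_fns=()):
          self.value = value
          self.parents = parents    # upstream Nodes
          self.grad_fns = grad_fns  # d(self)/d(parent), one callable per parent
          self.grad = 0.0

  def add(a, b):
      return Node(a.value + b.value, (a, b), (lambda g: g, lambda g: g))

  def mul(a, b):
      return Node(a.value * b.value, (a, b),
                  (lambda g: g * b.value, lambda g: g * a.value))

  def backward(out):
      """Reverse-mode sweep: finish a node's gradient before propagating it."""
      order, seen = [], set()
      def visit(n):
          if id(n) not in seen:
              seen.add(id(n))
              for p in n.parents:
                  visit(p)
              order.append(n)
      visit(out)
      out.grad = 1.0
      for node in reversed(order):
          for parent, fn in zip(node.parents, node.grad_fns):
              parent.grad += fn(node.grad)

  x, y = Node(2.0), Node(3.0)
  z = add(mul(x, y), x)           # z = x*y + x
  backward(z)
  print(z.value, x.grad, y.grad)  # 8.0 4.0 2.0  (dz/dx = y+1, dz/dy = x)

The graph structure is what lets a framework schedule, parallelize, or differentiate the computation after it has been described, which is the common thread between TensorFlow and audio-unit graphs.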

10
ronack 3 days ago 1 reply      
Does Microsoft use this for anything in production?
11
VeejayRampay 3 days ago 0 replies      
Funny how Microsoft has been rocking hard with their open-source releases lately (and good on them for doing so), but there's still a pervasive feeling in the dev community about their true eventual motives and strategy.

"Fool me once..." I guess, or as we say in France "Cold water scares the scalded cat".

12
vegabook 3 days ago 0 replies      
Is this in the same space as Storm, Flink, or Spark?

Does it do streaming data, like Flink or Storm do? Or is it batch-optimized?

What languages does the compute engine support?

13
kensai 3 days ago 1 reply      
I still like the neat Pajek. Nifty little piece of software, unknown by most, but really powerful! Especially if you are into social network analysis. Who else uses it?

http://mrvar.fdv.uni-lj.si/pajek/

15
dredmorbius 3 days ago 0 replies      
16
infocollector 4 days ago 3 replies      
Does this have a python interface?
17
tempVariable 3 days ago 1 reply      
I see that the source code's "os.h" file has directives to handle Linux and Apple. Has anyone managed to build it and use it in even the most trivial way?
18
purple-dragon 3 days ago 0 replies      
I'm a little confused. Is this competitive with something like Spark?
19
linux_devil 3 days ago 0 replies      
I am confused: how is it different from Spark GraphX?
20
zump 3 days ago 0 replies      
How is this different from D-Graph?
21
floopidydoopidy 4 days ago 2 replies      
Although astroturfing doesn't seem to be a problem on HN, I'd really appreciate it if you didn't do it.
22
anon987 3 days ago 1 reply      
How the hell does this story have so many upvotes?

HN really needs to do something about Microsoft's vote manipulation - it's becoming quite blatant at this point.

23
bitwize 3 days ago 1 reply      
This is not a cuddly new Microsoft. First comes the embrace (look at all this stuff on our github!), then comes the extend (run your Linux stack on Windows and never have to give up Visual Studio!), I'm sure you know what comes next. Hint: PC manufacturers no longer have to give you the option to disable Secure Boot.
12
Oxford Deep NLP An advanced course on natural language processing github.com
597 points by melqdusy  6 days ago   65 comments top 20
1
jkbschwarz 6 days ago 2 replies      
I'm taking this course at Oxford, and they have been working through practicals 1-3 (further ones will be posted).

For anyone considering working through this outside of Oxford: I think the practicals are the real gems here and should be doable without the practical lab sessions that you get when attending the course. That being said, they use a dataset a bit closer to a real-world assignment, so it requires some patience when wrangling the data, especially for the later practicals.

However, the patience should pay off and it is rewarding once you build your own nonsense spewing TEDbot!

2
demonshalo 6 days ago 6 replies      
Here is what I don't understand about deep NLP (please keep in mind that I just began exploring this field):

I am currently working on an algorithm that uses elementary text cues in combination with large data-table lookups to determine things like relevant keywords of news articles scraped from various sites. I have given my results to hundreds of people independently to provide me with some feedback regarding the quality. Here is the current breakdown:

80% of the cases I get perfect score.

10% of the cases I get acceptable score.

10% of the cases needs improvement.

My questions here are:

1. if deep nlp can only provide us with the same level of efficiency/accuracy, then why the hell would we use it?

2. if deep nlp can provide us with more efficiency than what is stated above, then wouldn't it be safe to assume that it is UNREASONABLY efficient?

3. why are most people using deep nlp, or ML in general, right off the bat? Theoretically, it would be far more interesting to construct a model where the result of a statistical/linguistic parse is fed to some sort of ML algo in order to tackle that 10% of bad cases.
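As an aside, the kind of lookup-table approach described at the top of this comment can be sketched in a few lines; the table, weights, and "text cues" below are invented purely for illustration:

  import re
  from collections import Counter

  # Hypothetical lookup table mapping known keywords to base weights.
  KEYWORD_TABLE = {"tensorflow": 3.0, "google": 1.5, "acquisition": 2.0}
  TITLE_BOOST = 2.0   # elementary text cue: words in the headline count extra

  def extract_keywords(title, body, top_n=5):
      scores = Counter()
      for text, boost in ((title, TITLE_BOOST), (body, 1.0)):
          for word in re.findall(r"[a-z']+", text.lower()):
              if word in KEYWORD_TABLE:
                  scores[word] += KEYWORD_TABLE[word] * boost
      return [word for word, _ in scores.most_common(top_n)]

  print(extract_keywords("Google confirms acquisition",
                         "The acquisition brings TensorFlow expertise in-house."))
  # ['acquisition', 'google', 'tensorflow']

A hybrid system along the lines of question 3 would keep exactly this kind of deterministic pass and only hand the low-confidence residue to a learned model.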

3
seycombi 6 days ago 1 reply      
YOUTUBE-DL will download the lectures https://rg3.github.io/youtube-dl/
4
hmate9 6 days ago 0 replies      
I am currently taking this course at Oxford and definitely recommend following this.

We will be using TED talks as our dataset to do question answering, text completion, generating entire TED talks ourselves, etc. Definitely very interesting, and it is being taught by leading researchers in the field!

5
roystonvassey 6 days ago 0 replies      
Began this course earlier today, and they appear to be pulling off the right combo of first-principles foundations and tough problem sets, like cs224n (Karpathy's CNN class). Other NLP courses that I've taken so far have gone over my head.
6
orthoganol 6 days ago 2 replies      
> The primary assessment for this course will be a take-home assignment issued at the end of the term. This assignment will ask questions drawing on the concepts and models discussed in the course, as well as from selected research publications.

Comes as a surprise that it's not a project, as, in my experience, all the ML/DL courses I've seen online from US universities (Cal, Stanford, etc.) require one. Different university culture across the pond?

7
melqdusy 6 days ago 1 reply      
Stanford's version: https://web.stanford.edu/class/cs224n/
Note: the videos will be available later.
8
Fede_V 6 days ago 0 replies      
If you want more advanced materials, both Kyunghyun Cho and Yoav Goldberg posted excellent notes: https://arxiv.org/abs/1511.07916 and https://arxiv.org/abs/1510.00726
9
kalal 6 days ago 0 replies      
'Advanced' course with 'Sesame Street' introduction: https://ox.cloud.panopto.eu/Panopto/Pages/Viewer.aspx?id=b7d... One of them should go :)
10
mailshanx 6 days ago 1 reply      
How does the course content and rigor compare to the Stanford deep learning for NLP course? From a cursory glance at the practicals, it seems like the Stanford version has more variety and depth of problems.
11
webmaven 6 days ago 0 replies      
Link should probably be to the org rather than a specific repo: https://github.com/oxford-cs-deepnlp-2017
12
RegW 6 days ago 1 reply      
"Prerequisites: This not meant to be an introduction to Machine Learning course. Hopefully you've all got some knowledge to machine learning, otherwise you may find this a bit opaque. So at least you should understand/have taken courses in linear algebra, calculus, probability, ... we are not going to do anything particularly challenging in those areas, but ideas from those areas will be useful."

around 7 mins 30secs into the introduction

13
alfonsodev 6 days ago 1 reply      
Here [1] is an example of the videos; the player has a handy search feature and links to video parts.

Update: it would be great to have a way to take your own notes. Any Chrome extension that can help with that?

[1] https://ox.cloud.panopto.eu/Panopto/Pages/Viewer.aspx?id=ff9...

14
option_greek 6 days ago 3 replies      
What are the practical uses of language-modelling RNNs (apart from writing grammar/syntax checkers)?
15
ejanus 5 days ago 1 reply      
Is there any good introductory material? I have tried several times to understand the theory and the spirit of DL and ML, but I have not been able to connect the dots. Please direct my path.
16
hoju 5 days ago 0 replies      
Darn - I did an MSc at Oxford last year and they didn't offer this course then
17
pratap103 6 days ago 1 reply      
I'm reading 'Deep Learning' right now so this is going to be really useful. Thanks a lot!
18
mcintyre1994 6 days ago 0 replies      
This looks amazing, thankyou for sharing!
19
mrcactu5 5 days ago 0 replies      
cookie monster and the fairy keep exchanging apples <----> bananas

how does this help me solve NLP

20
jray 6 days ago 0 replies      
Edit.
13
The most mentioned books on Stack Overflow dev-books.com
633 points by vladwetzel  5 days ago   244 comments top 51
1
JelteF 5 days ago 11 replies      
Although not directly development related, the most impressive book I've had the pleasure of reading is "Gödel, Escher, Bach: An Eternal Golden Braid" (also known as GEB) by Douglas Hofstadter.

It's hard to explain exactly what it is about, but it contains ideas and concepts from mathematics, computer science, philosophy, and consciousness. All of it is explained in a very clear and interesting way. I can recommend it to anyone interested in these topics.

The book won a Pulitzer, and to take a quote from Scientific American about it: "Every few decades, an unknown author brings out a book of such depth, clarity, range, wit, beauty and originality that it is recognized at once as a major literary event."

2
grabcocque 5 days ago 9 replies      
Ah, the good old Design Patterns book, responsible for more atrocious over-abstracted, unreadable, hard to maintain Java code than anything else before or since.
3
0x54MUR41 5 days ago 1 reply      
This is like Hacker News Book [1].

But it scrapes user comments from HN that contain Amazon book links and orders them by their points. If you wonder how much money Hacker News Book has made, I would suggest reading this post [2]. The difference with this one is that Hacker News Book shows not only the most mentioned books of all time but also per week. Hacker News Book also has more diversity of book topics, since HN is not limited to programming discussions.

Anyway, I just want to say congrats to the OP for launching this. I think you missed "Show HN" in your post title.

[1]: http://hackernewsbooks.com/

[2]: http://hackernewsbooks.com/blog/making-1000-dollars-in-5-day...

4
coderholic 4 days ago 0 replies      
I started playing with Google's BigQuery today, and it has StackOverflow and HN and GitHub datasets that make pulling data like this relatively trivial. I've been super impressed with it. Examples here: https://cloud.google.com/blog/big-data/2016/12/google-bigque... and some more at https://cloud.google.com/bigquery/public-data/stackoverflow

I just quickly hacked together this query which pulls out all amazon URLs in post answers:

  SELECT
    REGEXP_EXTRACT(body, r'[^a-z](http[a-z\:\-\_0-9\/\.]+amazon[a-z\:\-\_0-9\/\.]*)[^a-z]') AS link,
    COUNT(1)
  FROM [bigquery-public-data:stackoverflow.posts_answers]
  GROUP BY 1
  ORDER BY 2 DESC
  LIMIT 20
It takes 5 seconds to run - over ALL stackoverflow answers!
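For anyone wanting to script it, a sketch of running the same query from Python with the google-cloud-bigquery client (the bracketed table name is BigQuery legacy SQL, hence the flag; the `mentions` alias is added here for readability):

  from google.cloud import bigquery

  client = bigquery.Client()   # assumes GOOGLE_APPLICATION_CREDENTIALS is set

  sql = r"""
  SELECT
    REGEXP_EXTRACT(body,
      r'[^a-z](http[a-z\:\-\_0-9\/\.]+amazon[a-z\:\-\_0-9\/\.]*)[^a-z]') AS link,
    COUNT(1) AS mentions
  FROM [bigquery-public-data:stackoverflow.posts_answers]
  GROUP BY 1
  ORDER BY 2 DESC
  LIMIT 20
  """

  job = client.query(sql, job_config=bigquery.QueryJobConfig(use_legacy_sql=True))
  for row in job.result():
      print(row.link, row.mentions)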

5
blurrywh 5 days ago 5 replies      
Nice Amazon affiliate hack.

Would be great if the OP would tell us how many sales he made through this post (once he got the stats from the Amazon affiliate dashboard).

6
henrik_w 5 days ago 1 reply      
A relatively new book that isn't mentioned (but that I really like) is "The Effective Engineer" by Edmond Lau.

https://www.amazon.com/Effective-Engineer-Engineering-Dispro...

Edit: Here's why I like it: https://henrikwarne.com/2017/01/15/book-review-the-effective...

7
vram22 4 days ago 0 replies      
The Art of Software Testing by Glenford Myers is great.

https://en.wikipedia.org/wiki/Glenford_Myers [1]

The anecdote in the beginning of the book, where he poses a simple question - how many test cases can you write for this simple program? about the geometry of a triangle - is mind-blowing, and sticks in my mind many years after reading it.

He says most people he gave it to, even experienced devs, did somewhat poorly or just average on it. I gave it as a test to a team (of juniors) on a project I was leading, once, and it made them see some light. We subsequently went on to deliver a pretty well-tested project.

[1] Excerpts:

[ Glenford Myers (born December 12, 1946) is an American computer scientist, entrepreneur, and author. He founded two successful high-tech companies (RadiSys and IP Fabrics), authored eight textbooks in the computer sciences, and made important contributions in microprocessor architecture. He holds a number of patents, including the original patent on "register scoreboarding" in microprocessor chips.[1] He has a BS in electrical engineering from Clarkson University, an MS in computer science from Syracuse University, and a PhD in computer science from the Polytechnic Institute of New York University. ]

[ During this period, Myers also authored his first four books, including The Art of Software Testing, a book that became a classic and a best-seller in the computer science field, staying in print for 26 years before it was replaced by a second edition in 2004.[3] Myers also served as a lecturer in computer science at the Polytechnic Institute of New York University, where he taught graduate-level courses in computer science. Years later, he was the 1988 recipient of the J.-D. Warnier Prize for his contributions to the field of software engineering. ]

8
driscoll42 5 days ago 1 reply      
Is there any way to do this datamining, but not just looking at Amazon links? By that I mean that I'm sure people mention "Code Complete" often enough, but don't bother linking to it on Amazon. It'd be interesting to see the results.

It could also be interesting to see this study weighted by the number of votes the posts referring to them got.

9
Unbeliever69 5 days ago 4 replies      
Didn't see SICP (Structure and Interpretation of Computer Programs) on that list. Surprised, considering how often I hear it quoted on SO. You would think it was the bible of computer science. A little long in the tooth, I'd imagine.
10
moron4hire 5 days ago 3 replies      
I wonder how many people actually read the books they were suggesting.

Like here on HN, Art of Computer Programming gets mentioned a lot. But I've not met anyone who has actually read it. And I own a copy. It's so hard of a read that I am just going to assume anyone who says they did is either lying or Donald Knuth.

I'm skeptical of books as a means of information conveyance. They make great mediums for narratives, but narrative is too slow for technical writing. Similarly, I dislike videos, podcasts, and conference talks. Technical writing should be a wiki-like document where terms expand in-place. Start super high-level. Always written in clipped, imperative style. Single line per fact. Like mathematical proofs, but maybe in reverse.

11
hodgesrm 4 days ago 1 reply      
I'm really happy to see "Java Concurrency in Practice" at #4 on the list.

It's a great intro to concurrent programming with lessons that apply to virtually any high-level programming language. The chapter on the Java memory model is the best practical description of how languages map to multi-processor memory models I have ever read. (Chapter 16 in my edition.)

After reading this book it's easy to understand why concurrency features in other languages are necessary and what they are doing behind the scenes. Golang channels come immediately to mind.

12
peeters 5 days ago 0 replies      
Seems like a flaw in the algorithm if Effective Java is not in there. I filtered by the Java tag and "Effective C++" showed up, but not Effective Java.
13
artursapek 5 days ago 1 reply      
People who make these Amazon referral farm sites, is anyone willing to share how much money they make off theirs? Maybe like a $ per 100 pageviews stat or something? I'm curious.
14
tankenmate 5 days ago 1 reply      
One book that I didn't see come up on the list was "The Art of Computer Programming". It would be interesting to see whether that was because its popularity got spread out across multiple volumes or because people just don't mention it that much.
15
dj-wonk 4 days ago 2 replies      
The results from the "Compiler-Construction" tag are not good. You'll find better results from a search engine, Q&A site, or bookseller:

Here are the top 10:

  #1 Design Patterns - Ralph Johnson, Erich Gamma, John Vlissides, Richard Helm
  #2 Clean Code - Robert C. Martin
  #3 The C Programming Language - Brian W. Kernighan, Dennis M. Ritchie
  #4 CLR Via C# - Jeffrey Richter
  #5 Modern C++ Design - Andrei Alexandrescu
  #6 Large-scale C++ Software Design - John Lakos
  #7 Inside the Microsoft Build Engine - Sayed Ibrahim Hashimi, William Bartholomew
  #8 Programming Microsoft ASP.NET 2.0 core reference - Dino Esposito
  #9 Compilers - Alfred V. Aho
  #10 Accelerated C++ - Andrew Koenig, Barbara E. Moo
Ok, #9 is a sensible choice. The rest are not about compiler construction.

Now, the top 20:

  #11 Hacker's Delight - Henry S. Warren
  #12 Nos camarades Français - Elida Maria Szarota (actually The C++ Programming Language by Stroustrup)
  #13 Compilers - Alfred V. Aho, Ravi Sethi
  #14 Inside the C++ Object Model
  #15 Code - Charles Petzold
  #16 Hacking, 2nd Edition - Jon Erickson
  #17 C Plus Plus Primer - Stanley B. Lippman, Jose Lajoie, Barbara E. Moo
  #18 C Interfaces and Implementations - David R. Hanson
  #19 Language Implementation Patterns - Terence Parr
  #20 LISP in Small Pieces - Christian Queinnec
Ok, #13 and #19 look relevant. The rest... not so much, at least by a quick skim.

Here are the top 30:

  #21 Linkers and Loaders - John R. Levine
  #22 Assembly Language Step-by-Step - Jeff Duntemann
  #23 The Garbage Collection Handbook - Richard Jones, Antony Hosking, Eliot Moss
  #24 Game Scripting Mastery - Alex Varanese
  #25 Domain-specific Languages - Martin Fowler, Rebecca Parsons
  #26 Computer Architecture - John L. Hennessy, David A. Patterson
  #27 The Elements of Computing Systems - Noam Nisan, Shimon Schocken
  #28 The ACE Programmer's Guide - Stephen D. Huston, James C. E. Johnson, Umar Syyid
  #29 Modern Compiler Implementation in C - Andrew W. Appel, Maia Ginsburg
  #30 Algorithms + Data Structures - Niklaus Wirth
Caveat: I am not a compiler writer, though I have read many of these books. Still, my point stands: a good search engine gives more convincing results. Let me know if I'm missing something.

16
ComputerGuru 5 days ago 3 replies      
Can someone recommend a good book on linear algebra for somebody that took it in college but needs a refresher plus some advanced linear algebra concepts for machine learning?
17
CalChris 4 days ago 0 replies      
The 2nd edition of the Dragon book is a worthy update to an old classic. It's the only book on the list I like.

I've been sitting in on Monica Lam's Stanford CS 243 lectures, and she's covering scheduling and software pipelining right now. Lam definitely knows her material; she wrote the papers before she re-wrote the book. She's an excellent lecturer to all of 15 students, one of whom (me) asks perhaps more than his fair share of questions.

18
sharmi 4 days ago 0 replies      
For all those people who wonder why the popular XYZ book is not on the list:

The algorithm here is to take the links to amazon and make a count of it. Plain and simple.

For a more popular book like SICP, it's just going to be mentioned by its acronym, SICP. The expanded version of the title, "Structure and Interpretation of Computer Programs", is rarely mentioned, and so is the author's name. An Amazon link would be almost non-existent. This is because the book is so popular that it needs no introduction. Unfortunately, per the logic used for this site, it will not be counted. The same goes for books whose multiple versions show up in the results.

This is not to put down the dev-books.com site in any way. To do a disambiguating parser that can handle any format of title and/or author name would increase the complexity and implementation time by orders of magnitude. It would also be completely against the mantra of "Release early and often".
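To make the counting logic concrete, it is roughly this (a sketch, with a deliberately naive URL pattern):

  import re
  from collections import Counter

  AMAZON_LINK = re.compile(r"https?://[^\s\"'<>]*amazon\.[a-z.]+/[^\s\"'<>]*")

  def count_amazon_links(post_bodies):
      """Tally every Amazon URL appearing in a collection of post bodies."""
      counts = Counter()
      for body in post_bodies:
          counts.update(AMAZON_LINK.findall(body))
      return counts

  posts = ["Read https://www.amazon.com/dp/0131103628 (K&R)",
           "SICP is great"]                 # title-only mention: never counted
  print(count_amazon_links(posts).most_common(1))

The second post illustrates the point above: a bare title, however famous, simply never enters the tally.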

19
jackschultz 4 days ago 0 replies      
Really cool here. Speaking of book mentions, I actually did a project a couple months ago that checks for all the Amazon products mentioned on Reddit: http://www.productmentions.com

I don't have it at the moment, but it seems useful to check for topics or something, so it'd have the ability to check for book topics and things like that too.

20
kristopolous 5 days ago 1 reply      
I wonder how many (0?) users cite the same book over and over again and if that's accounted for.

Furthermore I wonder if peddling one's own book on SO leads to more sales.

21
forgetsusername 4 days ago 0 replies      
How valuable is this without any of the discussion context surrounding these books? It seems unlikely that I'm going to purchase a book based on a metric like "mentions", when much of the discussion could be negative. I mean, I get it from the perspective of scraping/development practice, but it doesn't feel overly useful. And I'm a book hunter.
22
deskcoder 5 days ago 1 reply      
This is pretty cool. Would also be neat if you could filter by date ... like what were the top JS books mentioned so far in 2017.
23
24
agentgt 4 days ago 0 replies      
I have owned many of those books at various times in my life (either because of school or I borrowed it errr accidentally stole it from work).

The only book I have kept and refuse to part with is the Kernighan + Ritchie book (and I'm not a C programmer).

25
akulbe 4 days ago 0 replies      
It seems interesting to me how often books about C++ are mentioned in there: 5 of the 30 listed.

I say that because it seems like people like to talk badly about C++, as if it's a terrible language (rather than just another tool in the toolbox).

I don't have an opinion either way. Just making an observation.

26
jcahill84 5 days ago 2 replies      
This is great. Do you have the code you used to scrape and rank the books posted somewhere?
27
Malic 5 days ago 2 replies      
I'm surprised that Peopleware (DeMarco and Lister) isn't in there somewhere.
28
megawatthours 5 days ago 4 replies      
JS: The Good Parts is obsolete
29
40acres 5 days ago 1 reply      
Not a lot of algorithms books on this list; I was surprised to see CLRS at #14.
30
mooneater 4 days ago 0 replies      
Results for R remind me that R is not a search-friendly name.
31
LoSboccacc 5 days ago 1 reply      
Too many hardcore coding books, imho. "The Design of Everyday Things" and "Don't Make Me Think" should be read more.
32
hexagonsun 4 days ago 1 reply      
One of my older, much more experienced co-workers says "The Art of Unit Testing is last... how poetic". Perfect.
33
grabcocque 5 days ago 0 replies      
I got a problem with this code. I know, I'll use design patterns.

Great! I now have an AbstractProblemFactory to generate problems on demand.

34
maweki 4 days ago 0 replies      
Happy to see that Okasaki is on the list.
35
pc86 4 days ago 0 replies      
The C# button doesn't seem to actually filter the results, perhaps due to the #?
36
sAbakumoff 5 days ago 0 replies      
Nicely done, but what about Python, Ruby, and PHP? Haven't you found any books on these topics at all?
37
codazoda 4 days ago 0 replies      
This is a great reference and, I hope, a great way to make some money being an affiliate. Good job.
38
RusAlex 5 days ago 0 replies      
When I select the HTML tag, the Gang of Four's "Design Patterns" is in first place.
39
xycodex 4 days ago 0 replies      
Why is no one talking about the #1 book on legacy code?
40
kris-s 4 days ago 0 replies      
Just to throw my two cents in here: my favorite technical book is the fairly recent Go Programming Language by Donovan & Kernighan (www.gopl.io).

It's very readable and has _excellent_ exercises to do as you work through the book.

41
jordache 4 days ago 0 replies      
I see a regex book at #15. I really need that.

Is mastering regex simply a form of memorization? There doesn't seem to be any logical pattern to the various flags.
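There is more pattern to the flags than it first appears; in Python's re module, at least, each flag is a single bit that you OR together, so combinations compose mechanically:

  import re

  # re.I (IGNORECASE) and re.M (MULTILINE) are just bit flags, OR'd together.
  pattern = re.compile(r"^chapter \d+", re.IGNORECASE | re.MULTILINE)

  text = "Chapter 1\nsome prose\nCHAPTER 2\n"
  print(pattern.findall(text))        # ['Chapter 1', 'CHAPTER 2']

  # Inline equivalents live inside the pattern itself:
  print(re.findall(r"(?im)^chapter \d+", text))   # same result

Other regex flavors use the same single-letter inline codes (i, m, s, x), so the memorization burden is smaller than it looks.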

42
mongmong 4 days ago 0 replies      
Glad to see Domain Driven Design aka the Blue Book.
43
sigstoat 5 days ago 0 replies      
i'd really like to see this repeated for other top stackexchange sites. math, mathoverflow, software engineering, electrical engineering, etc.
44
JacenRKohler 5 days ago 0 replies      
This is a great resource. Thanks for sharing!
45
RawData 5 days ago 2 replies      
No Ruby on Rails books? Or Python?
46
erichmond 4 days ago 0 replies      
2,3,5,6,8,9,11,12,14,15
47
ska 5 days ago 1 reply      
Is there any reason to expect this distribution to vary much from "top selling"?
48
TylerH 4 days ago 1 reply      
Should be a short list since Stack Overflow is not the place for book recommendations.
49
devsmt 5 days ago 0 replies      
beautiful!
50
SquareWheel 4 days ago 2 replies      
You're violating the terms of Amazon's associate program by including affiliate links without notice.

>"You must clearly state the following on your Site or any other location where Amazon may authorize your display or other use of Content: We are a participant in the Amazon Services LLC Associates Program, an affiliate advertising program designed to provide a means for us to earn fees by linking to Amazon.com and affiliated sites."

https://affiliate-program.amazon.com/help/operating/agreemen...

51
ak39 5 days ago 0 replies      
"Patterns" are to the art of good programming what organized religion is to the art of good living. Everyone loves the ideas, everyone hates the implementations.
14
Apple proposes new web 3D graphics API webkit.org
573 points by mozumder  5 days ago   620 comments top 57
1
NickGerleman 5 days ago 11 replies      
"The major platform technologies in this space are Direct3D 12 from Microsoft, Metal from Apple, and Vulkan from the Khronos Group. While these technologies have similar design concepts, unfortunately none are available across all platforms."

So Apple, the only company not supporting Vulkan on their platforms, is complaining that there isn't a cross-platform solution?

2
shmerl 5 days ago 3 replies      
Will it be patent-encumbered, like Apple's "proposal" for the touch events API?

> Meanwhile, GPU technology has improved and new software APIs have been created to better reflect the designs of modern GPUs. These new APIs exist at a lower level of abstraction and, due to their reduced overhead, generally offer better performance than OpenGL. The major platform technologies in this space are Direct3D 12 from Microsoft, Metal from Apple, and Vulkan from the Khronos Group. While these technologies have similar design concepts, unfortunately none are available across all platforms.

Oh, really? And who is to blame for Vulkan not being available on Apple platforms?

3
_ph_ 5 days ago 1 reply      
For all those arguing "just use Vulkan": the API proposed is a high-level API which could be implemented on top of Vulkan. Or any other low-level API like DirectX or Metal.

When designing an API, you have to take the characteristics of the calling language into account. Especially when you are trying to achieve the highest possible performance. You can define high-level APIs, which try to minimize the amount of computation in the calling language, or a more low-level API, which offers only very basic operations, where the client has to implement more of the logic themselves.

When targeting "fast" languages like C/C++ you tend to design more low-level APIs, assuming that the client can implement abstractions more efficiently as they are more tuned to their needs. When targeting slower languages, you want to do more computation in the API.

So for designing a new Web API, you have to consider both that JavaScript is slower than compiled C and, on top of that, the security requirements. Your API should not create an attack vector into your device. This means the best Web API would be an abstraction on top of the native low-level API, and the only question with respect to those would be: can the proposed API be implemented well and efficiently on, e.g., Vulkan?

4
hutzlibu 5 days ago 1 reply      
It is so sad to see this whole discussion.

Because I usually think HN is different from the angry, uninformed mobs out "on the streets".

I mean, I don't know much about low-level graphics APIs; I read about them, but I don't work with them directly. But apparently most of the people here don't know either!

They basically seem to know that they like open standards (so do I), that Vulkan is an open standard, and that they've heard of a possible WebVulkan, so that must be the solution then. Or just continue using WebGL. Because it works, right?

What they didn't hear of, and what I found with a bit of reading before posting anything, is that WebVulkan first of all doesn't exist, and secondly probably shouldn't exist, mainly because of security. And that it is very, very low-level. (e.g. https://floooh.github.io/2016/08/13/webgl-next.html) And that WebGL itself has some flaws, so it makes sense to design a new API for the future, without them. Because even though WebGL is working and awesome right now, that's not a reason not to evolve ...

But why bother reading and really discussing a new Web proposal, when "People just want Vulkan on MacOS and/or IOS."?

5
ytugiuyghvk 5 days ago 2 replies      
Related reading re. Vulkan

"What a WebGL2 successor could look like and why it cant be WebVulkan" - https://floooh.github.io/2016/08/13/webgl-next.html

See also the author's sketch of a next-gen web graphics API on top of WebGL (https://floooh.github.io/2016/10/24/altai.html), which is (perhaps unsurprisingly) broadly similar to the sketch given in the article (minus command queues, shader libraries, ...).

6
Benjamin_Dobell 5 days ago 3 replies      
I feel like Apple are just trying to get in early with a proposal so they don't get forced into supporting Vulkan (and wasting all that effort on Metal).

Mind you, it does at least look like they're trying not to be jerks about it (even if the motivation is somewhat selfish). They specifically mention the competition to Metal and how "webgpu" is ideally an abstraction that'll sit on top of Vulkan, Metal and Direct3D 12.

It'll be interesting to see how this pans out. Vulkan, Metal and Direct3D 12 are all intentionally very low level, adding a wrapper of any kind may be seen as non-ideal by all parties.

7
nkkollaw 5 days ago 4 replies      
Apple proposing standards is ridiculous.

They only care about standards when they're basically forced to adopt/support them, or they're their own.

These are the same guys that:

- use Lightning instead of mini USB

- removed the headphone jack

- are creating yet another proprietary connector for accessories

- don't allow their OS on third-party hardware

- only allow developing for iOS from macOS

- only allow apps on iOS if you install from the App Store

- only allow their own browser engine on iOS

- etc.

Everyone wants their own standard to be the standard, if they really cared about it they would contribute to Vulkan and create WebVulkan.

8
gfwilliams 5 days ago 2 replies      
I'd like to see Apple implement Safari's missing functionality before they try and come up with a replacement for WebGL.

Where are getUserMedia, MediaRecorder, WebRTC, filesystem, vibration, screen orientation, Service Workers, Web Bluetooth? What about WebGL 2? I'm sure there are a few more I missed too.

Those are things that are really holding the web browsing experience back on iOS/MacOS, not an extra 20% 3D performance and slightly shinier graphics.

Or why not lift the app store restrictions so that Google/Mozilla can actually ship non-crippled web browsers and implement it all for you?

9
vvanders 5 days ago 3 replies      
Anything that's not based on the Vulkan spec is just a land-grab by Apple to push their own technologies.

As someone who spends a lot of time in that space, I don't really see what this is solving; WebGL is good enough, and anyone serious about performance/compute is going to drop down to native anyway.

10
vilya 5 days ago 1 reply      
All of you complaining about this proposal not being based on Vulkan seem to be overlooking the fact that Vulkan is actually quite cumbersome to use. Metal, on the other hand, is a really well designed API and in my opinion strikes just the right balance between performance and usability. If it was available for non-Mac platforms too, it would be my first choice of graphics API every time. So for me, a cross platform web graphics API based on Metal is really quite an exciting prospect - much more so than one based on Vulkan - and I applaud Apple for proposing it.
11
Zafira 5 days ago 0 replies      
I think this is a fruitful and promising discussion on the part of the WebKit team. The biggest problem here is the antipathy towards Apple, driven by the perception that Apple management thinks the Macintosh is a dead end, and this is starting to infect other discussions.

As someone else noted, Apple created OpenCL, helped make it a standard and dumped it. It embraced OpenGL when Mac OS X first came out and now it can't be bothered to implement anything beyond 4.1. All of these little irritations serve to remind people of the Apple of yore that had NIH in extremis and was the gang that couldn't shoot straight.

The hostility is unfortunate, but I fear it's going to become more frequent if Apple's senior management thinks the ship is fine.

12
sowbug 5 days ago 0 replies      
I find Apple Inc. as annoying as the next geek does, but I agree with them here. It's rarely -- possibly never -- a good idea to tie a web API design to a desktop API (see, e.g., WebSQL). The web platform is too different from desktop platforms for any nontrivial functionality to port slavishly to it.

Moreover, web standards last much longer than typical desktop APIs. Do we really want to take a snapshot of Vulkan's API today and live with it on the web for the next couple decades, as desktop Vulkan continues to evolve?

It's better to build a web API that is fluent for its target platform, taking care that it's possible to implement it performantly on foreseeable host platforms.

Let Apple lock themselves in if they want. Don't lock the web into Vulkan.

13
jolux 5 days ago 0 replies      
Why exactly are we laying it on thick against Apple here when they have the support of other browser vendors in proposing this and are trying to fix a legitimate problem? All of the comments seem to be "well wouldn't it be nice if they used WebVulkan instead" when WebVulkan doesn't fucking exist. If it were so obviously superior why isn't it in the pipeline already? Apple is not the only company that can propose standards, if there was really such an immense appetite for WebVulkan I think it would already have been proposed!

This doesn't even touch the reasons it hasn't been, and the most important of these for me is security. "Well yes, it's extremely low level and admittedly very dangerous, but we can make it secure enough to expose to the most hostile environment in the history of computing" is how we ended up in the mess we are in right now security-wise. If you want a secure system, you have to build it that way from the start, preferably in languages that don't let you decapitate yourself, like Rust. I'm pretty sure it's the consensus that it's impossible to write secure C code these days, so why is everyone so convinced we could sufficiently harden a low-level graphics API for web use? This is like if, when Netscape proposed JavaScript, everyone had gone "well you're just trying to dominate it, we already have C89, just put that in the browser." Fucking NO. When are we as an industry and a discipline going to learn our lesson with this "code first, secure later" crap?

And again, if this could be done and it's such a great idea, why is this the first proposal and not actual WebVulkan? The argument here is not WebVulkan vs WebGPU because again WebVulkan does not fucking exist. When or if it does, perhaps we can argue about why Apple won't support it if they don't, but until then, you're shooting down one attempt at a standard API with a hypothetical thing that does not exist and which a lot of experts in the field seem to think is a bad idea from several angles.

I will reiterate this one last time because it seems like everyone has missed this: there is currently no cross-platform solution for next-gen web graphics except for this proposal. Put up or shut up.

14
DigitalSea 5 days ago 1 reply      
Another anti-competitive initiative disguised as "Apple-led innovation" - maybe Apple should get on board with Vulkan, everyone else has, instead of complaining about there being no cross-platform solutions.
15
sova 5 days ago 1 reply      
Go for it! I would love to see more focus on browser-based what-the-future-of-code-may-look-like. That said, I don't think that 3D interfaces are the only kinds that need good language / representation. Not that the rules should be really rigid, but I think that if we pursue the notion of reversely symmetric UI-languages we can make a lot of progress. Imagine that you have a 3D scene, what is the minimal language you need to describe it? How can we make it so that language/code is not only minimised but extensible? We must strike a balance.

I would like to have such simplicity in a potential language that when I see a scene in my mind's eye, it's really easy to transfer to the digital realm. I think simplicity of representation is key.

Being a web dev on my own hours for the past several years has given me a pretty solid grasp of the needs of an interactive application, and I can say that there has to be some way for users to easily interact, offering all the possible inlets for the information. Starting a 3D-internet movement might require rethinking the inputs. Won't we just use holo-wands to navigate vast swathes of data rapidly? Run through this field of data sheets...

So yeah, rethinking the medium will naturally come up as a question in conversations around this, and I think that it's simply a matter of keeping "user input" as straightforward and easy as possible, in the local _and_ distributed sense. With that as a foundational block, the rest of the 3D scene can start to make sense.

16
SquareWheel 5 days ago 0 replies      
I'm glad to see Apple getting interested in web standardization again, but I'm not a big fan of this one. Wouldn't a WebGL 3.0 based on Vulkan make more sense? I'd much rather see Apple warming up to Vulkan rather than have so much Metal influence on a common web API.

From a dev perspective my thinking is this: if I were to learn a 3D graphics API for the web, I'd like that knowledge to be transferable to native development as well. An API built on Vulkan - even if abstracted - would be more pertinent and more compatible with existing tools than one built on Metal.

I'll be curious to see comments from other committee members as they'd have more insight on the subject.

17
unsigner 5 days ago 1 reply      
"Only platforms without Vulkan are Apple's" is most certainly NOT true. The Xbox One is a widely available platform for which a lot of 3D effort is invested, and rewards this effort really well - much better than other platforms (like desktop Linux or old versions of Windows) that are frequently thrown around as arguments for "Vulkan everywhere".

It is a common observation among game developers who have really tried both that Metal is a much more accessible API than Vulkan; "90% of the performance for 10% of the effort".

18
warrenpj 5 days ago 0 replies      
An obvious use case for this API is to allow native games that use Vulkan to be ported to the Web without a proprietary abstraction layer. (The abstraction layer would be in the emscripten standard library rather than the game engine.) But is this a legitimate use case that someone actually needs?

I can see two views here. Firstly, that as the open standard, Vulkan should have a privileged position and be supported from the Web side, to make the Vulkan -> Web -> Native abstraction layer standard, small and efficient. As a developer you would just implement the necessary algorithms for your application once, using Vulkan.

The alternative view is that Vulkan is just too low level and doesn't fit in the web security model. Then, the purpose of WebGPU is not to implement Vulkan in Javascript. Instead, the web is just another target in addition to the existing three from Apple, Microsoft, and Khronos. To get maximum performance, developers must write an application-specific, high-level platform abstraction layer, and implement that interface for each supported platform.

It seems to me that this article is of the latter view, especially as it doesn't put Metal in a strategically weaker position than Vulkan.

If you don't need state of the art or original rendering techniques and algorithms, I think there is already a high level abstraction which is compatible with the web: OpenGL ES 2, and soon OpenGL ES 3. (WebGL & WebGL 2).

19
msie 5 days ago 4 replies      
MS doesn't support Vulkan! Why are many people bashing Apple as the sole company not supporting it??? MS has abandoned OpenGL in favor of D3D. Why have people forgotten this???
20
surfmike 5 days ago 1 reply      
How does this fare on security? The Vulkan API is built on modifying command buffers and passing parameters by writing directly to memory, rather than on the client/server model of OpenGL. I could see that being an issue with the sandboxed model of the browser.

Disclaimer: haven't read the new Apple web 3D spec, just curious about others' opinions.
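
Roughly the difference, sketched in JavaScript (the Vulkan-side binding below is invented for illustration; only the WebGL call is real):

  // WebGL's client/server model: the app hands data to the API, which
  // copies and validates it -- the browser sees every byte cross over.
  gl.bufferData(gl.ARRAY_BUFFER, vertexData, gl.STATIC_DRAW);

  // Vulkan-style persistent mapping (hypothetical JS binding): the app
  // writes into memory the GPU reads directly, possibly while a submitted
  // command buffer still references it -- far harder to sandbox.
  const mapped = new Float32Array(vulkanBuffer.map());
  mapped.set(vertexData);
  queue.submit(commandBuffer); // GPU may read `mapped` concurrently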

21
AndrewKemendo 5 days ago 1 reply      
Biggest questions I have are:

1. What is the (proposed) backward compatibility across devices?

2. Given that it is structured for Metal shaders, what are the plans for other, non-Apple devices? I see the hat tip to D3D and Vulkan, but I assume they need to get on board first - any early takers? After all, a common standard means cross-platform hardware support, something Apple has never really embraced.

22
suyash 5 days ago 0 replies      
This is a great move by Apple. The WebKit team is thinking ahead about the power of GPUs and opening them up for more than just 3D graphics. GPUs are already being utilized for AI and ML. Web developers need better access to low-level computation and simple APIs.
23
elFarto 5 days ago 0 replies      
I'm not quite sure what Apple were expecting, when they have shown such clear contempt for OpenGL by leaving it at version 4.1 (while everyone else is up to 4.5, even Mesa).

But to the people saying use Vulkan for the web API: that wouldn't be a good idea. Vulkan is a very verbose, low-level API.

Here[1] is a sample for the most basic of renderers (it renders a single triangle). That's 1,000 lines of (commented) code. It's not a good fit for web development. A far better solution would be to evolve the current WebGL API than to shoehorn Vulkan into somewhere it was never meant to be.

[1] https://github.com/SaschaWillems/Vulkan/blob/master/triangle...
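
For comparison, a complete WebGL 1 program that draws the same single triangle is roughly two dozen lines (assuming a <canvas id="c"> on the page):

  // Complete WebGL 1 "hello triangle" -- compare with ~1,000 lines of Vulkan.
  const gl = document.getElementById('c').getContext('webgl');

  function compile(type, source) {
    const shader = gl.createShader(type);
    gl.shaderSource(shader, source);
    gl.compileShader(shader);
    return shader;
  }

  const program = gl.createProgram();
  gl.attachShader(program, compile(gl.VERTEX_SHADER,
    'attribute vec2 pos; void main() { gl_Position = vec4(pos, 0.0, 1.0); }'));
  gl.attachShader(program, compile(gl.FRAGMENT_SHADER,
    'precision mediump float; void main() { gl_FragColor = vec4(1.0, 0.5, 0.0, 1.0); }'));
  gl.linkProgram(program);
  gl.useProgram(program);

  // One vertex buffer holding three 2D positions.
  gl.bindBuffer(gl.ARRAY_BUFFER, gl.createBuffer());
  gl.bufferData(gl.ARRAY_BUFFER,
    new Float32Array([0, 0.8, -0.8, -0.8, 0.8, -0.8]), gl.STATIC_DRAW);

  const loc = gl.getAttribLocation(program, 'pos');
  gl.enableVertexAttribArray(loc);
  gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0);

  gl.drawArrays(gl.TRIANGLES, 0, 3);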

24
thedjinn 5 days ago 2 replies      
The API looks like a direct port of Metal to JavaScript.
25
TazeTSchnitzel 5 days ago 2 replies      
That API looks potentially more pleasant to use than WebGL, which is a nice surprise given it's purportedly more low-level.
26
binarymax 5 days ago 0 replies      
Lots of negativity in this thread related to non-adoption of Vulkan. But we've been waiting too long for generic compute, and if Apple brings something to the browser for GPGPU sooner, maybe it will finally get everyone else to act - competing standard or not.
27
wnevets 5 days ago 2 replies      
As someone who knows almost nothing about 3D graphics and its APIs, how does Vulkan play into all of this? Why should this new API be used instead of adding Vulkan's API to the browser?
28
kin 5 days ago 0 replies      
Safari's API support is severely lacking relative to Chrome/Firefox. It would be great if Apple would spend some time getting Safari up to parity so devs can create a more consistent user experience for their browser.
29
normalperson123 5 days ago 0 replies      
what a coincidence. just the other day i kind of stumbled across the situation with apple and vulkan and was totally mystified. why would apple, who was a supporter of vulkan, not implement drivers for it, and in effect block the possibility for a universal webvulkan standard and all the glorious benefits of it? lo there was a reason! i have to say that i think apple is being a complete dunce. so they refuse to implement vulkan drivers, or even modern gl drivers, and now their next move is to create yet another graphics api? the success of which would depend on everyone else supporting it when apple wouldnt support stuff? apple has enough money (tax dodgers) to support other standards. they should probably do that and do it well before telling everyone else what to do. /rant
30
greggman 5 days ago 0 replies      
This probably doesn't matter but one of the advantages of WebGL is it's just OpenGL ES so porting to it is relatively simple.

On the other hand, all the big engines already support multiple backends, so adding one more for yet another API is probably not a problem.

31
douche 5 days ago 0 replies      
Is it maybe time to step back and think about whether having a low-level 3D graphics API in the browser makes any sense at all? I know, we HAVE to run every conceivable thing in the browser and in JavaScript if at all possible. But just maybe building native apps would be easier than trying to shoehorn an API into browser specs, where not only will you have to deal with hardware variations, there will be the inevitable incompatibilities between each browser vendor's implementation.
32
socmag 4 days ago 0 replies      
First of all, I'd agree with everyone that if any new Web 3D API should come to fruition, it should be based on a Vulkan-like state model.

That said, from what I read of the proposal there were some interesting and useful ideas. So take those ideas and start going to the meetings to build consensus within the open standards process instead of trying to co-opt it (which is how this comes off).

In addition, Apple might want to consider getting WebKit in line with existing web standards before throwing curve balls. It is currently lagging way behind in many areas, which means we have to do much more work for WebKit-based browsers than for anything from Mozilla, Microsoft or Google, and support for some features isn't even possible.

Don't mean to be a Debbie Downer, and I'm sure the team itself has good intentions, but come on... like everyone says, the last thing the world needs right now is yet another proposal for a modern Web 3D API. It will reset the clock. Again!

33
HelloNurse 5 days ago 1 reply      
The discussion about Apple not supporting Vulkan is irrelevant because the point of this Webkit proposal is replacing WebGL 2 for web applications, not replacing the sadly fragmented "real" GPU APIs for native applications.

As a WebGL replacement and as a 3D API in general, many details of the proposal are strange.

>Since we were building on Apple platforms we picked the Metal Shading Language.

Mature and portable technology...

>pipelineDescriptor.colorAttachments[0].pixelFormat = "BGRA8Unorm";

No enumerations? Are they going to validate strings?

>let vertexData = new Float32Array([ /* some data */ ]);
>let vertexBuffer = gpu.createBuffer(vertexData);
>commandEncoder.setVertexBuffer(vertexBuffer, 0, 0);

No internal structure?

>commandEncoder.drawPrimitives("triangle", 0, 3);

One triangle? No index buffers?

I don't understand whether they are offering an insultingly dumbed down overview of a rather complete (but presumably ugly) proof of concept implementation or throwing around gratuitously WebGL-incompatible ideas to dominate the standardization process.
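
For concreteness, the kind of structure I would have expected instead (entirely hypothetical names, following the style of the quoted snippets):

  // Hypothetical sketch, not the proposal: enum-like constants instead of
  // strings, and an index buffer so vertices can be shared between triangles.
  pipelineDescriptor.colorAttachments[0].pixelFormat = gpu.PixelFormat.BGRA8Unorm;

  const indexData = new Uint16Array([0, 1, 2, 2, 3, 0]); // two triangles, four vertices
  const indexBuffer = gpu.createBuffer(indexData);
  commandEncoder.setVertexBuffer(vertexBuffer, 0, 0);
  commandEncoder.drawIndexedPrimitives(gpu.PrimitiveType.Triangle,
                                       indexData.length, indexBuffer, 0);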

34
bobajeff 5 days ago 1 reply      
So is this supposed to be like a WebVulkan?
35
ctdonath 5 days ago 0 replies      
Any word from Carmack on this?
36
walterbell 5 days ago 1 reply      
Could this be adapted for graphics virtualization, e.g. allowing several VMs to securely and performantly render 3D workloads on a single physical GPU?
37
intrasight 5 days ago 1 reply      
If Apple has like 4% of the OS market and seems to be abandoning the desktop, why do we really care what Apple thinks?
38
deafcalculus 5 days ago 0 replies      
This is awesome. IMHO, reducing the complexity of resource management in the app is the right way to go for a web API, and Metal is closer to what a web API should be than Vulkan.

Porting an app from OpenGL to Metal is much easier than porting to Vulkan. A naive port to Metal often performs better than the GL version, whereas getting performance out of a Vulkan port is a lot harder [1]. So, I really hope the proposed API will be a C-based API that can also serve as a middle layer on top of Vulkan/D3D12 for lesser mortals like me writing cross-platform native apps.

[1] aras-p is a Unity dev: https://twitter.com/aras_p/status/628569113053528065 and https://twitter.com/aras_p/status/628569238794543104

39
ino 5 days ago 0 replies      
If this takes off, what are the chances it will spread out of the web and be the new standard way we draw graphics, the new abstraction available almost everywhere, the successor to OpenGL?
40
watertom 5 days ago 0 replies      
Apple can't even ship up-to-date desktop and laptop hardware, and owns such a small percentage of that market, that it makes the proposal almost laughable.
41
swipecity 4 days ago 0 replies      
I'm new to this, but does it mean that Apple is preparing for some AR stuff? Thanks.
42
amelius 5 days ago 0 replies      
> Apple proposes new web 3D graphics API

Which works only on one size of screen? :)

43
mbrookes 5 days ago 3 replies      
Hey Apple, how about developing your browsers to support modern web technologies first?
44
cfv 5 days ago 0 replies      
I think it's fine, if they can manage to polyfill this fucker.

Otherwise it's just an incompatible wart like the getUserMedia implementation was for years, like webaudio still is, and like some of their latest hardware ideas have been.

Polyfill first, engine bloat later

45
macawfish 5 days ago 1 reply      
Hey Apple, where's your Web MIDI support at?
46
neom 5 days ago 1 reply      
Kicking and dragging on WebRTC, WebCL, Vulkan. Business interests much?
47
natvert 5 days ago 0 replies      
How about just implementing serviceworker first?
48
giridhar50 5 days ago 0 replies      
this might be interesting.
49
doggydogs94 5 days ago 0 replies      
Yet another fine API.
50
whyileft 5 days ago 3 replies      
That is a very deceptive way to put it.

https://jakearchibald.github.io/isserviceworkerready/

Safari is intentionally crippled in several areas. It's very specific and obvious. It is dishonest for you to pretend at this point that Apple is going full force for web standards while they are explicitly not implementing features that are available everywhere else - and while those specific features line up directly as the ones which allow web apps to compete with its native application market.

Edit: To anyone seeing this down-voted. Apple employees typically down-vote stuff like this so please do not take this being greyed as anything but manufactured opinion.

51
botverse 5 days ago 0 replies      
101 mentions of Vulkan so far
52
dzhiurgis 5 days ago 0 replies      
So much more value would be created if iOS supported Android-style progressive web applications...
53
hacktually 5 days ago 1 reply      
We don't need more new standards.

We need concise, clear and coherent code.

54
varenc 5 days ago 0 replies      
I'd rather Apple implement getUserMedia in Safari first.

Currently, it's not possible to access a user's microphone or webcam in Safari (desktop and iOS). They're the major outlier when compared to other browser vendors: http://caniuse.com/#search=getusermedia

55
phkahler 5 days ago 1 reply      
I don't agree with this. The web does not need a 3D rendering API. If you think "web apps" are an appropriate thing, then perhaps it makes sense. Ignoring that, the web to me is still a way to view content and as such I can see a use for having navigable 3D scenes, environments, or content, but the rendering would be left to the browser much the way rendering HTML is left to the browser. I really don't like the amount of code running in my browser and see no need to add more.
56
ClassyJacket 5 days ago 0 replies      
"The major platform technologies in this space are Direct3D 12 from Microsoft, Metal from Apple, and Vulkan from the Khronos Group. While these technologies have similar design concepts, unfortunately none are available across all platforms."

Vulkan isn't available on the Mac huh? Yeah Apple, and just whose fault is that??

57
jgord 5 days ago 1 reply      
Now that Apple has all the power and wealth, they arrogantly push an API nobody wants, resulting in man-centuries of wasted effort and frustration .. instead of listening to their users [in this case, developers].

We have seen this before - fortunately, no matter how much money they put behind this bad idea, the internet is larger than Apple, and open standard APIs and choice will eventually win.

I don't want an Apple API, I don't want yet-another-kool-language-fad they will drop in the next marketing cycle; I want old-style JavaScript to run fast and be standards compliant on mobile, including on iOS devices.

15
Thousands of deadly U.S. military airstrikes have gone unreported militarytimes.com
565 points by 3131s  5 days ago   302 comments top 26
1
educar 5 days ago 13 replies      
If you still think of the world as divided into good and evil, then I have to break it to you that you have been played. Brainwashed by years and years of media programming.

One man's terrorist is another man's hero. The U.S. routinely bombs other sovereign countries and this is showcased as a "hero" activity. If the bombed people retaliate, they are terrorists. The truth is not black/white. In the same way, U.S. data surveillance is OK but the same thing done by China is seen as the work of a backward regime.

2
dcposch 5 days ago 6 replies      
i'm disappointed that hardly any members of the media or Democratic party establishment talk about Obama's expansion of the drone program.

we have a long list of names. the executive branch of our government curates the list unilaterally and in secret. then, we cross names off the list by killing them. sometimes, they're American citizens. often, they're in countries we're at peace or even allied with.

there are children in Yemen right now who are afraid of blue skies. drones fly high enough to be invisible to the naked eye; most strikes are carried out in clear weather.

i'm near certain that history will judge us harshly.

in any case, it's Trump's drone program now. i hope the technologists who created this infrastructure think about that when they try to sleep.

3
megous 5 days ago 4 replies      
Another thing that is massively underreported, on a much bigger scale than the actual airstrikes, is civilian casualties. Presumably so that Americans, or Russians, or whichever of the other N countries that bomb people regularly, can justify killing thousands of civilians in the Middle East each year by mistake. It is not a mistake though; it's calculated, acceptable collateral murder, where the government determines some acceptable ratio of civcas vs. combatant deaths and goes with that. Of course, if civcas are underreported by an order of magnitude, ...

https://airwars.org/

Another thing to ponder is that the US claims it killed around 50,000 terrorists in the last two years. Think about that. Those are supposed to be people who are said to be a threat to US citizens or allies and therefore are OK to be killed by the USG, even at the cost of killing bystanders. Americans should start asking themselves who these people are whose murder they are supporting. Don't stop at the terrorist label.

Humanizing "them" might not be fun though, because you'll inevitably run into your propagandized fellow citizens, and start feeling decidedly not good about their capacity for empathy or nuance in understanding the world.

4
jessriedel 5 days ago 1 reply      
This is a definitional thing, right? Laymen think of "airstrike" in terms of anything coming from the air, but the military use of the term has always reflected the very different nature of long-range bombing missions and (say) helicopter support of infantry units.

> "Apaches for example, conduct close combat attacks as a maneuver element supporting a ground force in contact with the enemy. I would not consider this in the category of 'airstrike.'"

This is reflected in the fact that the helicopters are part of the Army, not the Navy or Air Force.

So you can certainly criticize the use of this term since it may prey on predictable/reasonable civilian misunderstanding of military language, but it's hard to agree with the author that

> The media and others have depended on these figures for years with the understanding they are a comprehensive rollup of all American and coalition activity...no one from the military ever has come forward to clarify that it is wholly incomplete.

unless the Army was also failing to include it in something like "groundstrikes".

My grandmother might think that the terms "wireless carrier" and "wireless data" should clearly cover WiFi, based on the plain meaning of the words, but it just turns out that this isn't included in the technical definitions. Yes, if someone technical was profiting off this predictable confusion it would be bad and possibly malicious, but it's not accurate to describe someone using these technical names as "inaccurate" or "quietly excluded".

5
jackvalentine 5 days ago 1 reply      
President Trump wasn't wrong when he said terror attacks are under reported but sadly the underreported ones are committed by the United States and her allies.

What exactly is the "big picture" I'm missing here that is achieved by levelling Yemen one mud brick building at a time?

6
sytelus 5 days ago 4 replies      
Wow... 1,700 air strikes just last year alone. That's more than 1 air strike every 6 hours for the entire year! I'm wondering how much damage is being inflicted on civilians here. Not just people getting killed, but people becoming homeless or their livelihoods getting destroyed. If these are not ultra-precise air strikes, then we would be giving birth to a future heavily disgruntled Middle East population.
7
rconti 5 days ago 0 replies      
> "I can tell you, unequivocally, we are not trying to hide the number of strikes," the official said. "That is just the way it has been tracked in the past. That's what it's always been."

That may be true, but I do not have a high level of confidence that there's not someone further up the chain saying "hey, let's use the Army for this one, eh?" with a wink.

8
fsloth 5 days ago 0 replies      
I feel the best popular representation of US military policy is the comic "Addicted to War" by Joel Andreas: http://www.addictedtowar.com/book.html

Everyone should be offered that as a reference before Chomsky, and then prodded onwards to Chomsky if they want to read more on this topic.

9
cyberferret 5 days ago 1 reply      
I understand that for operational secrecy reasons not all actions would be publicized, but I would have thought there would at least be post-mission briefings and outcomes discussed by news sources and political leaders after the fact?

For instance, here in Australia I know there has been a squadron of F/A-18s conducting bombing and strike campaigns in Afghanistan, but we don't hear anything at all about the sorts of missions they have been running, nor their effectiveness. Indeed, many Australians I have spoken to actually have no idea that we had a sizeable amount of our Air Force over there conducting operations.

10
montyboy_us 5 days ago 0 replies      
Diplomacy alone doesn't afford us the lifestyle we so enjoy. I believe we are lucky to even have the option to ignore the where and how our privileged position in global society is sustained. No one is really hiding this from us, it is that we choose not to look.
11
theincredulousk 4 days ago 2 replies      
Is there any realistic expectation that there be a news story for every airstrike?

I'm not saying we're always the "good guys", but every airstrike looks bad without any context. We never get all of the intelligence that goes into ordering an airstrike, so how are we supposed to make the same judgements about collateral damage, effect, etc.? How many other airstrikes happen flawlessly?

Aside from hindsight being 20/20, it is naive to assume that the public is capable of reviewing complex, difficult decisions about military action based on anecdotal information in a 1-page news article.

12
kchoudhu 5 days ago 0 replies      
So the military faces the same regulatory reporting challenges banks do: ambiguous orders on what to report come down from regulatory committees with no context attached, which you are in turn forced to interpret. Your interpretation is then judged (usually unfairly) to be incorrect by some smartass in the press whose understanding of the ground realities is incomplete at best, resulting in a dustup which usually ends with you being hauled in front of a regulatory committee to grovel, admit fault and promise not to repeat the mistake.

Bitter? Why would you think I'm bitter?

13
arca_vorago 4 days ago 0 replies      
'When war is declared, truth is the first casualty' - Arthur Ponsonby

Our issue should focus on the war itself, and less on the many untruths that have propagated because of it. For me, a primary question is one of the constitutional power of the president to wage war, and whether legislative actions such as the Authorization for Use of Military Force are sufficient to meet those constitutional requirements for checks and balances. I've listened to as many debates and talks on the subject as possible, and I think there is a lot more nuance than the more radical right and left acknowledge. In the end though, my primary issue (as a combat vet) is the complete misdirection of the American people about the true reasons for the war(s).

If truth is the first casualty, that may be to be expected, but at least tell us the truth about the reasons for the war(s) in the first place, otherwise even the founders recognized the dangers of the power of the executive to unilaterally wage war and declare emergencies and in doing so violate individual liberties.

I will say this though: I'm tired of hearing Kissinger-style realpolitik subscribers abusing Hanlon's razor to dismiss the idea of malice in such actions, for at some point not only is incompetence indistinguishable from malice, but incompetence is ripe for manipulation and abuse by malice.

The question is then: who is the malicious group, and what are their intentions?

14
ryanmarsh 5 days ago 1 reply      
There's only one way to find out the truth: FOIA the mission logs
15
facepalm 5 days ago 0 replies      
This is part of how I explain to myself the outrage over the travel ban: people have been in denial about being at war with certain countries. The travel ban makes it public - people get angry because their self-perception is challenged. (And of course there are the inconveniences of not being allowed to travel - but imo it is not completely nonsensical to limit travel from countries one is at war with.)
16
adjwilli 5 days ago 0 replies      
How difficult would it be to create a map of worldwide military strikes, not just by the US, but by all countries? Where is data like that available? What other countries are as transparent as the US in publishing that data? Do Russia and China? It could be an interesting project that serves the public interest globally. Sort of like crime maps, but at world scale.
17
capex 5 days ago 0 replies      
There's a documentary on Netflix[0] that might explain why all the 'extra' sorties are needed. Provocation? Seeding? Who knows, but it's an interesting connection.

[0] http://dirtywars.org/shop/product/details/538/watch-on-netfl...

18
habosa 4 days ago 0 replies      
What exactly do we (America) gain from doing a drone strike against your average 'terrorist' (I'll be generous and use their terminology)? At this point the idea that we're there to control the oil is past its time. Many of these countries have pretty much nothing we can't get elsewhere. Is it just about projecting power? Paying military contractors? Some sick game?

I truly don't believe that our government is stocked with wannabe murderers but clearly we commit murder at a distance with this drone program. I just can't figure out why we do it.

19
arprocter 5 days ago 0 replies      
I was reading this[0] yesterday, which fleetingly mentioned "RAND's data is incomplete, as it doesn't include statistics on drone strikes and from Army attack helicopters"

[0]https://warisboring.com/the-explosive-rise-in-a-10-warthog-s...

20
helthanatos 5 days ago 0 replies      
This is why the US should get out of foreign affairs and tell hostile nations they are not allowed to communicate with the US unless they want an actual war.
21
desireco42 5 days ago 0 replies      
I like the methodology they used to estimate the number of airstrikes.
22
rascul 5 days ago 1 reply      
Probably just nobody thought to tell the Army to send these numbers in, or possibly even to track them.
23
lloydatkinson 3 days ago 0 replies      
Deadly airstrikes? no shit
24
hkjgkjy 5 days ago 1 reply      
War! What is it good for?
25
elastic_church 5 days ago 1 reply      
26
tadufre 5 days ago 1 reply      
This is not hacker news. If I wanted this drivel, I would have stayed on Slashdot.
16
YC Research: Universal Healthcare ycombinator.com
573 points by craigcannon  5 days ago   344 comments top 47
1
TuringNYC 5 days ago 14 replies      
(Full-time co-founder of a healthcare startup here.) W/r/t the US specifically: it seems there is no shortage of inefficiencies in the US healthcare system, nor of obvious solutions to them. To me, the real problem seems to be a system that has almost diabolically evolved to create competing interests that deadlock all sides into a sub-optimal solution. Specifically -- patients, payers, physicians, pharma, facilities and insurers almost all have indirect but competing interests, much like the Dining Philosophers problem we're familiar with in Computer Science.

I'm not sure what the solution is short of a total swamp draining, but our startup went overseas to develop/trial our product in a country with a single payer system. Not perfect, but much more amenable to finding efficiencies.

2
tyre 5 days ago 1 reply      
We sell to governments, which is similar to healthcare.

I cannot stress this enough: technology is not the hard part.

Do they have outdated software? Yes.

Can you build better software? Yes.

None of that matters if you can't get it into their hands. Procurement is the hard part. Can you empathize with the needs, fears, desires, quirks, and crazy of ten different stakeholders? Pry proprietary API specs from the cold, dead hands of one-off contractors? Educate users whose technological proficiency peaked at SMS to manage a full-featured SaaS product in 2017?

Don't focus on the software. That isn't the hard part. People are the hard part. People are always the hard part.

3
yummyfajitas 5 days ago 4 replies      
Interestingly, we already discovered a mechanism for drastically reducing the cost of healthcare back in 1986. It's a way of crowdsourcing the problem called high copays. Basically, you have to pay out of pocket for 90% of your health care up to a (high) cap.

It turns out that patients are very good at figuring out which health care will improve health and which won't - the high copay group had no statistically significant difference in health from the low copay group, and spent about 30% less money. What a crazy magic bullet, huh?

http://www.rand.org/health/projects/hie.html

We ran a directionally similar experiment in 2008, and got much the same result: low copayment causes people to consume a lot more medicine, but with no objectively measurable improvement in health. (Subjectively, people with insurance feel healthier even if they never go to the doctor.)

https://www.nber.org/oregon/

In both cases we ignored the result because we don't like it.

4
Eliezer 5 days ago 2 replies      
It boots nothing to subsidize that which is in restricted supply. So long as there are only 350 orthodontists allowed to graduate per year, there's a corresponding limit on how many patients are allowed to have straight teeth, regardless of who pays for what or what software is used. Improve the software, and the price of orthodontia must still equalize demand to the limited supply.

Offer free dollar bills, and a line will form until the cost of staying in line burns more than $1. Medicine isn't costly because it's inefficient, rather it can end up inefficient because the limited supply means it must somehow end up costly.

It is not possible to solve the healthcare crisis without somewhat deregulating the supply of healthcare and allowing it to increase. Until then, every subsidy just raises the price, and every efficiency improvement just creates room for more inefficiency elsewhere.

You can't solve the housing problem in San Francisco by building more efficient software for selling houses. Only interventions that somehow increase the total supply of living space can cause more total people to be able to live there.

5
lumberjack 5 days ago 1 reply      
Universal healthcare is already much more cost-effective than fully or semi-privatised health care.

Sorry, I forgot to pretend that all the other developed countries haven't figured out healthcare already.

Geez.

----------------

Oh and btw, when you have a universal healthcare system paid for by taxes (none of that bullshit insurance crap that only ends up being costly regulation/financial bloat) you can have fully private hospitals and health clinics where you can get service for cash, and surprise, surprise, it's ridiculously cheap because it has to compete with the effectively free public healthcare system.

6
toomuchtodo 5 days ago 0 replies      
This is fantastic news. Congrats Watsi!

It has always seemed like this was the end goal; to build a proof of concept healthcare delivery platform for the third world. Very exciting!

EDIT: Sidenote: Thanks YC for funding Watsi as your first non-profit and attempting to tackle a hard social problem.

7
alexmingoia 5 days ago 4 replies      
What does this have to do with universal healthcare?

We know how to make healthcare more efficient. We know how to remove the administrative overhead. Other countries already have these systems in place. Look at Taiwan for one example. They have digital medical records and an extremely low administrative overhead because of universal care.

Healthcare will continue to be broken no matter how many YC research programs there are - because the US population lacks the desire and political will for universal healthcare.

8
temp-dude-87844 5 days ago 1 reply      
I applaud this initiative of collecting more data on this, by starting a small trial in an area with fewer confounding factors, and later applying those lessons learned in places with more interconnected systems in place.

One unfortunate fact is that a small proportion of people 'consume' most of the medical care. Operational inefficiencies, the concept of health insurance, a byzantine cost structure, and in the US, after-the-fact billing conceal -- or at least spread out over time -- some of the financial pain of care. This is a sort of societal compromise to avoid confronting the problem: a society either shoulders (i.e. subsidizes) the cost of care for its most unhealthy, or lets them perish outright.

Today, most civilized societies tiptoe around this subject by subsidizing medical care for the elderly for political expediency, where the marginal benefits (even for the particular individual) of life extension until funds finally run out quickly diminish, while leaving folks of prime working age to bear a large portion of their own costs in case of misfortune, to say nothing of underserved minorities and the economically poor.

Perhaps the best value of conducting this trial in a developing country isn't solely to get away from the political machinery of a mature healthcare system, but to escape the political baggage of a post-industrial society and see if technological solutions can work if morals and politics aren't in the way.

9
esfandia 5 days ago 3 replies      
Healthcare definitely seems like the land of process inefficiency, even in developed countries like here in Canada, so there's plenty of opportunity for major improvement. There's still plenty of paperwork done on... paper, information that constantly has to be repeated when you go from one provider to another, and plenty of mistakes made.

Some time ago Ontario spent a massive amount of money on computerizing healthcare and it yielded nothing. I figure all the regulations, privacy issues, and overall complexity of the system makes it a tough Goliath to handle. And whatever happened to Google Health?

I feel that the solution has to come from the grassroots: get a bunch of health care providers to sync up for certain simple services, and go from there. Keep adding features little by little, keep expanding the number of participants. Do it using published and open source APIs and software. Don't try to be everything to everyone. Break a few rules, ignore some complicated standards if it can help get you there quicker. Hmmm, maybe for the latter to be possible it makes sense to start in less sue-happy countries.

10
EGreg 5 days ago 0 replies      
I have argued in favor of Single Payer systems on the basis of https://en.wikipedia.org/wiki/Monopsony . When buyers don't have to compete with each other on price, the price goes down. This is also known as "collective bargaining power".

You can see this borne out in the fact that every developed country with a universal healthcare plan gets cheaper prices, often for the same or better outcomes than the USA - including the number of doctors per capita, which disproves the "shortages" myth. Domestically in the USA, Medicare squeezes doctors far more than other insurance companies do. A "Medicare for all" would do even better.

After the libertarians and anarcho-capitalists try to claim superior economic knowledge eventually they must admit simple supply and demand drives prices down in a single payer system.

But then I get the following objection: what about all the R&D that we do? Perhaps all that expensive health care in the USA results in better procedures and medical equipment, better trained doctors etc. ?

To this I say ... OPEN SOURCE DRUGS! http://magarshak.com/blog/?p=93

If you can introduce a patentleft movement in drugs the same as you have done in software, then innovations can come from anywhere.

And failing that, we can always do this compensation model: https://qbix.com/blog/index.php/2016/11/properly-valuing-con...

11
koolba 5 days ago 0 replies      
> For the initial project, Watsi will fund primary healthcare for a community in the developing world and build a platform to run the system transparently.

Have they decided what country (or countries) in which this will take place?

While I'm sure there are many worthy candidates worldwide, applying the same type of program to underserved communities within the USA would be great as well.

12
judah 5 days ago 0 replies      
Love the ambition. Bring some transparency, reduce fraud, use technology to reduce cost where possible. Great idea, hope it works.

I'm skeptical it could reduce healthcare costs significantly simply because of the massive effort required to change the healthcare behemoth in even small ways. However, given the exorbitant costs of healthcare (currently paying $1800/month for a family of 4), it's worth certainly trying.

Is there a time frame on this experiment?

13
rsync 5 days ago 2 replies      
If you went back in time - say, 20 or 25 years ago - and you picked up a progressive, left-leaning magazine - say, Adbusters or Mother Jones - you would very regularly read warnings about the manufactured needs of medicine and healthcare and pharmaceuticals.

Barely an issue of such a periodical could pass without dire warnings of a future in which big pharma and insurance interests would convince us, through advertising, that we were foremost consumers of "healthcare".

What happened ?

The progressive left is now fully, fervently convinced that "healthcare" is a basic priority of human life. It is a rampant consumerism that reaches far beyond - and cuts profoundly deeper than - the fears that good people have always had.

It didn't have to be this way.

14
abalone 5 days ago 0 replies      
> Watsis goal is to improve the efficiency of funding, making universal healthcare possible.

Universal healthcare is already possible.[1] Reducing waste is a noble goal but this is a startling sentence from a health tech startup team. It implies that the primary obstacle to universal care is cost, not political will, which fails to comprehend how universal care was achieved in most of the industrialized world.

[1] https://en.m.wikipedia.org/wiki/List_of_countries_with_unive...

15
fuzzfactor 1 day ago 0 replies      
Universal Healthcare is when a society is actually "rich" or "wealthy" in terms of truly having more than enough resources to perform essential care wholesale at no cost to patients,

and after that, when society chooses to prioritize the health care of all its citizens high enough to give equal care to all.

This doesn't usually happen, even in societies where the consistent waste of resources exceeds the total cost of universal healthcare.

Considering the resource shortage or surplus, when healing treatments are not denied to any needy members of society, that could be a fundamental marker of civilization, and an obvious measure of which societies are more advanced and which are more retarded.

16
intrasight 5 days ago 5 replies      
>Currently, up to 40% of all healthcare funding is wasted on operational inefficiencies

Your inefficiencies are someone else's revenue.

Or to say another way:

Healthcare is ~20% of US GDP

Reducing spending by 40% would reduce US GDP by almost 10% (40% of 20% is 8%). That's a tough sell politically, you have to admit.

17
dkonofalski 5 days ago 1 reply      
I wonder what the long-term on this is going to look like. It would seem to me like an amazing irony if the receiving nation ended up with better and cheaper healthcare than the US considering that YC and Watsi call the US home.
18
mikekij 5 days ago 4 replies      
Founder of healthcare startup here too:

This sounds like a great project. I love the idea of building technology for healthcare in a small, controlled, active care environment, and then scaling those tools to a larger audience.

The bigger issue in healthcare IMHO is that the American healthcare model, while hugely inefficient, seems to be the system that best incentivizes innovation. We pay 10x what Sweden pays for medical devices, but the US market is the only reason those device companies can be profitable. If we move to a single-payer system in the US, the economic incentives for innovation go way down.

If someone can figure out how to lower costs, while still providing a profitable market in which drug and device companies can innovate, we'll all benefit.

19
buyx 5 days ago 0 replies      
The article doesn't mention which developing country the trial will be in, but South Africa would make an interesting candidate. It has a public healthcare system that's in shocking condition, and a world-class private healthcare system, funded by health insurance, that's becoming more unaffordable (despite being funded and mandated by employers) each year because of high medical inflation. There are clear parallels to the US healthcare system, and the commodities downturn has stymied the government efforts to introduce universal healthcare, so there would be an ideological willingness to experiment.
20
Animats 5 days ago 0 replies      
"For the initial project, Watsi will fund primary healthcare for a community in the developing world and build a platform to run the system transparently."

Start with Tuskegee, Alabama, poorest town in the United States.

21
benologist 5 days ago 0 replies      
I read the other day that here in Costa Rica the health care 'caja' has 1 employee per 85 people, it's more like working there is the plan. I can't wait to see what Watsi does next.
22
KeepTalking 5 days ago 1 reply      
How much of the problem is actually the way (big) pharma conducts research? (I know that I am oversimplifying and dozens of startups are focused on improving the way research is done.)

From a manufacturing process standpoint, there are cheaper ways to create these compounds. Generic drug manufacturers have proved that, ignoring the cost of research, the drug itself costs next to nothing to make, market and sell.

From an economics standpoint, healthcare costs are a significant part of GDP. In an ideal model, if all research were funded directly via government grants and the key research were licensed freely, it should create a very competitive drug cost model. From a healthcare practice standpoint, legislation can really help. Stripping down some of the malpractice laws is a good starting point.

Additionally, the monopoly on medical education should be broken - making medical education a national priority is a key step. We also need to make sure that doctors are not the only healthcare providers. Enabling entrepreneurship among non-doctor medical practitioners (nurses, midwives, etc.) can increase the market supply.

These 2 actions in theory should create more doctors and reduce the cost of practicing medicine.

23
kriro 5 days ago 0 replies      
Good choice for YC research investment.

I hope they don't try to reinvent the wheel in some areas (sounds like it from the post). It would probably be a good idea to benchmark how hard it is to set up a functioning and operational GNU Health system in community X for example.

There's a lot of potential for replacing nothing/no doctors with machine learning, especially in developing countries. Especially in areas where mobile phones are spread I can think of a couple of use cases. Take a picture of your swelling/strange looking skin/whatever and have a classifier tell you what it could be. Last time I checked the algorithms actually beat expert panels (for skin cancer). Could probably be coupled with a "doctor as a service" system that optimizes routes based on this sort of data.

The more I think about it the more I should catapult working in this area up my job application list :)

24
Kluny 5 days ago 2 replies      
I was thinking about this lately. Can universal health care be solved by the free market, if the free market decides to enforce checks and balances on itself?

That is to say, could someone start a not-for-profit health insurance company that offers excellent coverage for affordable rates, and build it from the ground up with a culture of clarity and transparency? At a bare minimum they should have a searchable database where you can type in "broken arm" and find out what price this company has negotiated for casts, x-rays, and doctor time, and what it will cost you in co-pay.

It seems like insurance companies are so universally bad and corrupt that there would be no trouble signing up a critical mass of users by simply being a little better than the norm, and once it's the biggest insurance provider in the US, start applying muscle to hospital administration.

Yes, I know I'm oversimplifying it. Can anyone think of a way that it might be possible, though?
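
For the lookup piece at least, very little technology is required - a toy sketch (made-up prices, names, and structure):

  // Toy negotiated-price lookup: procedure -> line items with negotiated
  // price and member co-pay. The real data and plumbing are the hard part.
  const negotiatedPrices = {
    "broken arm": [
      { item: "x-ray (two views)", negotiated: 120, copay: 25 },
      { item: "cast application",  negotiated: 240, copay: 40 },
      { item: "physician time",    negotiated: 180, copay: 30 },
    ],
  };

  function quote(condition) {
    const items = negotiatedPrices[condition.toLowerCase()] || [];
    const totalCopay = items.reduce((sum, i) => sum + i.copay, 0);
    return { items, totalCopay };
  }

  console.log(quote("Broken arm").totalCopay); // 95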

25
dominotw 5 days ago 1 reply      
Why can't we import more doctors, like the UK and other countries in the EU do? Isn't that low-hanging fruit?
26
bawana 5 days ago 0 replies      
Hospitals are BIG business. They will never let their inefficiencies be addressed by an external force. They do not even share their price lists. Can you imagine going into Best Buy and not knowing what anything costs? And having to get the price by researching it on the net?
27
X86BSD 5 days ago 0 replies      
I love this, it's like Kiva but for third-world healthcare procedures. Just fantastic.

They need to make browsing for potential patients easier. After 22 pages of "View more patients" my browser starts to bog down.

A search would be good. As well as a map to select a country to view those in need.

IMO.

But really great startup!

28
WalterBright 5 days ago 0 replies      
> up to 40% of all healthcare funding is wasted on operational inefficiencies, fraud, and ineffective care.

Any system where the consumers, the providers, and the payers are not accountable to each other is never going to operate efficiently.

29
egonschiele 5 days ago 0 replies      
I love this idea, and this seems like the right way to do it. Operational inefficiencies are a huge burden and it would be great to find a solution for it. I really like the idea of starting this in a small community and scaling up.
30
maceo 5 days ago 0 replies      
The US spends over $8,000 per capita on healthcare, compared to about $4,000 in the UK and Japan, both of which have universal health care.

This isn't a problem tech can solve. It's a problem only politics can solve.

31
mtrn 5 days ago 0 replies      
Glad to live in a country that has something close to universal health care. Everybody needs to contribute a monthly share (independent of their condition) which amounts to over 200B per year in total. This seems enough for modern infrastructure, equipment, prophylaxis, medication.

That said, it's not super efficient and the incomes vary greatly between employees with strong lobby groups and laborers covered by legislation only.

32
20years 5 days ago 0 replies      
I would love to see more transparency in where the costs for doctor visits are going. A recent half-hour visit to my daughter's doctor for a basic checkup and a couple of shots resulted in a $1,500 bill to the insurance company. We paid a fraction of that, but it still blows my mind that the bill was so high. I am assuming most of that was for the shots. If Watsi can develop software that makes these costs more transparent, maybe then we can address ways to lower them.
33
narrator 5 days ago 1 reply      
The prices in the U.S. system are wildly divergent from the rest of the world, and enhancing international competition is a good way to remedy this. Thus, one way to implement universal health care is to allow the import of any prescribed drug, and for the government to pay for any medical procedure plus plane ticket if that cost is less than it is in the U.S. Perhaps a doctor visa would also help with costs.
34
a3n 5 days ago 0 replies      
> Once the platform is in place, Watsi will start to experiment with improving the quality of care and reducing the cost e.g., by streamlining operations, minimizing waste and fraud, and identifying medical errors in real-time.

That sounds like any politician ever, campaigning for office by promising to do the above, for government in general, the Defense or Energy or Education department, etc.

Good luck, and I sincerely hope it works. This time.

35
joshuaheard 5 days ago 0 replies      
I think the problem with our health care system is economic and political, not technological; unless you are talking about some new revolutionary technology like this: https://www.sciencedaily.com/releases/2017/02/170207092724.h...
36
jclos 5 days ago 1 reply      
Pardon my cynicism, but I don't like the idea of choosing a patient you're going to "spend your money on" as a replacement for basic universal healthcare. Healthcare shouldn't be a popularity contest. As an addition to a normal "basic" healthcare system it's fine, but please don't replace existing systems with this stuff.
37
kumarski 5 days ago 1 reply      
I'm a patient with an auto-immune disorder. I'm going to share some of my lessons/surprising things I learned in healthcare/drug discovery.

I did the YC fellowship with a healthcare startup in the clinical trials space. I am one of Watsi's biggest fans (zero hedge) and excited to see them go after this.

Here are some hard things I learned over 8 months entrenched in the industry, meeting everyone from hospital execs to drug development experts.

* The top of the funnel is screwed by food environments in the USA. Completely preventable metabolic syndrome accounts for a large percentage of clinical trials research.

* One of the unfortunate realities in the USA is that a lot of our advanced drug research is financed by metabolic syndrome related drugs. There are 8K clinical trials a year, and a non-trivial percentage are for metabolic syndrome related problems.

* We have a patent system that encourages developing drugs that interact with a small number of enzymes and molecules whose operation we already know and understand. Low, if not zero, risk.

* The rules around patenting pathways, treatment methodologies, research tools, and assays are flawed/seem poorly designed. As an outsider looking in, these things seem like a paralyzing bottleneck for the industry. These need to be looked at much closer.

* GPO Squeezing. The manner in which GPOs squeeze medical device companies to create an artificial monopoly and drive prices up has to be examined in a much closer way.

* Ground game & synthetic chemistry - The reason startups in the pharma space get acquired, based on my discussions with R&D folks at multiple Fortune 500 pharma companies, is twofold. 1/ The drug companies have enough sales reps to push product fast. There's massive room for some sort of disruption here to allow small-scale medical device and pharma startups to push product. 2/ This one's tough, but the large pharma companies have enough money to do all the synthetic chemistry to go from lab to scale. That's changing though. What used to be a $400M requirement has shifted to a $100M requirement, but we'll see how this evolves. It's a lot different from software. The know-how is extremely well hidden behind private walls.

* Aggregated healthcare and genomic data has little value. There's 68,000 genetic marker tests on the market and 8-10 new ones come out each day. Knowing what they do and/or how they create proteins that block/assist efforts is a monstrously tough problem that isn't waiting for computation, but is waiting for actual experiments on humans.

* The mathematical complexity of drug discovery is hard. Even if the data is maximized, the throughput of discovery is low. We have 7Bn people, 15K diseases, and 3Bn genetic base pairs. Bonferroni corrections and family-wise error rates abound. We're not waiting for supercomputers or for an ease of aggregating data.

* The tricky part of selling to hospitals is that you have to create ROI within 6 months.

If anyone here is building a healthcare venture or drug discovery venture and believes I can help, don't hesitate to reach out.

Godspeed.

38
alkonaut 5 days ago 0 replies      
Would there be legal issues (apart from political will) with forming a publicly owned health insurance company for the general public in the US, rather than a subset like Medicare etc?

It seems that if you just form a large enough public insurer it could soon start undercutting the prices of the private insurers.

39
jankotek 5 days ago 0 replies      
In Central Europe a triple bypass heart surgery costs ~$6000. Until you fix the cost, there is no help.
40
tabeth 5 days ago 0 replies      
Great news. Once it gets going I'd be interested in seeing the strategy to make it sustainable. I believe donation models are inherently unsustainable so it'll be a challenge.
41
dpflan 5 days ago 1 reply      
Cool and interesting. I'm not very familiar with Watsi, but is its innovation mainly in business processes for healthcare non-profits - mainly improving information and resource flows?
42
mkaziz 5 days ago 1 reply      
I really wish Congressmen didn't have federal healthcare, and they had to use the same insurance us plebeians use. That would help them fix up the system real fast.
43
dandare 5 days ago 0 replies      
TIL: US healthcare is completely nuts
44
Jyefet 5 days ago 0 replies      
Invest in value-based healthcare - it's the future (in like 15-20 years, that is)
45
pebblexe 5 days ago 0 replies      
GNU Health is a good starting place
46
xyzzy4 5 days ago 1 reply      
Healthcare isn't truly 'universal' until it is also applied to non-humans.
47
whb07 5 days ago 14 replies      
Unless you remove people from the equation, universal healthcare will never work. There's no incentive for anyone to be efficient, more frugal, work harder, or provide a better service in universal healthcare. Humans aren't wired for this.
17
Fluid Paint Simulation david.li
600 points by anonfunction  4 days ago   87 comments top 52
1
zokier 4 days ago 2 replies      
You can compare this to what I believe is the state of the art system, "Wetbrush":

https://www.youtube.com/watch?v=gwyqh4d-WU8 (SIGGRAPH Asia 2015)

https://www.youtube.com/watch?v=k_ndr3qDXKo (Adobe demo)

2
mholt 4 days ago 2 replies      
His other work is cool too: http://david.li/
3
kepano 4 days ago 1 reply      
This seems obvious but somehow I never considered that you need three dimensions to replicate the feel of most paints.
4
chris_st 4 days ago 0 replies      
Another "natural paint" painting program is ArtRage [1]. They have, bar none, the friendliest, most supportive discussion forums I've ever seen on the internet.

Oh yeah, the painting program's pretty good too :-)

[1] https://www.artrage.com

5
mistercow 4 days ago 1 reply      
This is very cool. One thing that I think could be improved: It looks like the model is treating the white background as if it is also paint. This leads to some weird results when painting over existing strokes.

Really awesome work though.

6
dorianm 4 days ago 0 replies      
I loved it. I feel like it could make some nice backgrounds and decorative paintings with a few more tries: http://imgur.com/a/XBUgn

And it's open source: https://github.com/dli/paint (I posted my issues there)

7
Justsignedup 4 days ago 1 reply      
Fantastic. One criticism: on click, please make the paint "dip" and paint, and the paint on the brush should wear out. Then it'll really feel like painting. Clicking again would re-dip in the color of your choosing.

This would even allow mixing colors like on a painter's palette.

:) Very nice tool.

8
haxiomic 4 days ago 0 replies      
Great stuff! Reminds me of Verve Painter [1], which is a fleshed-out fluid-simulation-based oil painting app

[1] https://www.youtube.com/watch?v=PBO2hNv_tTE

9
Ono-Sendai 4 days ago 0 replies      
Cool. Something like this could combine nicely with my automatic painting algorithm:
http://www.forwardscattering.org/post/42
http://www.forwardscattering.org/post/44
10
roesel 4 days ago 0 replies      
20 seconds of drawing with default parameters and my laptop temperature went from 40 to 70 °C. Not bad :D.

Nice work though, very beautiful and realistic.

11
ww520 4 days ago 1 reply      
This is awesome. 3D work really has the wow factor. This is definitely at the WOW level.

Edit: I would say couple this with VR and it would be a truly awesome experience.

12
overcast 4 days ago 0 replies      
Looks sweet, but the paint definitely continues sliding in the direction of the stroke for too long. Real paint basically just sticks immediately where you put it; it doesn't have that type of momentum. Even at the lowest fluidity level.
13
fonosip 4 days ago 1 reply      
A similar app without OpenGL (for iPads and such): http://ba.net/util/finger-oil-painting/
14
tantalor 4 days ago 0 replies      
Touch event support please!
15
craigleehi 4 days ago 0 replies      
Here is another Eastern watercolor painting tool, "Expresii" [1].

[1] http://www.expresii.com

16
anonfunction 4 days ago 0 replies      
17
bsenftner 4 days ago 3 replies      
So bizarre: I saw this post last Saturday, loved the site and david.li's other pages, showed some friends... Then I could not find the HackerNews post again. Here it is now, yet it says it was posted within a day, with no history of being posted before. Odd...
18
jonr8 4 days ago 0 replies      
See a history of fluid sim applied to watercolor-like painting: http://www.expresii.com/blog/innovations-in-digital-painting...
A quick look at David Li's source code seems to suggest Li's work is based on Mark Harris' GPU implementation (with Jacobi iteration) of Jos Stam's method.
19
jefe_ 4 days ago 0 replies      
Amazing how well it handles painting on top of previous strokes.
20
robodale 4 days ago 0 replies      
My MacBook Pro sounds like it's taking off a runway :P
21
olegkikin 4 days ago 1 reply      
Doesn't have dark colors. The darkest paint is brown.
22
mfisher87 4 days ago 1 reply      
Reminds me of Verve. I watched the author's channel on YouTube religiously, hugely enjoying his video demos with each new release. Sad to say there hasn't been one in like 2 years.

https://www.youtube.com/user/333taron/videos

23
ygra 4 days ago 0 replies      
Reminds me of https://www.microsoft.com/en-us/research/project/project-gus... which later became Fresh Paint.
24
devwastaken 4 days ago 0 replies      
Quite nice, though the paint colors don't combine like they do in real oil painting. Can't make Bob Ross art without that. Also, when you try to paint on the edges, the brush goes away, so it's difficult to make a full picture without making it larger and then cropping down.
25
BuffaloBagel 4 days ago 0 replies      
Did you mean to use the term fluidity rather than the term viscosity? I believe they are mathematically inverse.
26
koliber 4 days ago 0 replies      
I can't seem to create anything beautiful worth sharing with this. But damn, it is soothing and relaxing to play with. It feels almost therapeutic! Thank you!
27
aperrien 4 days ago 0 replies      
This reminds me of Bob Ross and his "Happy Little Trees". I think I could paint some on here if the canvas were bigger, and there was an area off to the side to mix the paint.
28
joakleaf 4 days ago 0 replies      
Is this using Navier-Stokes based fluid simulation?
29
cr0sh 4 days ago 0 replies      
I can't paint or draw worth beans - but I like it!
30
pacaro 4 days ago 0 replies      
See also Fresh Paint on any version of Windows > 7
31
noonespecial 4 days ago 0 replies      
Very nice. Lots of fun. The only thing missing is that the paint doesn't mix, i.e. yellow and blue don't turn greenish, etc.
32
kin 4 days ago 0 replies      
At first I was like, "Wow, this is a cool effect, let's try a different col-- HOLY SHIT". Color me impressed!
33
itomato 3 days ago 0 replies      
This is more of a 'mop' than a 'paint brush'.
34
_Codemonkeyism 4 days ago 0 replies      
Like that, though I wish it would work with FF/Win10/XPS13 touch screen.
35
nilved 4 days ago 0 replies      
This is too slow to use on my MacBook... I like the idea though, and it seems really cool.
36
marai2 4 days ago 0 replies      
This is oddly therapeutic and calming. My Macbook CPU was just fine as well. Very cool!
37
superplussed 4 days ago 0 replies      
Amazing. It seems like it'd be really useful to be able to zoom in and out as well.
38
quakeguy 4 days ago 2 replies      
With Opera 43.0.2442.806 (PGO) i cannot do a stroke at all it seems. Just reporting.
39
divbit 4 days ago 0 replies      
This is great for calligraphy with surface pen - can't find the black ink though.
40
heurist 4 days ago 0 replies      
Pretty cool. Next step is mixing colors and different thicknesses like real oil :)
41
andyfleming 4 days ago 1 reply      
This is pretty impressive. It'd be really neat if the paint slowly dried!
42
huangc10 4 days ago 0 replies      
Was playing with your other projects too. Great stuff and super fun. Sharing.
43
kdamken 4 days ago 0 replies      
I feel like you deserve the Nobel Prize for this or something. Mind = blown
44
santaclaus 4 days ago 2 replies      
Cool! Any idea what model or method is used here (or is it heuristic)?
45
tucaz 4 days ago 0 replies      
Holy moly! Awesome work. It looks very realistic. Congrats!
46
throwaway2016a 4 days ago 0 replies      
Very nice work. Reminds me a lot of Corel Painter.
47
Jordrok 4 days ago 1 reply      
Very cool!

Man, does it peg the hell out of the CPU though! :P

48
ffwacom 4 days ago 0 replies      
so cool, consider adding an option to remove paint from the brush so you can scumble the fluid to blend.
49
amelius 4 days ago 0 replies      
What primitives is this using?
50
mshenfield 4 days ago 0 replies      
Color me impressed
51
shankar_mj 4 days ago 0 replies      
Wow!
52
btzll 4 days ago 1 reply      
Very cool, but please change the color selector. It is not intuitive, and it's very hard to pick even the most obvious colors.
18
H-1B visas mainly go to Indian outsourcing firms economist.com
454 points by known  3 days ago   397 comments top 38
1
loph 3 days ago 22 replies      
This one sentence says it all:

"The Economist found that between 2012 and 2015 the three biggest Indian outsourcing firmsTCS, Wipro and Infosyssubmitted over 150,000 visa applications for positions that paid a median salary of $69,500. In contrast, Americas five biggest tech firmsApple, Amazon, Facebook, Google and Microsoftsubmitted just 31,000 applications, and proposed to pay their workers a median salary of $117,000."

None of those salaries listed are competitive with what a non-H1B (read: citizen or permanent resident) would earn. Indeed.com quotes the average software developer salary in Seattle (think Amazon and Microsoft) as $126,000 and San Francisco at $134,000. Companies sponsoring H1B need to be held to the letter of the law -- the salaries must be competitive. The demand for H1B visas would fall if the imported labor was paid fairly.

2
planetjones 3 days ago 4 replies      
Is this disagreeing with Blake Irving? See

https://www.google.co.uk/amp/amp.timeinc.net/fortune/2017/02...

This is the man who refers to H1B visas as genius visas. I have worked with many Indian outsourcing companies, and while talented people do exist, calling their employees geniuses is wholly inaccurate (as would be calling most software devs in the Western world geniuses).

I did laugh when I read Irving's original post on LinkedIn and some former GoDaddy employee expressed just how Mr Irving was using his H1B allocation, i.e. to get the same job done for less money...

3
geebee 2 days ago 6 replies      
The Economist proposes getting rid of the rule that requires H1B workers to remain within the company that sponsors them.

That sounds reasonable, but why then require that a company sponsor an immigrant in the first place? Why not let that immigrant choose where he or she will work?

In fact, why not let the immigrant choose what to study, where to work, where to live, all in response to market signals?

People have posted various lists of the average H1-B salary at what they consider top companies, like Google: $130k. Is that the salary in Mountain View?

You know, I actually think that salary is somewhat low for what a talented and well educated person can earn in the Bay Area. Why force these people to get hired as developers? Why make them study what Google says they should study, take interview exams on second-year data structures and algorithms the way Google says they should? Why on earth should Google get to have this power over anyone?

Let's just have immigration. All immigrants arrive in the US free, free to choose what they will study, where they will work, how they will go about it. They can sell real estate, install drywall, write python code, write novels, paint portraits, or whatever they wish to pursue. Nobody owes them success, but in the US, they should have the freedom to pursue happiness as they define it.

Not as Facebook defines it. If working as a dev in an open office so big it has a horizon line, for a CEO who says things like "young people are just smarter", for $152k a year doesn't sound as appealing as the flexibility and stability of working as a dental hygienist for $110k a year (roughly the median salary in SF), then that's the market's answer.

I still maintain this - any immigrant system that allows corporations to decide who gets to come here is flawed. Allowing immigrants to quit once they're here would be an improvement, but it still allows corporations to decide who does and doesn't get to come here.

4
koolba 3 days ago 6 replies      
What's the argument against using an auction for H1-B visas rather than a lottery? That'd maximize the tax collected from their salaries and ensure the salaries are on par with the going rate for said workers. Arguably it's in the interests of everyone besides companies trying to get cheaper labor via H1-B visas.

The only counterpoint I've ever heard is "It's not fair for company XYZ in low cost of living Podunk, USA because we can't compete at those high salaries against banks / SV / expensive cities". So what? I doubt they can compete against the ability of TCS or Infosys to game the system and get the lion's share of the visas either.

5
throwaway251 3 days ago 2 replies      
Looking at the discussion so far - probably going to get trolled/downvoted but here goes:

From all of the above comments - people are trying to undo only the parts of globalization that they don't like (wage arbitrage is one of them - stop crying!). I truly wonder what would happen if the Indians (and the rest of the world) started treating Americans the same way America treats them and started to roll back the impacts of globalization:

1. Stop American businesses from getting favors under trade deals and especially with sales of military equipment

2. Ask each American to provide all their social media account information when entering the country or throw them out

3. Set quotas for American businesses to sell their products/services.

4. Force Google, etc to locate servers and data-centers in China/India directly and give the keys to local governments (if the US government can get access why should other governments not?)

If a trade war did happen:

Specific to India: Their economy is mostly non-export oriented (except the IT services part) - they will probably take longer to raise the quality of living for their population - but it will probably be a better path to take (a trade war would probably help grow domestic businesses faster)

Specific to China: The USA needs access to the Chinese markets rather than the other way around. Plus they can always dump all those treasury notes

Perhaps a trade war (rather, de-globalization) would be a good idea for the developing world - it would bring better balance to the world and undo globalization as a whole and not parts of it (which is exactly what the USA voted for when they elected Trump).

P.S. Please don't give a self-righteous BS response about USA being the land of the free and so on.. I think it's pretty obvious most immigrants are there for the money and quality of living (the kind of quality that comes with money and not society, safety, etc)

6
throwaway100217 2 days ago 1 reply      
I am on an H1-B work authorization.

The company applied for my position with a salary of ~$100K/yr (on the LCA and the H1 application) but they actually paid me ~$240K/yr. This year and the next, it will be well north of $300-$330K/yr.

Why would they do this? Simple - to be able to pay me a prevailing wage and keep me in status in case the shit hits the fan.

If they applied to the government saying they'd pay me $240K/yr, and for some reason they had to give me a pay cut, I'd be out of status or we would have to make a less-confident amendment to my H1 auth. A pay cut amendment should be viewed with skepticism in my opinion and I would avoid it.

It's like underpromising and overdelivering.

My actual salary will never be reported in an H1 database. But it will be on my tax returns.

This isn't a common case but just something to keep in mind.

7
anjc 3 days ago 5 replies      
>Although it is true that foreign workers at the Indian consultancies receive more visas than higher-skilled workers at better-known firms, a simple solution exists. Congress could raise the number of visas issued. Given that the unemployment rate for college graduates sits at 2.5%, it is fair to say that most native workers displaced by H-1Bs land on their feet.

Absolute scum. Displacing native workers with H1Bs is OK because the fired workers eventually find other work? Vile.

A significant proportion of IT and STEM graduates are unable to get work in their chosen industry, and proceed to waste years of education by going into other areas out of necessity.

I can't believe somebody could shit out the quoted text and have it published.

8
throwo5 2 days ago 0 replies      
I used to work at American Express as a full-time H1B employee, Arizona location, as an Engineer I (10 years exp). They paid me 80K while they paid a 110K starting salary to fresh American graduates from Arizona State for an Engineer III position. (Engineer I > Engineer III.) Also, they were promoted from Engineer III to Engineer I within a year, while I did all the hard work with no promotions or salary raise.

H1B visa abuse at its best by an American company. I am not in the US anymore. Left it for good.

9
almightykrish 2 days ago 0 replies      
Ex-TCS employee here: I saw this every year during my tenure at TCS. Every year in January/February, project teams are requested by HR to send a "list of eligible candidates" to apply for the H1-B visa. The eligibility is weirdly composed: they keep graduates who have a CS background, and senior associates, out of it. This is to ensure the associates stick with the company longer once they move to the US.

Finally the list comprises 1000s of applicants for whom the job position is falsely certified with the Labour Department (LCA). (I don't understand why the Labour Department never carries out an investigation.)

10
234dd57d2c8dba 2 days ago 2 replies      
This just confirms that the H-1B visa is yet another piece of corporate welfare pushed through by lobbyists for the benefit of large corporations at the expense of small businesses and the middle class.

My small software business can't compete with slave labor from India that large corporations with an army of lawyers can acquire.

Just one of the many reasons that self-employment and small businesses are struggling against a tide of complicated regulations and legislation and immigration meant to crush the workers and small businesses.

As you can see, self-employment has been on a steady downward trend since 1967: https://www.bls.gov/opub/mlr/2010/09/art2full.pdf

11
rodionos 3 days ago 3 replies      
Number of H-1B visas issued for Indian citizens, 1997-2015: https://apps.axibase.com/chartlab/1bc51064

Top H-1B countries: https://apps.axibase.com/chartlab/04040e14

12
darkdreams 3 days ago 2 replies      
Slightly off topic. My understanding is that for every H-1B application that is filed, the US government takes an ACWIA fee that is supposed to be used for improving the competitiveness of the American worker and providing scholarships.

From https://www.uscis.gov/forms/h-and-l-filing-fees-form-i-129-p...

"SEC. 414 Collection and use of H-1B nonimmigrant fees for scholarships for low-income math, engineering, and computer science students and job training of United States workers".

I am curious whether they could quantify/prove/debunk the skills shortage theory using the scholarships that are given. Does anyone know about this?

13
bischofs 3 days ago 0 replies      
Citing the low unemployment rate of STEM graduates to indicate that native workers have nothing to worry about is silly. Basic economics says that wages do not increase until full employment is reached. I may have a job but my wage would be higher if I wasn't competing with 100,000 H1-Bs
14
itissid 3 days ago 0 replies      
As an H1B visa holder, I think the issue of abuse at its core has to do with two things (at least):

1. Too many tech firms require/opt for low cost workers to subsidize their payroll bill. If you raise H1B salaries, firms that have thin margins might automate and outsource. I think scrapping the lottery and adding the market-based approach to H1Bs, like in the House bill that was proposed in January, is a better alternative; it would force firms to be more productive and boost payroll more organically.

2. The program is underfunded; it's entirely fee-driven, and the fees are clearly not enough to prevent the abuse we keep hearing about so much.

Remember, top-end Silicon Valley firms don't really need to care about the salary issue as long as the reform does not dry up the talent coming to the US completely.

15
forgotAgain 3 days ago 3 replies      
A constant theme for support of H-1B's is that they supply the US with an irreplaceable resource for starting new technical companies.

Despite this, I have not heard reference to any individual who came to this country on an H-1B visa to work for an Indian outsourcing company and later participated in a significant, successful startup. Honest question: does anyone know an example of this having occurred?

16
winter_blue 3 days ago 1 reply      
This article repeats the false idea that you cannot switch jobs to other companies. You can switch jobs to other companies that are willing to transfer your H-1B visa. Plenty of tech companies will happily do a visa transfer. Yet this facetious lie is often repeated. It just goes to show how inaccurate and poorly-researched this article was.
17
paulus_magnus2 3 days ago 1 reply      
(EU citizen point of view) I'd never consider a role in the US that pays less than $150k (outside SV) or $200k (SV/NY).

H-1B is really bad because the holder has limited bargaining power vs. US citizens, hence he's forced to accept a lower salary when competing for jobs.

There's also no clean way to allow talent to move around the world.

A fair solution would be to agree on visa-free movement of specialists earning above a certain threshold ($100k, $150k etc), even if it starts with a group of "most favoured nations".

18
unsupak 2 days ago 1 reply      
Everyone knows that most H1-B visas are used up by Indian firms for Indian nationals, not by American firms for international talent. All the politicians have known about this abuse for years, but this is part of US foreign policy and lobby group efforts. India is considered a natural ally against China and the Muslims.
19
ausjke 3 days ago 0 replies      
This has been true for 15 years at least, which is one reason why Wipro/Infosys was getting most IT assignments for the US market. When you look into how it worked, it is amazing to realize the way the system was abused while no action was taken for decades.
20
mattfrommars 3 days ago 1 reply      
Well, it takes The Economist to point out this fact, which I've been saying for ages. None of them believed that Indians are favored to get H1Bs compared to other nationalities. Indians take so much pride in Indian workers in these large tech firms and other places, instead of recognizing that they are favored. Why not give other nationalities a chance to see what is going on? I've seen it happen at a firm like Microsoft, with an HR manager being Indian and favoring an Indian intern and granting him the job over a Pakistani developer who was without a doubt the better performer.

Would love to see some H1B crackdown happening.

21
nicholas73 3 days ago 0 replies      
I would go further and say that underpaid H1-Bs are a straw man for the real mechanism that depresses American wages. Even if imported workers are paid exactly market rate, that still means market prices do not go up, as there is no one to bid up salaries. That means people are not being compensated for delivering high value, or for developing skill in a difficult or rare area.

There is a lot of hot air between the salary a person would accept versus the value they generate for a company. Having some unemployed people makes it so that the balance always tips towards the low end.

22
caseysoftware 3 days ago 0 replies      
> That is not a good argument against them

It's odd that the Economist has this sub-head but then goes on to make the case that Indian outsourcing companies are the biggest consumers and pay a fraction of what the others are paying... so they are abusing the system.

The rumor (proposal?) is that Trump is going to shift the minimum salary from $60k to $130k which makes it closer to software dev salaries in Seattle, SF, NYC which I think would address this.

23
CodeSheikh 2 days ago 1 reply      
Why don't we just stop accepting visa applications from the Indian outsourcing firms (TCS, Wipro and Infosys) for one year and see how it unfolds? It will resolve the ongoing debate -- at least prove it one way or another. From a candidate's point of view, I am all in for finding new opportunities in a foreign prosperous land. But gaming the system is not great for the local US economy. Maybe have a new visa category with temp status and easy renewals every six months. If those candidates are good enough, they can easily find regular H1-B jobs with regular American companies. Some foreigners spend a lot of money and go through severe hardships to get educated at American universities, adhering to American socio-economic values. If they are good enough, they get hired by American companies with regular pay. I think it is unfair for them to get categorized under the same blanket rhetoric of "abusing H1-B visas".
24
nottorp 3 days ago 4 replies      
Isn't an H-1B a form of indentured servitude? As in, if you change jobs you lose your visa?

If yes, you don't get the competent ones but the cheap ones who have no choice. The good ones work on their terms for whomever they please.

Edit: combine that with kls's answer about incentives, and you see why this visa system isn't quite working.

25
leovonl 2 days ago 0 replies      
Funny, you'd think non-immigrant visas for the USA would be much stricter than for Canada, but lately I've been realizing that's not the case at all. That's especially true for rules applying to the companies (like salary requirements).
26
Technophilis 3 days ago 0 replies      
The article gives a good overview of the "H-1B situation". However, if you want to dig deeper, here is some data I put together: http://h1bpay.com/blog/2017/01/30/h-1b-visa-basics-applicati...

Basically, only Microsoft is among the top 10 sponsors, and none of the major sponsors is among the top 10 average salaries.

27
perseusprime11 3 days ago 3 replies      
This is the one thing that is keeping the salaries low. I can easily imagine a good software engineer getting paid at least $250-300K if we didn't have the H1-B visa system.
28
aaron-lebo 3 days ago 5 replies      
Have there been attempts at pumping H1B money and similar efforts into schools?

I've always wondered if we want minorities to code and we are worried about job loss in the US, why not stop dumping money into hiring foreigners and instead dump it into CS programs at community colleges?

It would take some time before you could build up a domestic work force as talented as foreigners, but would it not solve several issues? Or is it just not practical for other reasons?

29
nashashmi 2 days ago 0 replies      
I just want to put this out there: There are many fields outside of computer science and engineering where the average salary is not very high, and yet those fields have a desperate need of talent, not just because there is some niche thing only someone from outside can do, but also because the number of people going into the field are dangerously low.
30
hackerboos 3 days ago 0 replies      
Planet Money republished their podcast on immigration recently: Episode 436: If Economists Controlled The Borders

http://www.npr.org/sections/money/2017/02/08/514152963/episo...

31
comments_db 2 days ago 0 replies      
I can confirm my employer pays me at or above market rate. I am on an H1B, but I have never been part of the outsourcing firms.
32
pinaceae 3 days ago 1 reply      
yes, and it's filling shit jobs that no American CS grad wants to do.

who wants to do outsourced QA for Oracle? menial, mind numbing clicky work.

who wants to maintain monster codebases built 20 years ago for some internal bullshit system at a Fortune 500?

most software work by now is akin to facilities management. hence you give it to motivated foreigners, just like in farming, etc.

33
Consultant32452 2 days ago 0 replies      
I wonder how many people against H-1Bs are also opposed to Trump's worker protectionist policy re: NAFTA. Seems that at least philosophically they're aligned.
34
omouse 3 days ago 0 replies      
BAM, there is no fucking tech labour shortage! They're just hiring outsourcing firms instead of training. Fucking knew it.
35
harichinnan 2 days ago 0 replies      
Many of the people who are against H1B are missing the bigger picture.

1. H1B employers pay wages based on the numbers the Department of Labor provides them.

2. The Labor Department provides an à la carte menu of titles to choose from for the same job. Employers generally choose a title with lower wages. Programming Analysts and Software Engineers do essentially the same job, with up to a $50K spread in wages.

3. The limit on H1Bs works against American employers, who usually pay much higher wages than the Indian consultancies. The American employers have to compete with a flood of applications from India. This forces big companies to subcontract to Indian firms.

4. Companies in India randomly select employees to file visas for, irrespective of whether they have actual business in the US. Many H1B recipients don't actually come to the US. Many wait in India for years before their companies arrange an actual job in the US. The lottery forces everyone to game the system.

5. The US cannot actually bar Indian consultancies from operating in the US. That would be denying market access to India. Indian companies do 60 billion dollars' worth of business in total. That's paltry compared to the trade between the US and India. The US sells everything from Boeing to Starbucks, and any restrictions on visas could invite a trade war from India too.

6. People who oppose H1Bs actually dream of earning Wall Street salaries in Silicon Valley. That's a pipe dream unless you work in quant engineering jobs at hedge funds and HFT firms. You can't build businesses that would pay the average worker 400K in Silicon Valley.

7. Restrictions on H1Bs would move American programming jobs to Asia, much like the manufacturing jobs of the Rust Belt.

8. The world is a much bigger place than America. An Indian worker taking a job doesn't necessarily mean a loss of opportunity for an American worker. It's not a zero-sum game. H1B workers take up tech jobs. In most big companies, the ratio of tech to non-tech jobs is at least 1:6. Look at Amazon creating 100K non-technical jobs in the last few years. A few million technology workers in America make most of the high tech for the rest of the 7 billion human beings on the planet. Trade restrictions are not something you need. There could be a Chinese firewall in every country to protect local jobs and to rob the Googles and Facebooks of business opportunities. There would be more Baidus, Youkus, WeChats and Alibabas in every country on the planet.

9. There's actually such a thing as a skills shortage at every level. Even at the blue-collar end, many Americans cannot pass tests at a 9th-grade level and are functionally illiterate. Tech jobs require a whole lot more skills than the coal miners in Michigan will be able to take up. https://www.nytimes.com/2017/01/30/education/edlife/factory-...

To summarize: if you are concerned about H1Bs lowering wages in your market, campaign for worker mobility and higher wages for H1Bs, and the market will work it out. Americans are best at creating free market solutions to problems. The H1B lottery system and the restrictions are a socialist solution that works best in restricted economies. Also campaign for startup visas, both for entrepreneurs and employees. The French Tech visa could be a model: http://visa.lafrenchtech.com/. This would help many of the people languishing on H1Bs to create more companies and more jobs here in the US.

36
sergiotapia 3 days ago 0 replies      
So President Trump was right about this.
37
redsummer 3 days ago 0 replies      
You could make the Sanders and Trump people happy if there was free college and education for economically underprivileged Americans for STEM / Programming jobs. It doesn't seem likely given the political divide.
38
calvinbhai 3 days ago 2 replies      
Because citizens of countries other than India and China don't really need H1 visas to work in the US.

Many countries have treaty work visas (Canada/Mexico citizens can work on a TN visa).

And for those who study in the US: if their employer starts the green card process, they can get their EAD before their OPT expires.

Only Indians and Chinese have to rely on h1b.

19
Takeover.sh Wipe and reinstall a running Linux system via SSH without reboot github.com
504 points by tambourine_man  2 days ago   75 comments top 15
1
gizmo 2 days ago 5 replies      
Pretty cool, although I'm pretty sure I would never use something like this.

What has saved my skin on a number of occasions is the ability to boot remote servers into rescue mode and chroot into the broken system. That way you can use package managers, all your diagnostic tools, and everything else the boot image doesn't provide.

Basically you just mount the different partitions and then chroot just swaps /proc /sys /dev of the rescue image with the real ones, and BAM you're back in business.

For details see threads like: http://superuser.com/questions/111152/whats-the-proper-way-t...

I know that for many of you this isn't rocket surgery, but if you're someone who would have to google for "chroot" when you boot into a rescue image and discover you can't do anything, you might just remember this post.
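
A minimal sketch of that rescue-and-chroot flow, assuming a simple layout with root on /dev/sda2 and a separate /boot on /dev/sda1 (device names are illustrative -- adjust for your box):

    # from inside the provider's rescue image:
    mount /dev/sda2 /mnt              # the broken system's root partition
    mount /dev/sda1 /mnt/boot         # separate boot partition, if any
    mount --bind /proc /mnt/proc      # hand the chroot the live /proc, /sys, /dev
    mount --bind /sys /mnt/sys
    mount --bind /dev /mnt/dev
    chroot /mnt /bin/bash             # package managers and tools work again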

2
predakanga 2 days ago 2 replies      
For anyone interested in adding this to their toolkit, I would suggest reading this StackOverflow answer: http://unix.stackexchange.com/a/227318/189858

In short, the answer details how to switch your running system to use an in-memory-only root filesystem, without restarting. This allows installing a new OS, resizing the OS disks, etc.

It's a risky operation, but the linked answer covers many pitfalls that you might run into - I recently used it to shrink the root partition on a remote server, and very much appreciated the detail.
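
Very roughly, the trick has this shape (a simplified sketch only; the linked answer covers the daemon restarts and careful unmounts that make it actually safe):

    mkdir /tmp/tmproot
    mount -t tmpfs -o size=2G none /tmp/tmproot   # RAM-backed new root
    cp -ax / /tmp/tmproot                         # copy the running system into it
    mount --make-rprivate /                       # needed before pivot_root on systemd
    pivot_root /tmp/tmproot /tmp/tmproot/mnt      # old root is now under /mnt
    # restart init and any daemons still holding files on the old root,
    # then umount /mnt -- the original disk is free to wipe or repartition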

3
notaplumber 1 day ago 0 replies      
This sounds similar to the more cleverly named FreeBSD Depenguinator project, which could be written over the top of a remote Linux server, replacing it with FreeBSD, without console access.

If you have remote console access, a similar thing can be done for OpenBSD by dd(1)'ing a miniroot ramdisk install image.
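
Something like the following, assuming you have that console access to recover if it goes sideways (the image name and target disk are illustrative):

    # write the downloaded miniroot installer over the start of the boot
    # disk, then reboot straight into the OpenBSD installer:
    dd if=miniroot.img of=/dev/sda bs=1M
    reboot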

4
rdslw 1 day ago 1 reply      
Another nice trick of this family (with reboots, or without, by using systemd-nspawn) relies on clever btrfs usage. Long story short:

* use btrfs, and create your main root filesystem as a btrfs subvolume, plus another btrfs subvolume for snapshots (also a subvolume of the master btrfs partition)

* before starting any experiment (e.g. installing the whole of GNOME and 500 different packages you MIGHT WANT TO REVERT in the future), create a snapshot of the current filesystem (btrfs subvolume snapshot / /.snapshots/yournameofsnap)

* experiment in any way :)

* switch between the old root (the snapshot you created) and the new one with (btrfs subvolume set-default)

* delete any of them (btrfs subvolume delete)

btrfs copy-on-write allows all of these commands to happen instantly, without (almost) any actual copying. Also, booting from both volumes is possible without any additional steps, as long as the master btrfs partition is the one booted from UEFI. A full cycle is sketched below.

https://wiki.archlinux.org/index.php/Btrfs
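
Spelled out, one round of that snapshot-experiment-rollback cycle might look like this (the snapshot name and the subvolume ID are illustrative):

    btrfs subvolume snapshot / /.snapshots/pre-gnome   # instant, copy-on-write
    # ...install GNOME and the 500 packages, break things freely...
    btrfs subvolume list /                             # look up the snapshot's ID
    btrfs subvolume set-default 257 /                  # roll back: boot the snapshot next time
    btrfs subvolume delete /.snapshots/pre-gnome       # or, if all went well, drop the snapshot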

5
camtarn 2 days ago 1 reply      
Seeing if I understand what this is doing: this keeps running the same Linux kernel and kernel modules, but swaps out absolutely everything else up to and including the init system - is that right?
6
simon1573 2 days ago 2 replies      
I guess this could be really useful for installing distributions that are not available at some VPS providers.
7
ce4 2 days ago 0 replies      
Reminds me of Debian Takeover from more than 10 years ago :-)

https://wiki.debian.org/DebianTakeover

8
zimbatm 2 days ago 0 replies      
https://github.com/elitak/nixos-infect is similar but doesn't require pivoting root.
9
Aissen 2 days ago 0 replies      
FYI, there's vps2arch that does the same thing with a different approach: https://github.com/drizzt/vps2arch

Edit: it doesn't really do the same thing. vps2arch could be implemented on top of takeover.sh for better reliability.

10
nashashmi 2 days ago 3 replies      
Somebody correct me if I am wrong, but this script somehow allows the session to live in RAM. Once the OS is running directly from RAM, the hard drive can be wiped and a new OS can be installed. The system is then booted to run off of the hard drive.
11
dredmorbius 2 days ago 1 reply      
This is conceptually similar to the chroot installation method, which has been a documented, if not entirely standard, method on Debian for quite some time.

https://www.debian.org/releases/stable/amd64/apds03.html.en

https://wiki.debian.org/chroot
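
On Debian the method boils down to roughly this, with debootstrap doing the heavy lifting (target partition illustrative):

    mkdir -p /mnt/target
    mount /dev/sdb1 /mnt/target       # empty partition for the new system
    debootstrap stable /mnt/target http://deb.debian.org/debian
    chroot /mnt/target /bin/bash      # then set up fstab, kernel and bootloader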

12
geoffmcc 1 day ago 1 reply      
I wonder if this would help me switch from Ubuntu Desktop to Ubuntu Server on my laptop that has a broken screen.
13
technologyvault 1 day ago 0 replies      
Wish I had known about this hack before today, even if it is just experimental at this point.
14
NGTmeaty 2 days ago 0 replies      
Holy shit, that's really cool.
15
dpweb 2 days ago 3 replies      
removed. No delete on HN comments? interesting
20
GitLab's Secret to Managing Employees in 160 Locations: Write Everything Down ycombinator.com
515 points by craigcannon  4 days ago   308 comments top 32
1
bryanh 3 days ago 9 replies      
160 employees remote is impressive and commendable. Zapier is fully remote as well (but half the size in employee count). I'd say "write everything down" is a great shortcut to the sorts of practices you need to cultivate.

We've also noticed that over-communicating is critical but hard - it is surprising the things that are "yeah yeah, we know" to some but are "oh we're doing that?" to others. This is only natural - organizations become complex as they grow, and individuals are busy doing their thing. You often have to bring the important data to them.

On another note, working remote is awesome. I recommend everyone give it a spin once in their careers - but try to find a team that embraces it. I've heard mixed experiences from those who were the single remote person on a team.

2
deepaksurti 3 days ago 5 replies      
>> 2:41 GitLab values boring solutions: our product should be exceptional

Exceptional products have exceptional UX. GitLab IMHO has the worst UX of all git-based products out there; I'd much rather take BitBucket over GitLab. I tried using GitLab, but no, I would much rather pay the $7 to GH for my private repos.

I sincerely hope they make an exceptional product. And 'should' better be 'must'!

3
dandersh 3 days ago 4 replies      
The most frustrating thing about working remote was the complete lack of documentation around anything -- meetings, requirements, internal frameworks/libraries, etc. Good for them for emphasizing documentation.

I lost track of the number of times I was told to "look at the source code" when asking about documentation for the internal framework.

Also fun was being assigned a feature and having its functionality explained over the phone, with any subsequent follow-up being met with "Didn't we already talk about this?"

I never would have thought that a remote based team would have the worst documentation out of anywhere I've worked, but that's exactly what happened.

4
manojlds 3 days ago 6 replies      
After the mess-up, I don't really like seeing these posts about GitLab. Maybe this is their problem after all.
5
Kiro 3 days ago 2 replies      
Given the complete backlash I'm seeing on HN it seems like transparency actually hurt you. Very sad.
6
JustSomeNobody 3 days ago 2 replies      
Wow. You all in the comments must be a bunch of perfect people, huh?

Good grief.

Also, nice cameo by the office cat.

7
Cofike 3 days ago 1 reply      
Judging by the number of joke and low-effort responses in this thread, I think they should have held this article for a little longer.
8
geoelectric 3 days ago 0 replies      
While I was at Mozilla, John O'Duinn gave a pretty great presentation about this sort of stuff.

The core message (mixed with my own takeaway) was that you have to consider all of your offices, even HQ, more like just another field office or coworking site--no location is "central" compared to others. As long as you consider accommodating remote people to be a separate task, it's a task that can be deprioritized. Ideally, anyone in your office should be able to work remotely at moment's notice with little-to-no change in procedure.

Much of his presentation was outlining concrete techniques towards making this actually doable. He still has the slide deck online.

http://oduinn.com/blog/2014/11/09/we-are-all-remoties-nov201...

In practice, of course, real life was imperfect and there's no question that you take a productivity hit over people in the same room. But if you do want a heavily location-agnostic organization as a core value, his take is a nice start.

9
nedsma 3 days ago 0 replies      
Glad that GitLab is pushing the remote-working boundaries even further. Yes, document everything, write easy-to-understand code, invite your remote coworkers to call you whenever they need help, leave feedback, praise work as if they're local. It all matters, and even more so when remote.
10
smarx007 3 days ago 0 replies      
It would be so much better if YC made it a podcast instead (or in addition). It would be a perfect thing to listen to on the go.
11
peterwwillis 3 days ago 0 replies      
I think the worst team I've ever worked on was the one that had a team lead that literally never wrote anything down. Everything was verbal, and what wasn't verbal was private messaged. We would have the same discussion three times because nobody wrote it down the first or second time. Nobody even knew what the changes to the formal design were, because they never had a documented change review.

I always tell people on my team that if it isn't written down, it doesn't exist. When you get into the habit (and learn how to manage all that information) it really saves your bacon.

The other thing that would have saved them from the backup drama was learning how to document and date your to-do list. Suddenly someone notices that you've had a test process for backups on the to-do for over a year and they bring it up at the next meeting.

And another thing: in team meetings, people can write their own meeting notes so everyone becomes responsible for documenting their responsibilities and what affects them. It's easier to do this remote than at a stand-up, because you're already at your keyboard.

12
siliconc0w 3 days ago 1 reply      
People seem to have some subtle insecurities around broadcasting their activities, which contributes a lot to the problem of distributed teams. Ideally all communication goes to everyone, and everyone can apply their own client-side filters for what they want to care about. The problem is people want to present a certain face up the org and a different one to their peers, so they PM each other and create isolated channels of communication (distribution lists, slack channels, etc). Now you have some cognitive overhead of whom to loop in and when, which creates other complications, because it means that if you're receiving a message it's explicit rather than implicit. TLDR - broadcast everything to everyone by default and create a culture of transparency.
13
ceejay 3 days ago 1 reply      
I think hindsight will reveal that the things that make a "distributed company" successful are really the same things that make a "localized company" successful. I think it's just that having all employees on-site probably makes it easier for companies to "fake it" and stumble into success through sheer grit and determination. Probably a lot of times even when they have less-than-adequate (but highly motivated) human resources.
14
gravypod 3 days ago 2 replies      
I can't wait for a remote-work company that has a policy like this to get into some kind of legal trouble that warrants subpoenas. This is a prosecutor's gold mine. I feel bad for the defense lawyers already.
15
newsat13 3 days ago 0 replies      
Last I checked a year ago, GitLab had just 80 employees. So, it's now doubled. That is quite some growth. More often than not growing companies just collect lots of employees and then a startup comes and beats them...
16
whoiskevin 3 days ago 0 replies      
I have to watch a video that then tells me to write everything down? Sorry, I couldn't resist. I believe in and practice that as much as I can, because it makes things easier regardless.
17
Paul-ish 3 days ago 0 replies      
This is interesting, as I interned at Mozilla, and they have a large number of remote employees. There was only one person (other than me) on my team in my location. Mozilla also records a lot of things, but the intent is more to publish that stuff publicly, in the spirit of openness towards the community. For example, all my team meetings had public notes. I wouldn't be surprised if this tendency to record everything for the sake of the community also has positive benefits inside Mozilla.
18
winteriscoming 3 days ago 1 reply      
More than knowing how they manage remote employees, at this point, with more than a week since the data loss incident, I would be genuinely curious to know if their backups are now functional and what plan they have put in place, or are planning to, to verify that backups work.

Furthermore, I won't be surprised if they have seen a more concentrated spam attack after the news of this data loss surfaced.

19
MrFurious 3 days ago 0 replies      
Obviously, GitLab doesn't have the secret to the best uptime.
20
jgalt212 3 days ago 0 replies      
160 employees? That seems like a lot to support a product that is a wrapper around another product. I know the lede is skeptical, but does anyone have any idea how those numbers break down? I.e., is there a large number of sales/support staff due to complex installs?
21
chmars 3 days ago 0 replies      
What kind of tool would you recommend for writing everything down, especially a self-hosted one?
22
uladzislau 3 days ago 0 replies      
Obviously what they say and what they do are two different things based on the recent events.
23
isaac_is_goat 3 days ago 0 replies      
And yet their database gets blown away, and now their Redis is acting up. Go GitLab.
24
WalterSear 3 days ago 0 replies      
And don't store it on gitlab.
25
the-dude 3 days ago 1 reply      
I have the impression this is being moved off the frontpage in an accelerated manner ...
26
tra3 3 days ago 0 replies      
How do you organize internal knowledge base?
27
krisdevelops 3 days ago 1 reply      
write everything down where? and in what format?
28
hartator 3 days ago 0 replies      
or "Shut Everything down". Ok, that was lame.
29
Philipp__ 3 days ago 1 reply      
Everybody is prone to making a mistake! As long as you learn something from it, so that it never happens again, it's OK. And I really don't get why people give them a bad rep for this post. I mean, it's obvious why it's posted and we all know what happened a week ago, but consider their size and scale; it's not easy to manage so many people remotely.
30
johansch 3 days ago 0 replies      
I mean, really: after figuring out after the fact that five out of five backups were malfunctioning - go here to read our secret to managing 160 employees remotely?

Nah... no thank you. I think I will get my management advice from a company that is not totally broken. I don't care how transparent you are, or how remote-worker friendly you are, if you can't do the basics right. You are in the data storage business! Shut down the company and refund the money to the investors.

31
pzh 3 days ago 0 replies      
So after losing a ton of user data and repositories about a week ago, because a remote engineer couldn't be bothered to check which remote machine he was issuing 'rm -rf' commands to as superuser, now GitLab is teaching us its 'secret' to success?
32
yunolisten 3 days ago 1 reply      
> Write Everything Down

Then rm -rf the paper....

https://www.theregister.co.uk/2017/02/01/gitlab_data_loss/

21
CIA Declassified Coldwar Russian Jokes [pdf] cia.gov
458 points by bifrost  6 days ago   260 comments top 50
1
dragandj 6 days ago 2 replies      
Wlodek, a rural farmer, has decided that it might be safer not to keep his money under the mattress. So he takes his horse and cart and goes off to the nearest town to talk to the bank.

"Right," says Wlodek, "I want to make sure my 50 zlotys are safe. Like, what happens if someone robs you and takes everything in your vault?"

"Oh, don't worry about that!" says the smooth bank manager. "The main branch in the city would cover you!"

"Okay," says Wlodek. "But suppose the whole bank went bust? I know these things happen."

"Well," says the bank manager. "People have a right to be worried, of course. So that you can feel completely secure, the Polish Central Bank still guarantees your savings."

"But suppose the Polish Central Bank ran out of money?" asks Wlodek. "What would happen then?"

"This is very hypothetical," says the bank manager. "But if it ever happened, we have a treaty with the Soviet Union. They would still make sure you weren't out of pocket."

"But what if the Soviet Union went bust?" asks Wlodek stubbornly.

The bank manager sighs. "Look," he says. "Wouldn't that be worth 50 zlotys?"

2
guscost 6 days ago 3 replies      
A bit different but here's my favorite European joke:

An Italian politician invites his Greek politician friend over for a visit. The Greek pulls up in front of an elegant manor house and is welcomed by the staff. He walks in through a foyer with marble floors and a huge marble staircase with ornamental banisters and a crystal chandelier. They walk through to a lovely veranda overlooking the river, and sit down to eat.

The Greek is very impressed with everything and asks "How did you manage to get this place?"

The Italian points to a shoddy concrete bridge over the river and says "See that bridge over there? It was supposed to be a steel suspension bridge, but we found a lower bidder to build that one instead, and with all the extra money I was able to buy this!" The Greek compliments his friend on the house, they finish a delicious meal while talking politics, and the next day they part ways.

Several months later the Greek invites the Italian over. The Italian arrives at an enormous estate with a marble facade. He walks in to see an even bigger staircase, and a banister and chandelier that are trimmed in 24 karat gold! They sit down for a meal on a huge terrace with a staggering view of the harbor.

The Italian is completely blown away, so he asks his friend "How on earth did you afford this place? It's fantastic!"

The Greek says "Well, see that bridge over there?"

The Italian says "What bridge?"

3
zokat 6 days ago 3 replies      
I like this one:

A Russian engineer got fed up with having all the responsibility and a low salary, so he moves to another city and pretends to be an ordinary worker: same salary, but peace of mind. However, not long after, the Communist Party sends him to evening classes. On his first day there, at maths class, he is asked for the formula for the circumference of a circle, but for some reason he cannot remember it offhand, so he goes to the blackboard and tries to work it out with a line integral. After exhausting the whole blackboard he finally gets the result:

-2πR

Then all of a sudden he hears the whole class whispering to him: "Change the direction of integration!"

4
lb1lf 6 days ago 2 replies      
Three men meet in a GULag camp, and conversation turns to why they are there.

"I got twelve years hard labour for speaking out against Gennady Karasov", says the first.

"That's funny, I got twelve years hard labour for supporting Gennady Karasov!" says the second. Attention turns to the last man.

"I am Gennady Karasov".

-----

Three men have to share a hotel room in Chelyabinsk during a congress. Naturally, in the evening, they start drinking. One thing leads to another, and they find themselves telling political jokes. Concerns that any of the others may be KGB informants or that the room may be bugged are readily dissolved in alcohol. Everybody is having a great time.

One is tired and really feels like sleeping; he decides to pull a joke on the others. He excuses himself, runs to the lobby and gives the receptionist a few bills. "Please send someone to my room with a bottle of vodka, some rye bread and salt in ten minutes." He then returns to the room.

After a few minutes, he notes to the others that stocks of refreshments are running low. "Not to worry, comrades! I have good contacts."

He leans over towards the potted plant in the corner, grabs it and loudly says, as if speaking into a microphone:

"Comrades at the listening post, this is lieutenant Dyatlov! We urgently require a bottle of vodka, some rye bread and salt to our room! Make haste!"

The others laugh their asses off - until a minute later, there's a knock on the door and vodka, salt and rye bread are served.

You could hear a pin drop. Our man goes to sleep, enjoying the quiet.

When he wakes up in the morning, the others are gone. A note is left on the table. "Comrade! A couple of your jokes yesterday would easily get you to Siberia! (The one about Stalin's maid, while hysterical, could get you in front of a firing squad!!!) However, we liked that room service joke so much, we'll let you off the hook this time. Sincerely, KGB."

5
kamyarg 6 days ago 5 replies      
Here is my favorite one:

The KGB, the FBI and the CIA are all trying to prove that they are the best at catching criminals.

The Secretary General of the UN decides to give them a test. He releases a rabbit into a forest and each of them has to catch it.

The CIA goes in. They place animal informants throughout the forest. They question all plant and mineral witnesses. After three months of extensive investigations they conclude that the rabbit does not exist.

The FBI goes in. After two weeks with no leads they burn the forest, killing everything in it, including the rabbit, and make no apologies: the rabbit had it coming.

The KGB goes in. They come out two hours later with a badly beaten bear. The bear is yelling: "Okay! Okay! I'm a rabbit! I'm a rabbit!"

6
mlillie 6 days ago 7 replies      
OK, I need to add my favorite Russian joke here:

A German and a Russian die. Neither has been the best person in their life, so they get sent down there. When they arrive in hell, the devil says, "Well, especially bad people have been dying lately, and we're all full up. I can only accept one of you, the other will go to purgatory and get a chance at redemption."

He proposes a simple test of their human decency: Each man is given a dog, a huge crate of sausages and one month to teach the dog a trick.

One month later, the devil returns to the German, who has clearly bonded with his now-plump dog. "Alright, let's see what you've got!" the devil says. The German plucks out a sausage and proceeds to wiggle it in the air. The dog, perfectly balanced on its hind legs, does an acrobatic pirouette. "Wow!" says Satan. "Impressive!"

He walks over to the Russian and his dog, whose relationship seems strained. The dog looks like a wild animal, but the Russian seems satisfied enough. "OK, show us your trick," the devil says. The Russian plucks out a sausage and proceeds to wiggle it in the air. The dog, wide-eyed, says "Please, Vanya, just one sausage!"

7
ksrm 6 days ago 1 reply      
There's a great one at the start of Slavoj Zizek's Welcome to the Desert of the Real:

In an old joke from the defunct German Democratic Republic, a German worker gets a job in Siberia; aware that all mail will be read by censors, he tells his friends: "Let's establish a code: if a letter you get from me is written in ordinary blue ink, it is true; if it is written in red ink, it is false." After a month, his friends get the first letter, written in blue ink: "Everything is wonderful here: stores are full, food is abundant, apartments are large and properly heated, movie theaters show films from the West, there are many beautiful girls ready for an affair; the only thing you can't get is red ink."

8
ChuckMcM 6 days ago 3 replies      
I like this one:

An American tells a Russian that the United States is so free he can stand in front of the White House and yell, "To hell with Ronald Reagan." The Russian replies, "That's nothing. I can stand in front of the Kremlin and yell, 'To hell with Ronald Reagan,' too."

9
nicolas314 6 days ago 0 replies      
Three guys just arrived in the Gulag. They ask each other what they did to end up there. First one says:

- I came to work 5 minutes late, I was sentenced to 10 years for sabotage

Second one says:

- I came to work 5 minutes early, I was sentenced to 10 years for espionage.

Third one says:

- I came to work precisely on time, I was sentenced to 15 years for contraband of foreign clocks.

10
RalphJr45 6 days ago 0 replies      
If anyone is interested, a book titled Hammer And Tickle: A History Of Communism Told Through Communist Jokes by Ben Lewis has a few gems in it:

An inspector is at a factory conducting an inspection. He addresses one worker:

'What are you doing here?'

'Nothing.'

'And what do you do here?' he asks another.

'Nothing.'

He writes in his report: 'The second worker may be released for unnecessary duplication.'

11
chx 6 days ago 0 replies      
Oh my, old socialist jokes? Here's one from Hungary: The lion calls the congress of forest animals and declares: thanks to the tireless work of our scientists, we now know two times two is six. Everyone claps loudly. Only the old rabbit sighs to himself: the way I learned it in school, two times two is four. Two giant timber wolves appear, haul the rabbit away, and no one sees the rabbit for years.

A few years later, the lion calls the congress of forest animals and declares: thanks to the tireless work of our scientists, we now know two times two is five. Everyone claps loudly. Only the old rabbit, quite haggard now, sighs to himself: the way I learned it in school, two times two is four. Two giant timber wolves appear and invite the rabbit to the pub across the street and tell him: "Look, comrade, you can think whatever you want, but do not be so loud about it. Or do you want it to be six again?"

12
simonh 6 days ago 0 replies      
The Mayor of Moscow is getting ready to take his wife to the ballet.

"Why have you not put on your dress?"

"But darling, I don't have any dresses good enough for the ballet," replies his wife.

"Nonsense," the Mayor declares, opening the cupboard.

"There's this blue dress, this green dress, hello comrade Dzerzhinsky, and this lovely white dress."

https://en.wikipedia.org/wiki/Felix_Dzerzhinsky#Director_of_...

13
popeshoe 6 days ago 1 reply      
A Prague citizen came to the local police station in the fall of 1968. At the desk he claimed, "Officer, a Swiss soldier stole my Russian watch." The officer looked puzzled and responded, "I guess you mean that a Russian soldier stole your Swiss watch." The man replied, "It might be so, but remember that you said that. Not me."
14
DCoder 6 days ago 2 replies      
A man is walking down the street carrying a 12-roll pack of toilet paper. People surround him, all excited: "Where'd you get that?" The man answers, "I just got it back from the dry cleaner's!"

(source: Viktor Suvorov's fascinating Kuzkina Mat [1])

[1]: http://andrewnurnberg.com/book/kuzkina-mat/

15
ptaipale 5 days ago 0 replies      
It is 1985. Vladimir wants a car. A Lada. He submits the application to purchase one, and when it is processed, he collects the documents at the office. The clerk says: "You are now in the queue. Your Lada will be delivered on February 7, 2017."

Vladimir says: "I'm sorry, I can't pick the car up on that day. Do you have any other day that week?"

The clerk asks: "How come? The time is over 30 years away, how do you know you're not available?"

Vladimir: "The plumber comes that day."

16
edw519 6 days ago 0 replies      
A Soviet village that had only a bull purchased a cow from Irkutsk. But the cow would not let the bull mount her. No matter what the bull did, the cow moved the other way so that mating was impossible.

The villagers brought in an expert government official who inspected the bull and cow's behavior.

He asked the villagers, "Did you get this cow from Irkutsk?"

They responded, "Wow. That's amazing. How did you know?"

"My wife is from Irkutsk."

17
ommunist 6 days ago 1 reply      
Three men in a platzkart train are telling political jokes on their way to Moscow. A fourth comes in and hisses, "Oh my, do not tell those, you'll be taken." "Oh, come on, man," says one of the three. The alarmed guy then goes to the stewardess and asks for four cups of tea to be delivered to those platzkart seats in exactly four minutes. Four minutes later he continues to hiss at his travel companions: "Guys, if you don't stop telling political jokes, they will take you!", getting the same "Come on, man" in response. "All right," he says, "look here." He stands up near the small lamp in the corner and says, "Comrade Major, four teas to platzkart seats 14, 15, 16 and 17, please." The stewardess brings the tea. Everyone shuts up, and soon falls asleep. In the morning, our hero discovers that his three travel companions' seats are empty. He asks the stewardess whether they got off in Tver. "No," the stewardess says, "they were taken." "And why didn't they take me too?" "Because Comrade Major liked your joke about the tea very much."
18
hprotagonist 6 days ago 3 replies      
Reagan was actually not bad at telling Soviet jokes.

https://www.youtube.com/watch?v=mN3z3eSVG7A

19
andrei_says_ 6 days ago 5 replies      
Makes me wonder, what jokes is the CIA not declassifying?

Yes, this is a call for more Russian jokes.

20
paganel 6 days ago 2 replies      
Over here in Romania the Radio Yerevan jokes were some of the most famous (http://www.armeniapedia.org/index.php?title=Radio_Yerevan_Jo...). My favorite one:

"""

Q: Is it true that Ivan Ivanovich Ivanov from Moscow won a car in a lottery?

A: In principle yes, but:

1. it wasn't Ivan Ivanovich Ivanov but Aleksander Aleksandrovich Aleksandrov;

2. he is not from Moscow but from Odessa;

3. it was not a car but a bicycle;

4. he didn't win it, but it was stolen from him.

"""

21
DenisM 6 days ago 2 replies      
Russians have a user-generated site that contains every single Russian joke there ever was: http://anekdot.ru. The site has been up since the mid-nineties, IIRC.
22
nborwankar 6 days ago 0 replies      
Long line outside the general store in Moscow. Manager comes out and addresses the crowd: "Comrades, I have good news and bad news. Bad news: we have no more toilet paper. Good news: we have no food either."
23
atemerev 6 days ago 1 reply      
Back in the USSR, everyone thought it was the KGB who tracked these jokes and those who spread them.

Who knew it was the CIA all along? :)

24
GigabyteCoin 6 days ago 5 replies      
I understood every single joke on that list except this one:

>A man goes into a shop and asks "You don't have any meat?". "No," replies the sales lady. "We don't have any fish. It's the store across the street that doesn't have any meat."

I just don't get it at all... is it just a bad joke?

I understand that stores ran out of products in that situation, but if you just had to walk across the street to get what you were after, that doesn't really seem like a joke at the expense of the communist government.

25
leephillips 6 days ago 7 replies      
These jokes aren't bad. What is the story? Are these genuine Russian jokes, or jokes inserted there by the CIA?
26
pgtan 6 days ago 0 replies      
Here is a Bulgarian one: electricity and water meet in a typical socialist apartment building. "Sorry, I'm here only up to the second floor," says the water. "No need to apologize," says the electricity, "I'm here only for two hours."
27
CamMacFarlane 6 days ago 1 reply      
Best cold war "joke":

https://upload.wikimedia.org/wikipedia/commons/0/01/ReaganBe...

Found from searching for more like the OP's.

28
krzrak 6 days ago 2 replies      
The real question is: why did the CIA have classified Russian jokes on file?
29
cm2187 6 days ago 0 replies      
A lottery at the French communist party's fair (fete de l'humanite). The first prize is a week of holidays in Moscow. The second prize is two weeks in Moscow, the third three weeks...

---

A European tourist talks with a Cuban local.

- How is life under Fidel Castro?

- I can't complain

- Interesting, so not that bad

- Well, I really cannot complain

---

Alexander the Great, Caesar and Napoleon are watching a Soviet military parade:

- If only I had soviet tanks, said Alexander, I would have been invincible

- If only I had soviet planes I would have conquered the whole world says Caesar

- If only I had the Pravda, no one would ever have heard about Waterloo says Napoleon

---

A young officer waits in front of Stalin's office for his audience. The door slams open and Marshal Zhukov, furious, leaves the office grumbling "cockroach with a moustache". Introduced to Stalin, the young officer says it is his duty to report what he heard. Stalin calls back Zhukov and asks him, "What did you mean by 'cockroach with a moustache'?" Zhukov: "I was referring to Hitler, of course." Stalin then turns to the officer: "And who did you think he was referring to?"

---

A discussion at the gulag:

- what are you here for?

- for being lazy

- how is that?

- we had a few drinks with some friends then we started telling each others political jokes. I went home and before going to sleep, I thought I should report what happened to the KGB first thing in the morning. Well, my friends went to the KGB that same evening.

---

Why are there always three militiamen? One who can read, one who can write, and another to watch these dangerous intellectuals.

---

East German joke: why does toilet paper always have a double sheet? Because one copy always must be sent to Moscow.

---

Do you prefer socialist or capitalist hell? Socialist of course, either they run out of matches, or there is a fuel shortage, or the devils are away at a party meeting.

---

Tito asks his chauffeur to stop the car so he can talk with a peasant on the side of the road.

- Where are you going? asks Tito

- Just shopping. I will buy a few suits, several pairs of shoes and a new car. And my wife asked me to bring a few other things back: a fridge, a washing machine and a new TV

- You must be very wealthy

- I am, this is the socialist miracle

- That's right, and you know who I am? You owe all that to me!

- Oh, you are comrade Tito? I am sorry I didn't recognize you. With this big car I thought you were an American journalist

30
rbanffy 6 days ago 0 replies      
My mom leaked many of these secret jokes to me when I was a kid.
31
ommunist 6 days ago 1 reply      
Yes, the USSR was so secret that it took decades for the CIA to declassify its mortally dangerous jokes.

However, the "war of jokes" was an integral part of the Cold War, and do not underestimate it. There is a terrific Russian novel about this battle; it took off in the late '60s.

It is important to know that a class of jokes about the Russian Civil War heroes Petka and Vassily Ivanovich Chapaev was a viral campaign set off by the KGB to combat US/British jokes designed for Russians.

I like this one. "Can you drink a glass of vodka, Vassily Ivanovich?", asks Petka. "Sure thing," the boss answers. "How about two?" "No problem!" "And how about a full bucket of vodka?" "You know, only Vladimir Ilyich Lenin is capable of drinking a full bucket of vodka!"

32
jbuzbee 6 days ago 0 replies      
If collecting jokes tells you what the "common man" is thinking, the Russian intelligence agencies must be having a field-day with the current situation in the US...
33
jumasheff 6 days ago 2 replies      
Why on Earth had these jokes been classified?
34
gozur88 6 days ago 4 replies      
Some of these seem more like jokes the CIA would have liked Russians to tell each other than jokes they actually told each other.
35
guard-of-terra 6 days ago 0 replies      
Most of those aren't particularly funny.

Compare with the good selection from Wikipedia: https://en.wikipedia.org/wiki/Russian_jokes

36
zmix 3 days ago 0 replies      
This one is about propaganda:

Pravda's (the main press organ of the USSR) headline:

"The Great Republic of the Soviet Union has scored 2nd in an international competition of car-building nations. Just right behind the USA."

The rest of the message was withheld: There were only two nations taking part in that competition.

37
phs318u 6 days ago 1 reply      
Say what you will about the current political situation in the US, Donald Trump has been an absolute gift to comedians.

http://politicalhumor.about.com/od/Donald-Trump/ss/Best-Dona...

And in the interests of "balance":

http://jokes4us.com/celebrityjokes/barackobamajokes.html

38
nabla9 6 days ago 0 replies      
This is the best Russian joke/story I have heard; it's not political:

During the conversation among the newly found friends, one of the teachers (let's call him Dmitriy Petrovich) mentions that it is a medical fact that it is impossible to take a light bulb out of one's mouth once it has been inserted there. This meets active disbelief from his two opponents, who start questioning him as to what kind of light bulb he means and how come you cannot take it out if you can put it in. Dmitriy Petrovich replies that he is talking about a standard 100 Watt light bulb, such as the one lighting their room, but, lacking a medical education, he doesn't know the reason for not being able to remove it. The discussion heats up, and at some point one of his opponents decides that an experiment is necessary.

Mind you, all of the teachers in the room are PhDs in various fields of exact science. Obviously not one of them is a medic. The light bulb is then removed and the loudest opponent (let's call him Vladimir) puts it into his mouth. In a few seconds it becomes clear that Dmitriy Petrovich was right, and it is quite impossible for Vladimir to remove the light bulb due to a peculiar clenching of the jaw muscles.

After a short discussion the three friends decide to get Vladimir to a doctor. They get out of the hotel and stop a cab. They drive to the hospital, where they have to relate the story of the accident to the night nurse, who, after almost choking herself with giggles, calls the ER doctor. The doctor carefully examines Vladimir, and unexpectedly hits him with his fist in the back of the jaw. Vladimir's jaw falls open and the doctor returns the light bulb to Dmitriy Petrovich, explaining that Vladimir is not going to be able to use his mouth for a couple of hours due to the over-stressed jaw muscles.

The three teachers get back into a cab and start driving home, when the third teacher starts complaining that the other two are playing him for a fool, that it is medically impossible for such a phenomenon to exist, and that he is about to prove it. He puts the light bulb into his mouth, the cab makes a U-turn and speeds back to the hospital. At the hospital, the nurse starts giggling when the three men enter the emergency room, and after hearing their new story falls off her chair laughing. After a little while she calls the surgeon, who chuckles, hits the 3rd teacher in the back of the jaw and removes the light bulb.

The cab has left, so the three friends catch another one. Dmitriy Petrovich gets into the front seat and puts his mute friends, with their jaws hanging open, in the back. The cab driver is mildly surprised by the unusual company of an obviously drunk giggling man and two others looking like village idiots, and asks about it. Dmitriy Petrovich assures the driver that the other two are not idiots but most educated people, and that the problem is their small argument about a light bulb. After carefully listening to the whole story the driver asks what kind of light bulb he is talking about, and Dmitriy shows the hotel light bulb, saying "this one". "Impossible," says the cab driver, and in a few seconds the cab turns around and goes to the hospital.

When the nurse sees these guys the third time inside two hours, she starts having rather serious breathing difficulties from trying to laugh much harder than mother nature designed. After getting her back in shape, Dmitriy Petrovich makes her call the surgeon who, promptly hitting the cab driver in the jaw, takes the light bulb and smashes it on the table, saying that this should put an end to the story. The four men get back into the cab and drive to the hotel.

On the way they are stopped by the road patrol police unit. The policeman (militsioner) is very surprised to find that the only person able to speak in a car full of people is a rather drunk man who tells him a weird story about light bulbs. "I will be right back," replies the policeman and goes back to the roadside station. Dmitriy and his companions watch the light go off inside the station, and in a few seconds the policeman appears again. Using gestures he asks the people on the back seat to move over. The metal end of a light bulb is sticking out of his mouth.

The cab goes back to the hospital. The nurse becomes hysterical with joy. After a few minutes of recuperation she goes to the surgeon's office to call him. She opens the door and falls to the floor unconscious. In the doorway appears the surgeon, his jaw hanging wide open.

see also:

http://diaryru.com/blog/funrussia/man-light-bulb-his-mouth

http://englishrussia.com/2011/08/23/how-to-get-a-light-bulb-...

39
ssebastianj 6 days ago 0 replies      
This somewhat reminds me of "The Funniest Joke in the World" by Monty Python: https://en.wikipedia.org/wiki/The_Funniest_Joke_in_the_World
40
kchoudhu 6 days ago 0 replies      
Cold War humor was an odd beast. I came across this book in my father's collection a while back:

http://www.goodreads.com/book/show/3536429-no-laughing-matte...

It's a hoot.

41
sAbakumoff 6 days ago 1 reply      
Wow, the joke that ends with "I can stand in front of the Kremlin and yell 'To hell with Ronald Reagan' too" is a famous one. One can even find references to it nowadays in Russian blogs. The CIA is good!
42
ommunist 6 days ago 0 replies      
"What is the difference between the socialism and capitalism? Under capitalism one man exploits the others. And under socialism its the other way around".
43
JonRB 6 days ago 1 reply      
My understanding is that DDCI stands for 'Deputy Director of Central Intelligence' - What would this document have been for?
44
jwhitlark 4 days ago 0 replies      
http://redprimer.com/

The Red Primer for Children and Diplomats is a good example of this sort of humor in visual form.

45
_joel 6 days ago 2 replies      
Obligatory, the funniest joke in the world (weaponized) - https://www.youtube.com/watch?v=ienp4J3pW7U
46
47
Shivetya 5 days ago 0 replies      
Do a search of youtube for Ronald Reagan doing russian jokes, some were pretty good
48
JensRantil 6 days ago 0 replies      
I guess those jokes got old..
49
johnhenry 6 days ago 1 reply      
Context?
50
cjbenedikt 6 days ago 2 replies      
No jokes about state of US infrastructure? No laughing matter, I guess
22
Structured Procrastination: Do Less and Deceive Yourself archive.org
380 points by Tomte  2 days ago   78 comments top 31
1
neckro23 2 days ago 2 replies      
This is a longer and more lucid version of a quip I have about my procrastination:

"I can do anything in the world... as long as there's something more important I'm supposed to be doing instead!"

2
danm07 2 days ago 4 replies      
I think this is kind of a thumbtack solution to a more systemic issue. Self-deception, in my opinion, never works: you have to force yourself to be slightly dumber than you are.

The other reason why it doesn't work in the long-term is that you will always be working on things that are adjacent to what's truly important.

Clairvoyance is a better solution. Ever get stuck in a circular argument? After a while, you realize it's going nowhere and you walk away. Procrastination, at least in my mind, is almost the same thing. If I let myself observe the mundane things I do, I'll eventually get sick of myself and stop doing it.

Success in dealing with procrastination is really a question of how viscerally you feel a dead end coming, and also of making the necessary adjustments to remove triggers if it's difficult to stop yourself in the act.

3
treehau5 2 days ago 3 replies      
But if I put big, seemingly important but not really important things at the top and then work on the bottom ones, I will know that I am doing this, and resent myself. That's the biggest issue with my procrastination: my self-loathing.

(Ironically, here I am, reading this article about how to do the things I am supposed to be doing, with at least 5 things that need to be done before this week is over)

4
tammer 2 days ago 1 reply      
My tip? Make your to-do list your Twitter. I.e., when you get that ping of dopamine depletion that makes you want to pop open a social network or similar, open your to-do list instead. Start out by just scrolling through it, then maybe go do whatever you were going to do. Ideally you'll have some small, easily completable tasks on the list along with the larger, more annoying ones. If so, maybe sometimes you'll spot one you can get done really fast. Make sure you check off the item when you're done. Do it enough and the goal is to rewire your reward drive towards productivity, eventually building up a chemical response to checking off items that's greater than dipping into the infostream.
5
afarrell 2 days ago 1 reply      
A far more effective method in my experience has been to focus not on the end product, about which one has a feeling of dread, but on the process. Frame the task as "spend N minutes doing X."

Of course, there is the possibility that you start doing X and just find you don't know what to do with yourself in those minutes. That's useful information! You've just discovered that you don't know enough about the task to get started. Now your task is to write a coherent request for help/clarification.

6
jcoffland 2 days ago 1 reply      
Better advice is to allow yourself to do other things sometimes. We all have this idea of what we should be doing, usually work-related, and when we are not doing it we feel guilty. It took me a long time to accept that it was OK to sometimes go off on a tangent to satisfy my own curiosity, and that not only did I not need to feel guilty about it, but it was actually good for me mentally and for my career, because I learned new and valuable skills and avoided burnout.
7
golfer 2 days ago 3 replies      
Hard to tell (on my mobile screen anyway), but this was written by John Perry, philosophy professor at Stanford [1]. I've been a huge fan of his ideas on this for ~20 years. He has a number of other light essays. His website appears down at the moment though.

[1] https://en.m.wikipedia.org/wiki/John_Perry_(philosopher)

8
thomasahle 2 days ago 1 reply      
> Procrastinators often follow exactly the wrong tack. They try to minimize their commitments, assuming that if they have only a few things to do, they will quit procrastinating and get them done. But this goes contrary to the basic nature of the procrastinator and destroys his most important source of motivation. The few tasks on his list will be by definition the most important, and the only way to avoid doing them will be to do nothing. This is a way to become a couch potato, not an effective human being.
9
schlowmo 2 days ago 0 replies      
> "The key idea is that procrastinating does not mean doing absolutely nothing."

I have felt that way ever since I learned the word "procrastination". When I talk about procrastination with other people I propose exactly this definition: procrastination is a way to get stuff done, only it's not the stuff with the closest deadline.

> Indeed, the procrastinator can even acquire, as I have, a reputation for getting a lot done.

This is why I felt the idea of the "instant gratification monkey"[0] doesn't fit my definition of procrastination so well. It's not just a pet in my head; there are also friends, flatmates and co-workers for whom I do favors while I'm procrastinating.

Anecdotal example: when something's broken in our flat, my flatmates ask me whether I have much work to do, because that will be the time when I fix the broken stuff instead of doing the actual (paid) work. There's also a joke about me that if I ever stop doing (paid) work, the whole house will go down because I will have less motivation to fix things.

But there's also a dark side to this kind of behaviour: when I call it a day and review the things I have achieved that day, all the things I got done can easily drown in the sea of things which I have not, but which were at the top of the to-do list. Sometimes this is the moment where the "panic monster"[0] sees its chance.

And yes, even reading article about procrastination is still procrastination in the sense of my proposed definition.

[0] http://waitbutwhy.com/2013/10/why-procrastinators-procrastin...

10
xyzzy4 2 days ago 0 replies      
Procrastination is often caused by wanting to do something but not knowing exactly what to do. The solution is to cut it up into bite-sized pieces and start doing them one at a time.
11
EnFinlay 2 days ago 1 reply      
Second time this has been posted. It's a cute idea but doesn't solve the underlying problem that most people need to simplify and give themselves time/permission to not be "succeeding" at all moments.
12
buzzybee 2 days ago 0 replies      
I use a to-do list ordered by perceived energy and alternate between low and high energy tasks over the course of the day. On most days this means that I do a lot of low energy things and few or no high energy things, but as with the structured procrastination approach, I am getting a lot done, the only difference being that I'm not couching it in terms of ineptitude and avoidance.

Edit: And I also had a huge issue before with concerning myself about the "right time" to do a thing. The right time is now, when I have the energy and there are appropriate external conditions (time of day, weather).

13
urahara 2 days ago 0 replies      
The only thing that helps me be more productive is allowing myself to procrastinate as long as I want and do whatever I want, because any other scenario leaves me caught in an endless "try to force myself - get nervous and lose self-respect because it didn't work - hate all work on earth forever" cycle. If I allow myself to do whatever I want and procrastinate as long as I want, I simply get bored soon and return to work. Or find better ideas for what to do next.
14
Michielvv 2 days ago 0 replies      
Although I can see the reasoning, this absolutely does not work for me. The times procrastination bothers me are exactly those cases: when there is something important but not exactly specified at the top of the list. I'll feel guilty about not working on it and then end up in a sort of limbo between the task I could be doing and the 'important' one.

What helps for me is exactly the opposite: deciding that this important task does not have a clear path forward, and therefore going and writing down each and every question I have about it and need answered before moving forward, or explicitly deciding I don't have the proper energy/focus for it at that time and moving on to something easier.

15
codingdave 2 days ago 0 replies      
Slightly different approach -- I put my big projects on top of my list, and smaller bites of those projects below. The smaller bites are more approachable, and I tend to want to tackle them quicker. And sometimes I surprise myself, realizing a project IS done as I tick off the last little bite and find there isn't another one to start.
16
asimjalis 2 days ago 1 reply      
I find it even more effective to pretend I have already completed the task and then bask in the gratification of being done.
17
closed 2 days ago 1 reply      
I used to try stacking the deck in the way this article mentions, until I tried Adderall and realized that it made this kind of task juggling totally unnecessary.

It was really bizarre to experience. However, to be honest, I don't need laser focus most of the time, and I like the sort of ambling approach he discusses. My strategy for the next few months is going to be using it one day a week, for sweeping up the boring things (that otherwise haunt my waking life).

18
MetallicCloud 2 days ago 2 replies      
This seems a lot like the trick a lot of people I know use. They set their alarm clock 10 minutes fast, so in theory they will assume they are running late and get moving in the morning.

I honestly don't know how people think this will work. You know you changed the time, so you quickly just adapt to the new 'running late' time.

19
proee 2 days ago 0 replies      
My most innovative times are during procrastination. My brain subconsciously looks for something else to do, and my very best ideas come at this time.

When I'm razor focused on the task at hand (i.e. NOT procrastination), there's no "creative" freedom to capture a tiger by his tail and follow him wherever that may lead.

20
daxelrod 2 days ago 0 replies      
When I tried this technique, all of the self-deception stressed me out to the point where I got nothing done.

I feel like it probably works better for people only balancing a few tasks that aren't interrelated.

21
ccvannorman 2 days ago 2 replies      
Except for the super annoying blue bar popping in and out of this horrible website I'll never visit again, great article!

I have found myself inadvertently taking advantage of my procrastination in this way before, and it's useful to codify it in the language this article uses.

EDIT: The horrible website is one I have respect for, archive.org, which didn't use to have this eye-gouging UX. I'll send them a friendly feedback email about it.

22
jcoffland 2 days ago 0 replies      
This fits in with games like setting your clock forward 5 mins to trick yourself into being on time. It's a slippery slope.
23
l1feh4ck 2 days ago 0 replies      
Imagine you are procrastinating in a structured way. Everything you do will go through this algorithm before you even do it, so the combined effect of all this might take you in a new direction in life which you did not intend to be in.
24
stenlee 2 days ago 0 replies      
"Good and Bad Procrastination" (Dec 2005)http://www.paulgraham.com/procrastination.html
25
indubitably 1 day ago 0 replies      
Apparently nothing was ever more important than renewing the domain
26
marcosscriven 2 days ago 0 replies      
Oddly, my mobile provider Three (UK) blocks this, claiming it's 'adult' content.
27
notlikeme 2 days ago 1 reply      
The only thing I feel about procrastination: the problem exists only when it receives your attention. Not hungry? Keep it simple: don't eat.
28
youare123 2 days ago 2 replies      
It seems the book The Art of Procrastination by Perry is about this kind of thing. Has anyone read that book?
29
porter 2 days ago 0 replies      
Spot on. How else are we all supposed to rack up HN Karma points?
30
mamarjan 2 days ago 0 replies      
Holy shit! This is me. Now everything makes sense.
31
chris0x00 2 days ago 0 replies      
This looks like it might be in line with a philosophy that I've been developing independently. I should totally read this article at some point.
23
A Low-Cost Solution to Traffic governing.com
286 points by jseliger  2 days ago   281 comments top 37
1
aetherson 2 days ago 9 replies      
I'd like to see a little more from new urbanists than for them to endlessly restate their thesis. Yes, okay, we've now seen the 1,000th reiteration of the idea that dense mixed use residential/commercial developments have a number of advantages.

How do we actually get there? Is "just" changing zoning actually enough? Is there a case study? Over what time-frame? With what downsides?

NIMBYism isn't a magical spell cast by Satan: it's an organic outgrowth of people's incentives. What is the way around it? I don't believe that the 1,001st reiteration of the advantages of mixed use developments is the answer. What kind of compromises work to keep NIMBYism from obstructing all of these developments?

2
ryanobjc 2 days ago 7 replies      
The problem is zoning. If zoning didn't outlaw the type of cities they are describing, then perhaps it would happen.

This is why people are now becoming skeptical of new construction and new neighborhoods in the Bay Area. For a San Francisco example, check out the new Mission Bay developments. The area still feels dangerous and empty. Not enough realistic businesses - there is no legitimate reason to be on the street, other than wanting to 'hang out' in public spaces.

Once Mission Bay becomes a nightlife draw, with a mix of uses - like the Castro or Polk for example - then it will be an example of something done well. Until then, nope.

3
brudgers 2 days ago 1 reply      
'Building Cities' is a non-trivial activity that occurs on a time scale such that by the time it's substantially implemented driving won't mean what it means today. Suburbia is not so much a response to poor planning as a reaction to the mechanization of agriculture that sparked a demographic shift from rural to 'urban' living.

Mechanization meant that a minimal family farm became approximately an order of magnitude larger (from a 1/4 section of 160 acres to a couple of sections and >1000 acres). Along with all those farmers, the shopkeepers had to find someplace in the city too. Automobiles encouraged the migration by making it easier to relocate off the farm.

Cities had not planned for that influx. Or for cars. Moreover, cities were increasingly discouraging tenements...with sound scientific reasoning. The suburbs were about the only quick fix. Cities take a long time and a lot of money to build. It also costs more in political capital and financial capital than building roads...multiple jurisdictions will float bonds for transportation infrastructure versus multiple private interests that must agree for many modestly scaled real-estate development projects...https://nypost.com/2014/09/19/nyc-church-bags-71m-for-air-ov...

In the past couple of years, I've started thinking about real-estate in terms of monopolies. Locations are not fungible, and control of a parcel is an absolute but localized monopoly. What suburbia does is disrupt (or maybe bypass, due to the locality) entrenched monopolies. Forty minute commutes are somewhat fungible. Boxes made of ticky-tacky are also somewhat fungible. Chain retail is very much fungible.

4
aidos 2 days ago 3 replies      
The simplest hack for the traffic issue I've heard is to just change work schedules. Either run cities in 2 general shifts, or do several days of longer hours and then take a day off.

Certainly in London there seems to be a shift towards working from home one day a week. It definitely feels like there are more working parents doing flexible hours out of necessity, but there's a fair amount of bias in that I've seen a lot more of that struggle from living it myself for the last 5 years.

I take the train from a commuter area and there's no real financial incentive to do 4 days commuting with 1 day off. Which I find sad - I think train lines should _have_ to include discounted 4-day-per-week season tickets. Then again, I'm in a Southern trains area so the trains aren't running half the time anyway....

5
justinzollars 2 days ago 5 replies      
I will never live further than 10 minutes from work again. Traffic is such a waste of life.
6
tomcam 2 days ago 2 replies      
Low cost is not defined in this article. Absolutely no strategy is given other than "build new walkable cities someplace else", which seems hard to justify as low cost to me.
7
greggman 2 days ago 0 replies      
The average commute in Tokyo is 80 minutes on an extremely packed and uncomfortable train with lots of sick and often smelly people. Urban density is not in and of itself a solution to traffic.
8
sh87 2 days ago 1 reply      
I hear so much about traffic and congestion, its ill effects, disadvantages, and just rants of frustration. I face and feel it every day. So why is it not solved already?

Here's why: 1. traffic affects those who cannot/will not make a significant difference to the problem, and 2. the ones who can make a difference don't face it / don't consider it a top-tier problem / can pay for a way around it (live closer, don't have to drive the same route each day).

You also won't hear any political/marketing campaigns about reducing traffic and congestion unless public transport becomes a private thing. I believe there's tons of money to be made here, but the initial funding required is astronomical, not to mention the deep connections in the public sector (licensing) required to even get this off the ground. I can imagine a decentralized mechanism to do this, but there are just too many failure points in any strategy that I can think of. Not a problem that a bunch of kids could start solving in their garage, you see.

I don't see this getting fixed anytime soon anywhere.

9
DubiousPusher 2 days ago 1 reply      
I think this article misunderstands a bit about how cities grow. Any place that's now high density in a city used to look something like a suburb. Especially in the US, every place that has a recently built 5 story midrise building probably used to be 1/4-1/3 acre lots with 2,000sqft houses on them.

Doesn't sound like a big deal, but consider that land is cheap; land with buildings is not. You can find an acre around where I live for 250,000-350,000 dollars, but if you want to buy four neighboring houses and build density, that's going to start at 2.4 million.

And of course, you have to pay for each house what someone looking to live in it would be willing to pay. Density is only achieved when the price of density becomes worth it. If you're building a city from scratch, density is cheap, but very few cities get built from scratch.

10
jgord 2 days ago 1 reply      
The _real_ problem is psychological - "managers" feel the need to control "workers" by physical monitoring.

I suspect 80% of all office work could be done remotely.

We need a cute psychological trick like "daylight saving time", to give people permission to break convention and do things like work remotely and work in offset shifts [ so as to spread out rush hour peaks ].

11
mason240 2 days ago 6 replies      
People want the personal freedom of driving cars, they want yards, and they want room. They don't want to be packed into stacked boxes.
12
fangsout 2 days ago 3 replies      
I live in Austin, and the traffic is ridiculous. Just this week a co-worker quit because he can't stand the traffic from Round Rock to Downtown.
13
nwah1 2 days ago 1 reply      
Tax policy is key here. The most effective approach is to reduce underdevelopment of prime land, and this can be achieved through shifting taxes away from improvements and onto land value. See: Land Value Tax.
14
jandrese 2 days ago 5 replies      
I've got an incredibly cheap solution to your problem!

Step 1. Tear down your city.
Step 2. Rebuild a completely new city in its place.

How expensive could it be?

16
rojobuffalo 2 days ago 0 replies      
Relevant TED talk "4 ways to make a city more walkable": https://www.ted.com/talks/jeff_speck_4_ways_to_make_a_city_m...
17
tromp 2 days ago 0 replies      
This site has some interesting proposals:

http://www.carfree.com/

http://www.carfree.com/intro_cfc.html

18
austinjp 2 days ago 4 replies      
US-centric commute conversations always seem to revolve around the primacy of the car. So, USA people reading this... Where are you, and could you commute by cycle or public transport instead of by car? What prevents you doing this?
19
transfire 2 days ago 0 replies      
Government needs to reduce property taxes and find ways to lower the cost of constructing mixed-use towers. While they don't need to go quite this far, something like https://en.wikipedia.org/wiki/Shimizu_Mega-City_Pyramid is the direction they need to move in.
20
dangjc 2 days ago 1 reply      
I'm very much looking forward to self-driving cars saving us. It's hard for public transit to cover enough of a city, but we can easily build trunk lines or commuter lines and have self-driving cars carry people the last mile. Public transit could then focus its limited dollars on rail for the most heavily trafficked corridors. No more slow and infrequent public buses. This would also encourage density around stations, while allowing those who want their back yards to live further out.
21
Shivetya 2 days ago 0 replies      
Well, one thing people always love to overlook: married couples just might not have employers with sites near each other, employers can also move, and you can change jobs.

Most people just cannot pick up and move to make it easier to get there. All the solutions usually involve telling people where to live and how, which is completely the opposite of what our society wants.

I am still waiting for a viable arcology to be built; it might take a big religious group to do it, but I doubt technologists ever will, except off planet.

22
tabeth 2 days ago 1 reply      
Isn't the low-cost solution just to ban cars or tax them ridiculously and install a network of buses instead? Cars have terrible density, in theory and in practice.
23
kashkhan 2 days ago 2 replies      
Rebuilding cities with new housing is the opposite of low cost.
24
jsilence 2 days ago 0 replies      
The book "A pattern language" by Christopher Alexander might give some good ideas to achieve this.
25
keepkalm 2 days ago 0 replies      
Really need to talk about why people choose to live where they do, and usually the answer is school districts. It's difficult to say exactly what kind of living environment families in the USA would choose if public schools in urban areas were better choices compared to suburban schools.
26
keyle 2 days ago 0 replies      
This article, all respect to the writer, brings no solution whatsoever other than cramming more people into smaller places.

Instead we should be talking about effective decentralisation of work places, with more satellite offices and/or effective tele-presence and remote work.

27
ensiferum 2 days ago 0 replies      
Here in Europe we have this amazing thing called public transportation, with dense, high-capacity, high-frequency trains, trams, buses and metros. Works great!
28
edblarney 2 days ago 0 replies      
Yeah, it's called 'Europe' or 'Asia'.

Cities built before cars.

The 1970's idealists in 'Urban Planning' are the ones who created our 'new urban utopias' built around cars ...

29
njharman 2 days ago 0 replies      
TIL that building cities is cheaper than building roads!
30
mentos 2 days ago 0 replies      
I think the best solution to traffic is remote working!
31
visarga 2 days ago 0 replies      
The "Low-Cost Solution" is merely "building cities". Well, that was cheap!
32
BoysenberryPi 2 days ago 0 replies      
Places that don't require driving more often than not have high cost of living.
33
randyrand 2 days ago 0 replies      
"Low cost" means building entire cities from scratch. hmmmm.
34
agumonkey 2 days ago 0 replies      
Makes me think we should design cities as kd-tree networks.
35
mac01021 2 days ago 0 replies      
Building cities doesn't sound low cost to me.
36
gonzo 2 days ago 1 reply      
> Metro Austin has 2 million people. All of them seem to be driving on Interstate 35 all the time.

Not really, no.

Author is a professor at Rice. In Houston.

37
known 2 days ago 1 reply      
One SSN = One Car

This will reduce air pollution.

24
Our long term plan to make GitLab as fast as possible with Vue and Webpack gitlab.com
373 points by miqkt  6 days ago   285 comments top 28
1
fleshweasel 6 days ago 10 replies      
One of the biggest reasons I favor React is that it's much easier to add a templating language to a programming language (i.e. JSX) than the other way around. Every construct for making decisions based on your data, traversing your data, etc. is more cumbersome and harder to validate in handlebars or whatever identical looking templating language the community came up with this week.

I also am strongly against string references to model properties in your template. Again, it's much better to use tools that provide some static validation of what you're doing.

Give me React with TypeScript to help me make sure I'm passing around what I said I'm expecting to receive at each point as features are changed and added, and I'll be in business.

Honestly, I use React in spite of my opinion of Facebook.
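
To make the static-validation point concrete, here is a minimal sketch of the kind of check being described; the component and prop names are invented for illustration, not taken from any real codebase:

  import * as React from 'react';

  // Props are declared once; the compiler enforces them at every call site.
  interface BadgeProps {
    username: string;
    commitCount: number;
  }

  function Badge({ username, commitCount }: BadgeProps) {
    // Plain JS expressions instead of a separate template mini-language:
    return <span>{username} ({commitCount} commits)</span>;
  }

  const ok = <Badge username="alice" commitCount={42} />;
  // Caught at compile time, not in production:
  // const bad = <Badge username="alice" commitCount="42" />;

A handlebars-style template would accept the bad call silently and fail, if at all, at runtime.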

2
iamleppert 5 days ago 4 replies      
You want to make your site fast?

Generate the markup on the server and send it down to the client! It's post-modern web development. Back to black.

You don't need 100 KB of code to spit down a table of data or show someone a directory listing of their github project. You don't even need an SPA, bunny.

And for goodness sake, when you do need to do anything in javascript, you can use document.createElement and document.createDocumentFragment. These are perfect 1:1 browser APIs that allow you to do everything you've ever wanted, there's no magic, they call directly into the browser engine to give you what you need.

If you want to increase performance, start first by measuring everything. Time to first byte. Time to DOMContentLoaded. The page onload event. window.performance timings; do real user monitoring, not TODO app benchmarks on the latest framework flavor of the week.

The entire web community needs a healthy dose of pragmatism. But it's okay, it gives me extra work. I really enjoy doing freelance performance work and telling everyone which code/library/framework is to blame for performance issues and rewriting everything so simply. Show people what works and you'll find they shut up real quick.
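
As a sketch of the vanilla approach described above (the element id and file names are made up; the APIs are standard DOM and Navigation Timing):

  // Build a directory listing off-document, then attach it in one operation.
  const names = ['README.md', 'LICENSE', 'src'];
  const fragment = document.createDocumentFragment();

  for (const name of names) {
    const li = document.createElement('li');
    li.textContent = name;
    fragment.appendChild(li); // still detached: no layout work yet
  }

  // One insertion into the live DOM, one reflow.
  document.getElementById('listing')!.appendChild(fragment);

  // Real-user numbers straight from the browser, e.g. time to first byte:
  const [nav] = performance.getEntriesByType('navigation') as PerformanceNavigationTiming[];
  console.log('TTFB (ms):', nav.responseStart - nav.requestStart);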

3
jorblumesea 6 days ago 5 replies      
I really think for most cases you only need a view library instead of a heavy application framework. There are only a few cases there something as heavy as Angular or Ember is justified and in many scenarios a thinner view layer like Vue, React, Inferno etc is much better suited. Most of the web is simple enough to not need fancy http features or complex routing support. Everyone rushed to Angular without considering if they needed such a heavy library. How Angular even runs on mobile is anyone's guess.
4
OJFord 6 days ago 2 replies      
I'd only heard of Vue as a name before this, but I followed the link to the documentation, which looks fantastic:

 <div id="app"> {{ message }} </div> var app = new Vue({ el: '#app', data: { message: 'Hello Vue!' } })
> Hello Vue!

> This looks pretty similar to just rendering a string template, but Vue has done a lot of work under the hood. The data and the DOM are now linked, and everything is now reactive. How do we know? Just open your browsers JavaScript console (right now, on this page) and set app.message to a different value.

I guess that didn't take any extra work to setup, since it's a fair assumption the Vue docs are rendered with Vue (!) - but a really easy yet nicely motivating introduction.

Obviously this isn't too tricky with 'vanilla' JS, but there's certainly more ceremony involved, and I'm sure the templates can be more complex such that the JS/Vue contrast would be much greater.

5
merb 6 days ago 6 replies      
Sadly I think the problem of GitLab's slowness is not the UI framework :( We are 3-5 users on GitLab CE, it uses 3GB of memory + 4 CPU cores (vCPUs from Xen), and it still feels slow. Even big Java applications use less memory for that number of users.
6
jetter 6 days ago 1 reply      
Vue.js is just a better option for every-day development in smaller and mid-sized teams: it gives more freedom working with arbitrary HTML, which is huge, and it makes for an easy start - you don't need a compiler to use Vue across your legacy codebase. React is a good thing if you are a hard-core fulltime frontend dev in a big team, I guess. That's why the potential of Vue.js popularity is ~25-30% of jQuery's worldwide usage, while React will probably get 5-10% at most - that's just my impression after using both React and Vue. http://pixeljets.com/blog/why-we-chose-vuejs-over-react
7
rkwasny 6 days ago 2 replies      
Something went terribly wrong in our field ....

"On GitLab, our pages would load an average of 20kb on each page load versus the full JavaScript file size of 800kb+."

What exactly takes 800kb? I don't see a 3d animation/game on every GitLab page...

IMHO the solution to all this craziness is to just generate a small, mostly-static page quickly and not have 200 onLoad() functions.
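
For what it's worth, the 20kb-per-page figure in the quote comes from splitting the bundle so each page pulls only its own chunk. A minimal sketch of how that can look with webpack's dynamic import() - the page name and module path here are invented, not GitLab's actual code:

  // The data attribute and module path are hypothetical; the mechanism is
  // webpack's dynamic import(), which emits each target as its own chunk.
  const page = document.body.dataset.page; // e.g. rendered server-side per page

  if (page === 'merge-requests') {
    // Fetched over the network only when this page is actually visited.
    import(/* webpackChunkName: "merge-requests" */ './pages/merge_requests')
      .then(({ init }) => init());
  }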

8
dntrkv 6 days ago 10 replies      
Can someone explain why someone would choose Vue over React (or one of the clones)? When I looked at the docs for Vue it reminded me of my Backbone days.
9
jcoffland 6 days ago 4 replies      
It's good to see vue.js getting some love. I believe it would be the preferred Web framework these days if it had backing from FB like React does. Too many people fall into the trap of believing a tech is the best just because some big Corp sponsors it. I fell for Angular once for the same reason.
10
chiliap2 6 days ago 5 replies      
> For example, the profile page could potentially be very light, and there would be no reason why, if someone is linked directly to the profile page, it should load every single piece of JavaScript in our project.

There are plenty of ways to do that with single-page apps; it's not a great argument against all single-page apps, just poorly designed ones.
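
One such way, sketched below with Vue Router (the component paths are invented): declare each route's component as a dynamic import, and the SPA downloads a page's chunk only on first navigation to it.

  import Vue from 'vue';
  import VueRouter from 'vue-router';

  Vue.use(VueRouter);

  const router = new VueRouter({
    routes: [
      // Each factory returns a Promise; webpack splits these into
      // separate chunks that load on first visit to the route.
      { path: '/', component: () => import('./pages/Home.vue') },
      { path: '/profile/:user', component: () => import('./pages/Profile.vue') },
    ],
  });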

11
calcsam 6 days ago 3 replies      
There's an entire cottage industry of 3-to-5-year-old Series-B-ish startups porting Backbone/jQuery apps over to more modern frameworks. We are moving ours over to React at PlanGrid (mostly done), and Gusto is mostly done with their migration as well. Would be interesting to figure out how many startups are in this category.
12
educar 6 days ago 2 replies      
Sorry to be negative, but GitLab's performance is embarrassing :/ It is so slow and it's not clear why. Is this because of Rails? It just seems very poorly engineered.
13
craigcabrey 6 days ago 0 replies      
Misleading title, from the article:

> We are not rewriting GitLab's frontend entirely in Vue.

14
ZenoArrow 6 days ago 4 replies      
I'm not aware of the performance bottlenecks with GitLab, but are there any plans to speed up the backend as well, such as moving to a faster Ruby implementation?
15
Tade0 6 days ago 0 replies      
I'm happy to see Vue in a project that I recognize.

To all the people arguing about JSX: Vue 2 supports it. http://vuejs.org/v2/guide/syntax.html#ad

16
rubber_duck 6 days ago 1 reply      
No mention of TypeScript - I wonder why ? It's such a powerful tool when your code base scales.
17
megawatthours 6 days ago 1 reply      
In webpack.config.js

> if (IS_PRODUCTION) {
>   config.devtool = 'source-map';

Ouch! This will make for a huge bundle. See https://webpack.github.io/docs/configuration.html#devtool

Merge request here: https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/9028
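
The usual shape of the fix - an assumption on my part, not necessarily what that merge request does - is to keep a cheap devtool in development and emit production maps as separate, unreferenced files (or none at all):

  const IS_PRODUCTION = process.env.NODE_ENV === 'production';

  module.exports = {
    // ...entry, output, loaders...
    devtool: IS_PRODUCTION
      ? 'hidden-source-map'             // full maps as separate files, no reference comment
      : 'cheap-module-eval-source-map', // fast rebuilds in development
  };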

18
creo 6 days ago 1 reply      
GitLab rushing out good news about the frontend while the backend crashes is a good short-term PR support plan.
19
wikyd 6 days ago 2 replies      
Are you using Webpack directly with Rails? I'm curious how your development environment works.
20
jaequery 5 days ago 0 replies      
Sounds like he never gave Vue a try, because that's also how I felt before trying Vue.js.

But once you go Vue, you will be very happy you did. I see a lot of ex-React devs who tell similar stories, myself included.

With the sudden rise of Vue and just how fast it is gaining traction, given the trends and lifecycle of frameworks, I wouldn't be surprised if Vue overtakes React as the de facto frontend framework soon. It really makes development much more pleasant and faster.

21
benologist 5 days ago 0 replies      
Wouldn't ignoring the code have the same outcome if you upgrade servers next year? Client devices will become faster in that timeframe too.
22
kingosticks 6 days ago 0 replies      
> as our current development takes loads of time to refresh the page while developing GitLab

Really? What's 'loads of time' here?

23
andrew_wc_brown 5 days ago 0 replies      
VueJS is a lightweight version of Angular but has the same pitfalls when it comes to being isomorphic.

React is an engineer's overkill solution. JSX is bluck.

I use mithril.

24
omouse 6 days ago 0 replies      
Webpack is a mistake imo. Accidental complexity everywhere.
25
camdenlock 6 days ago 5 replies      
The escalating abuse of the word "awesome" is like the overuse of dynamic compression in music. When every expression is fever-pitched, there's little room for interesting expression.

I also notice that the most shameless abuse of "awesome" seems to come from public-facing software developers, e.g. community managers and the like. It's become some kind of advertisement for a bland, safe, comfortable community where nothing is risked, and competition is frowned upon.

All right, I may have gone a little too wide with that, but AUGH, we need to try way harder to increase our expressive range if we're going to be writing articles that aren't exhausting to read.

26
richardwhiuk 6 days ago 4 replies      
27
rkunnamp 6 days ago 2 replies      
We at http://www.reportdash.com use Backbone + Backbone Layout Manager + Jade.

Every time I see a post like this, I feel sad for being less cool. I try to learn a bit of the mentioned cool framework. Then I realise how awesome my current setup is.

28
40_pending 6 days ago 0 replies      
Vue really has the best of both worlds. You can use it like old-style Angular or take the component approach. You can just include the 71kb minified script and take off, or you can use a build system with components - take a look at vue-cli: https://github.com/vuejs/vue-cli
25
Redash Connect to any data source, easily visualize and share your data github.com
459 points by handpickednames  5 days ago   88 comments top 30
1
arikfr 5 days ago 8 replies      
Hi! I'm the author of Redash :) Nice to see Redash at the top of HN.

I'll be happy to answer any questions about Redash, open source, and my journey to make Redash a self-sustaining project.

For any questions that don't fit on HN, feel free to reach me at arik at redash.io.

2
xtracto 5 days ago 2 replies      
I tried Redash some time ago while I was looking for a very simple BI-like tool that our non-technical or low-technical colleagues could use (people on the Client Satisfaction team).

I settled for Metabase (http://www.metabase.com/, https://github.com/metabase/metabase), mainly for the ease of use and installation (a simple java -jar metabase.jar does the trick).

Redash was a close second (one advantage at that time was the ability to have users that could only read queries), but after fighting with it for a while in order to install it, I tested Metabase and haven't looked back since.

3
Signez 5 days ago 3 replies      
Looks great! What about the open-sourced Superset[1] from Airbnb? Both look great, but Superset seems to have a superset of Redash's features (bad pun intended).

[1]: https://github.com/airbnb/superset

4
davb 5 days ago 1 reply      
Redash is fantastic. Things like scheduled queries just work, and the AMI is a great way to get up and running quickly. The docs are decent and the upgrade script works well. I also really appreciated the (optional) Login with Google feature and the ability to limit it to certain email domains (we use Google Apps, so it worked really well). We've been trialling it casually in our engineering and data science teams.

However, the latest version is a little rough around the edges. Bugs like this one https://github.com/getredash/redash/issues/1520 (can't delete users, even at the CLI) and this one https://trello.com/c/6siqvcxh/39-admin-should-be-able-to-dis... (there's no option to delete a user in the admin UI, requested since September 2015) make it fall just short of production-ready.

5
estsauver 5 days ago 1 reply      
We use Redash (via https://redash.io/) for a truly hideous amount of stuff at our startup. We use it for almost every operations dashboard, and we've also prototyped out a task allocation and field agent management tool using it and Zapier.

They're really great, I'd highly recommend them to anyone.

6
vitorbaptistaa 5 days ago 1 reply      
We've been using it on a project to aggregate clinical trial data from many different sources (https://opentrials.net) and it has been great!

It allows researchers (not necessarily devs, usually medical doctors) to peek at our raw data, and it's a great excuse for them to learn at least the basics of SQL. The response has been great.

We also use it to do some small data checks on the data quality, with alerts sent to our Slack.

Highly recommended.

7
vaidik 5 days ago 0 replies      
We have been using Redash at our company for almost a year now. Every single release just proves how promising the project is. You can make useful dashboards in minutes. Support for multiple databases is amazing. We are using it with multiple PostgreSQLs, Redshift, MongoDB and InfluxDB.

The most valuable feature is alerts though. I work at an ecommerce and operations-heavy company where we have tons of connected components. Where alerts come in really handy for us is that anyone in the organization can add quick alerts for proactive monitoring of events recorded in on-field ops and act when things go bad. This is almost like building a feature on one of the internal tools, just doing it yourself without any engineering support. This comes in really handy.

Kudos to the team! Looking forward to some more amazing stuff in Redash!

8
Maarius 5 days ago 1 reply      
For those of you looking for another fully hosted solution, check out https://www.cluvio.com (full disclosure: I am one of the founders).

Like redash, Cluvio allows you to run SQL queries against your database and quickly visualize results as beautiful, interactive dashboards, which can easily be shared within your company or externally.

We developed our own custom grammar on top of SQL, which makes writing time-range-related queries a lot easier and allows you to parametrize queries, which powers the dashboard interactivity.

We also allow you to run custom R scripts on top of the SQL results, have SQL Alerts that run on specified schedules, let you create SQL Snippets, and offer a free entry plan.

Currently supported datasources are Postgres, Redshift, MySQL, MariaDB and Amazon Aurora.

9
gk1 5 days ago 0 replies      
I'm a marketing consultant and Redash has worked its way into my preferred stack for analytics: Segment + Redshift + Redash.

It replaces event- and user-tracking tools like Mixpanel, Heap, Woopra, and others. Those are just UI layers on top of SQL queries. If you or your marketers know (or can learn) even basic SQL then Redash is what you want.

10
spapas82 5 days ago 2 replies      
Hello, it seems that there are no instructions for a step-by-step installation of Redash on your own server. The only thing I could find was info on how to install it on AWS or using Docker (https://redash.io/help-onpremise/setup/setting-up-redash-ins...). However, I want to install it normally, using my server's nginx / PostgreSQL / Python / supervisorctl etc. Can it be done? Can you provide some step-by-step instructions?

Thanks!

11
notdonspaulding 5 days ago 0 replies      
We're hosting on our own EC2 instance using the docker image. Setup was mostly smooth. We love the integration with Google Apps so our entire organization automatically has access.

We primarily use it to connect to Amazon's Redshift and pull some data out for quick visual analysis. It's a very good combo. We're at the early stages of using it but it seems like a very solid product.

Kudos to the dev!

12
huy 5 days ago 1 reply      
I really like Redash; it's one of the early tools that introduced this concept of turning SQL into charts to developers, and also taught developers to learn and write better SQL, altogether without any cost. I evaluated Redash at my past company back in 2013 (we were also using Tableau), but due to some of Redash's lacking features (no support for filters, lack of permission control, sporadic performance), we went and built something in-house with a similar approach (turn SQL into charts).

And inspired by the same path the Redash founder took, that internal project turned into a startup itself.

We're relatively new but getting good momentum. Some of our customers went with us after evaluating both. While we don't have a self-hosted open-source version, our pricing starts at $49/mo for up to 5 users (pretty affordable for startups IMHO).

You can check it out here: https://www.holistics.io

13
ing33k 5 days ago 0 replies      
Also check out Zeppelin :

"A web-based notebook that enables interactive data analytics. You can make beautiful data-driven, interactive and collaborative documents with SQL, Scala and more."

https://zeppelin.apache.org/

14
uberneo 4 days ago 0 replies      
Added the missing Teradata support as the source (kind of) - https://discuss.redash.io/t/teradata-data-source-support/83/...

Waiting to resolve a few more:

1) Functionality for CSV/XLS file import, like in Tableau. Some discussion at https://discuss.redash.io/t/support-csv-data-source/216/5

2) Functionality to select "ALL" in filters

3) Functionality to merge data from multiple data sources. I know it exists in the paid version, but I'm waiting for it to come to the open-source version.

15
CalRobert 5 days ago 1 reply      
Redash is handy, especially if you want to go from "I have a DB of events" to "here's some handy dashboards" in just a few minutes.

Its filtering options were a little limited, at least as of November (you can't inject anything into the "where" part of a query).

16
kornish 4 days ago 0 replies      
Great work! A quick question: what differentiates Redash from Chart.io, Periscope, or existing SQL-and-visualization BI tools?
17
kamranjon 5 days ago 1 reply      
I was working for a company that relied heavily on Wagon - and then it got bought by Box and they shut down the product. We looked for alternatives but nothing seemed to offer what Wagon had - this looks like it might be a good fit though. Excited to try it out.
18
j_s 5 days ago 3 replies      
Ask HN: Is there any open source BI tool that is always first on the list when the conversation finally gets to

"Oh, you're on the Windows/Microsoft stack? Then you're probably going to wind up using _______"

19
etatkomoona 5 days ago 2 replies      
Redash is great. We've tried several alternatives, including Metabase, and ended up with extensive Redash use. We use the self-hosted version for sharing data between the tech and business sides, for day-to-day monitoring and for ad-hoc visualizations, and are generally very happy with it. It's not perfect - there are some areas, like fine control over chart customization, query auto-updates (especially queries with parameters) and the ability to queue parallel queries to the same data source, which would be nice improvements - but these are nitpicks. Highly recommended.
20
lolive 5 days ago 1 reply      
Are some public datasets available directly as SQL endpoints? I know the Semantic Web provides some public datasets as SPARQL endpoints. But I have never heard of an equivalent for SQL.
21
_joel 5 days ago 1 reply      
Do any of the dashboards mentioned support exporting to a report?
22
actuator 4 days ago 0 replies      
Thanks, @arikfr, for the project.

We started using Redash about 2 years back. We use it for ad-hoc SQL to Redshift and MySQL, and most of our business-level KPI dashboards are there. One thing I would want to do is add some Metabase-like segmentation there (preferably one that works with joins), so that marketing/sales people can also actively query from it.

23
mmonihan 4 days ago 0 replies      
Has anyone prioritized adding the ability to modify the HTML of the report the way ModeAnalytics.com does?

I frequently want to map the data directly to highcharts.js when the UI can't get me to the result I want.

24
yalooze 5 days ago 0 replies      
25
fudged71 4 days ago 1 reply      
For data visualization I recently switched from SQL+Excel to Tableau, and I'm never going back.
26
ehfeng 4 days ago 0 replies      
We use Redash at Sentry and it's become critical for marketing, sales, even engineering.

Open source ftw!

27
overcast 5 days ago 1 reply      
How does the externally hosted version connect to databases?
28
Jonovono 4 days ago 0 replies      
After trying pretty much every BI tool we went with http://modeanalytics.com/ and have been pretty happy!
29
torkale 5 days ago 0 replies      
This is awesome!
30
tdelmas 5 days ago 0 replies      
I didn't find 'The Art Of Computer Programming' by Donald Knuth... Must be a mistake.
26
Software Engineering at Google arxiv.org
431 points by asnt  2 days ago   151 comments top 22
1
varelse 1 day ago 6 replies      
Not one word about what I consider the most toxic aspect of working at Google: blind allocation.

Unless one absolutely does not care what one wishes to work on, joining Google is throwing your future into the Hogwarts hat of an ill-defined cabal of billionaires to pick your job at Google. Sometimes that works out, but most of the time you are allocated to whatever mission-critical project is currently leaking buttcheeks.

In my brief time there, I was placed on a 7-person team that lost one person per month. That is the single worst retention rate I have ever seen. I left after 4 months for that and the realization that the powers-that-be were in denial about the coming impact of an emerging technology that they have since embraced a year or so after my departure.

That said, the SCCS and the build process were top-notch.

But there are good reasons why despite all the perks and smart people, Google's overall retention rate is barely a month longer than Amazon's.

http://www.slate.com/blogs/business_insider/2013/07/28/turno...

2
hueving 2 days ago 4 replies      
>Engineers are permitted to spend up to 20% of their time working on any project of their choice, without needing approval from their manager or anyone else.

From what I understand talking to current employees, this is now bullshit. You could spend 20% on other stuff, but the culture in many of the groups is such that you are putting your peer review at risk by doing so because of your reduced output.

Any current Googlers that spend 1 day of every week working on something completely unrelated to their main job want to comment?

3
benzewdu 2 days ago 3 replies      
Although there is a well-defined process for launch approvals, Google does not have a well-defined process for project approval or cancellation. Despite having been at Google for nearly 10 years, and now having become a manager myself, I still don't fully understand how such decisions are made.

The reason why I quit Google.

4
NumberSix 2 days ago 4 replies      
Google is highly non-representative of businesses and technology businesses in particular. It has a near monopoly on the search business and has enormous amounts of money from a single source -- advertising.

Google Fiscal Year 2015

Revenues: $74.54 Billion (source: Google)
Advertising Revenue: $67.39 Billion (source: https://www.statista.com/statistics/266249/advertising-reven...)
Profits: $23.4 Billion

Market Capitalization: $570 Billion

Google Not a Startup/Has Not Been a Startup for Many Years

Founded: September 4, 1998 (19 years ago)
IPO: August 19, 2004 (13 years ago)

Number of Employees (2015): 57,100
Revenues per Employee: $1.3 Million
Profits per Employee: $409,000

The issue is that Google has so much money and is so successful that it can do all sorts of things that are extremely inefficient, even very harmful and still do fine, unlike smaller companies or startups in particular.

For example the arxiv article states:

2.11. Frequent rewrites

Most software at Google gets rewritten every few years.

This may seem incredibly costly. Indeed, it does consume a large fraction of Google's resources.

Google has the money to do this. As the article argues, it may work for Google. On the other hand, Google has so much money and such a dominant market position, it probably can keep succeeding even if continual code rewriting is actively harmful to Google.

In orthodox software engineering theory, competent software engineers, let alone the best of the best that Google claims to hire, should write modular, highly reusable code that does not need to be rewritten.

Many rewrites and "refactorings" are justified by claiming the "legacy" code is "bad code" that is not reusable and must be rewritten by "real software engineers" to be reusable/maintainable/scalable, etc. One rewrite ought then to be enough.

Even highly successful businesses, for example $1 billion in revenues with $50 million in profits and 5000 employees (revenues per employee of $200,000), have nowhere near these vast resources -- either in total dollars or per employee or product unit shipped. Blindly copying highly expensive software development processes from Google or other super-unicorn companies like Apple or Facebook is likely a prescription for failure.

5
__strisk 2 days ago 6 replies      
Is the practice of shoving all disparate pieces of proprietary software (or individual projects) into the same repo a common occurrence? I have found that pulling unrelated changes just so that I can push my changes is an inconvenience. Furthermore, tracking the history of a particular project is confounded by unrelated commits. I am sure that their VCS (Piper?) makes this a feasible task, but for git, it seems like it would suck.

The article posted by kyrra mentions this.

Given the value gained from the existing tools Google has built and the many advantages of the monolithic codebase structure, it is clear that moving to more and smaller repositories would not make sense for Google's main repository. The alternative of moving to Git or any other DVCS that would require repository splitting is not compelling for Google.

It seems like they have just too much invested in this "shove it in the same repo" style. Or is this the more appropriate way to do things in a large organization?

6
hueving 2 days ago 2 replies      
Google has gotten so large so quickly in the past 3 years that I wonder how much damage has been done to their engineering culture. A lot of "less than stellar people" have joined in these recent years, according to several of my friends who work there (in infrastructure and some ML groups).

It seems the push to golang is entirely to sustain large projects with average engineers. Maybe somewhere high up they decided that it's better to just have a massive engineering workforce rather than only hiring top talent? At what point does brain drain start as the best people get sick of dealing with mediocrity?

7
jondubois 1 day ago 2 replies      
The single-repo is quite surprising. How do they manage to store that much data in a central place? I guess they're using some sort of distributed network file system - This seems overly complex though. It would be interesting to know if this was intentional (if there is a reason for this) or things just evolved like this out of habit.

I think that engineers in most large software companies don't actually accomplish much on a day-to-day basis. I'm saying this after having worked in both corporations and startups. Large companies are laughably inefficient - Engineers tend to focus all of their energy on very narrow (often tedious) problems at great depth.

These huge companies never try to rein in the complexity because they don't really need to - soon enough, the employees at these companies get used to the enormous complexity and start thinking that it's normal - and when they switch jobs to a different company, they contaminate other companies with that attitude. That's why I think startups will always have an advantage over corporations when it comes to productivity.

In these big companies, the most clever, complex solutions tend to win.

8
dj-wonk 1 day ago 1 reply      
I read most of the paper. For the most part, it struck a nice tone as being mostly descriptive and not too promotional. However, the final paragraph in the conclusion section differs:

> "For those in other organizations who are advocating for the use of a particular practice that happens to be described in this paper, perhaps it will help to say its good enough for Google.

In my opinion, this style of writing neither fits nor belongs.

I would leave that paragraph out. Instead, let's judge on the merits and applicability of an engineering practice based on thinking, reasoning, and experimentation.

That said, as I've read various comments about Google's processes, I'm struck by the cognitive dissonance. On one hand, I see bandwagoning; e.g. "monolithic source control is nuts; we don't do that; no one I know does that". There is also some appeal to authority; e.g. "well, Google is the best, they do X, so we should too." I'm glad to see different argumentation fallacies colliding here.

9
Upvoter33 1 day ago 1 reply      
So much snark and negativity on this thread, when the paper is just a factual description of a pretty impressive feat of software engineering (still way better than most companies I have seen the inside of).
10
anonsockpuppet 2 days ago 2 replies      
> hunger is never a reason to leave

But good luck finding a free restroom at 10am. Is this an issue at other companies?

11
aanm1988 2 days ago 9 replies      
They have a billion files in their repo; 9 million are source files.

What the heck are the other 991,000,000?

I skimmed this. Mostly just stuff any competent company would/should be doing. It's Google though, so they act like it's super awesome.

12
mlinksva 1 day ago 0 replies      
"Most software at Google gets rewritten every few years" at incredible expense. That sounds crazy, but the article claims it has benefits including keeping up with product requirements, eliminating complexity, and transferring ownership. Would be interesting to see some kind of metric indicating how much of an outlier Google really is here, and what measures it takes to make sure rewrites aren't worse (second system).
13
Chasmo 2 days ago 5 replies      
> Software engineers at Google are strongly encouraged to program in one of four officially-approved programming languages at Google: C++, Java, Python, or Go.

I wonder which of these languages they use to develop the Google front page or any other frontend when no JavaScript is allowed...

14
OneMoreGoogle 2 days ago 1 reply      
I spent ~18 months at Google, and one of the annoying aspects was the diversity of build systems. I built with Ninja, emerge, Blaze, and Android's Rube-Goldberg-shell-script system.
15
Matthias247 2 days ago 1 reply      
Interesting, compared to what's common in the automotive industry it doesn't even mention the terms "requirements", "specifications", "estimations", "project plan", "traceability", "UML", etc...
16
mck- 2 days ago 0 replies      
A lot of the things they had to build in-house for repo/build/test/deploy as described in chapter two, all of us are fortunate enough to get for (almost) free with all the tools these days.

It's a good time to be a founder :)

17
l1feh4ck 2 days ago 3 replies      
>>Engineers are permitted to spend up to 20% of their time working on any project of their choice, without needing approval from their manager or anyone else.

Can someone tell us more about what they did and what things are permitted (even though no approval is needed)?

18
wmu 20 hours ago 0 replies      
Things I'm envy for: 1) documenting changes, 2) compulsory code review, 3) post-failure reports. I wish my company introduced at least one of those.
19
supremesaboteur 2 days ago 1 reply      
> The maintenance of operational systems is done by software engineering teams, rather than traditional sysadmin types, but the hiring requirements for software engineering skills for the SRE are slightly lower than the requirements for the Software Engineering position.

but from

https://www.youtube.com/watch?v=H4vMcD7zKM0&feature=youtu.be...

"All of SREs have to pass a full software interview to get hired"

20
harryjo 2 days ago 3 replies      
Is section 3.1's "20% time" still true?
21
fenollp 2 days ago 0 replies      
> has released over 500,000 lines of open-source code

Interesting metric. I wonder why no "social" VCSs report that metric?

22
sjakobi 2 days ago 3 replies      
Why aren't OKR scores used as an input to performance appraisals?
27
Microsoft Allowed to Sue U.S. Government Over E-mail Surveillance bloomberg.com
344 points by pcs  2 days ago   37 comments top 9
1
KirinDave 2 days ago 1 reply      
Microsoft has an interesting stance here. As EU entities are major customers for both Windows and Azure, they're working quite hard to assure these parties that the US Government doesn't have undue influence over them.

They're adding secure links and solutions they cannot eavesdrop on to Azure, and getting pretty aggressive with pushing back against US surveillance.

The US benefits from EU privacy laws even without being there.

2
awinter-py 1 day ago 1 reply      
Their argument that cloud can't survive invasion of privacy is probably right, but ignoring government, cloud operators threaten the privacy of their own users.

G and MSFT both have stories from the early days when their employees went into cloud email accounts to check up on users. In G's case it was a rogue sysadmin stalking some high school kids; for MSFT it was looking at a journalist's Hotmail account to prosecute a leak.

The NSA has their own version of this, LOVEINT, where analysts stalk their significant others (or desireds or exes).

I can't think of any guarantee a company can provide to say this isn't happening, especially on social platforms. Crypto-based platforms might do it (see the Danish sugar beet auction, which uses a form of secure multiparty computation), but that's not MSFT's argument here. It's also unclear how crypto platforms affect the ability to roll out new features and debug.

3
drodgers 2 days ago 1 reply      
That the free speech arguments are going ahead is nice, but the judge also hammered another nail into the coffin of the 4th amendment:

Robart rejected the tech giant's argument that the so-called sneak-and-peek searches amount to an unlawful search and seizure of property. Former Attorney General Loretta Lynch had argued that federal law allows the Justice Department to obtain electronic communications without disclosure of a specific warrant if it would endanger an individual or an investigation.

4
portref 2 days ago 1 reply      
Dog and pony show from enthusiastic PRISM partners.

"We've changed, honest!"

5
CaptSpify 2 days ago 0 replies      
It's very rare that I say this, but go Microsoft!
6
Fej 1 day ago 0 replies      
We will have our Fourth Amendment. I'm actually not surprised it's Microsoft leading the charge here. After all, if they capitulate, their presence in the EU will be greatly diminished.
7
Zak 2 days ago 0 replies      
It seems to me that requiring fixed end dates for the gag orders would greatly mitigate the problem. It would be possible to extend them, individually, for cause.
8
Sami_Lehtinen 1 day ago 0 replies      
Some companies are smart and run EU & US operations completely separately. This is exactly why they're doing it.
9
ngold 2 days ago 0 replies      
I wonder how this will play out in other tech arenas.
28
Trump2cash A stock trading bot powered by Trump tweets github.com
371 points by laktak  3 days ago   154 comments top 30
1
djb_hackernews 3 days ago 4 replies      
>This bot watches Donald Trump's tweets and waits for him to mention any publicly traded companies.

Ok, simple enough, go on.

> When he does, it uses sentiment analysis to determine whether his opinions are positive or negative toward those companies.

Eh, sentiment analysis isn't perfect but it has come a long way, plus Trump does typically say what he means in simplistic language.

> The bot then automatically executes trades on the relevant stocks according to the expected market reaction.

Ah market psychology. Stop. Do not pass Go. Do not collect $200. This is the tricky bit that a toy project will never get right and turns your slick algorithmic trading project into a monkey and a dart board.

Though this is a sweet example of an API mashup.

2
karangoeluw 3 days ago 3 replies      
I actually did some basic analysis of this recently - can you really make money in the stock market by trading on Trump's tweets? https://medium.com/karan/can-you-really-make-money-when-real...

My answer was maybe, but I'd rather not put my money at the mercy of a lunatic.

3
dsacco 3 days ago 3 replies      
Very cool. Funds have been capitalizing on Trump's tweets since at least December (but probably November) of 2016: https://www.bloomberg.com/view/articles/2016-12-07/flash-cra...

I'd be interested in learning why TradeKing was used, as I haven't used it myself - was it for an especially solid API, or just because TradeKing is easy to get set up and doesn't require a minimum investment? If anyone wants to play around with this and has at least $10k to play with, Interactive Brokers will give you better fees.

OP, you might be interested in Quantopian's Zipline algorithmic trading library, which is also in Python: https://github.com/quantopian/zipline

Also, I feel like there should be a disclaimer that given how accessible this strategy is (both in required skill and resources) and how much attention the Trump tweets phenomenon has gotten, any alpha from this has been almost certainly lost to other firms. This is a cool project, but it's probably not actually an effective strategy anymore.

4
thedrake 3 days ago 3 replies      
Another version could be to capitalize on this:

"The efficient markets hypothesis may be "the best established fact in all of social sciences," but the best established fact in all of financial markets is that, when there is news about a big famous private company going public or being acquired, the shares of a tiny obscure public company with a similar name will shoot up. I don't know what that tells you about the efficient markets hypothesis, but it happened to Nestor, Inc., and to Tweeter Home Entertainment, and to Oculus VisionTech Inc., and now it has happened to SNAP Interactive Inc.:

In what is almost surely a case of mistaken identity, investors sent shares in a little known startup called SNAP Interactive Inc., ticker STVI, surging 164 percent in the four days since Snap Inc. filed for a $3 billion initial public offering. The $69 million SNAP Interactive makes mobile dating apps, while the IPO aspirant is the parent of the popular Snapchat photo-sharing app.

These stories are always less impressive when expressed in dollar terms than they are in percentages. In the four trading days since Snap Inc. filed its S-1, SNAP Interactive has traded 19,963 shares, worth less than $200,000, according to Bloomberg data. If you had a cunning plan to buy up SNAP shares and sell them for a quick profit when Snap filed to go public, it might have worked, but not in particularly huge size."

article here: https://www.bloomberg.com/view/articles/2017-02-09/quasi-ind...

5
chis 3 days ago 2 replies      
Pretty clever and effective message. I'd put money on this being shared by all my friends on Facebook in a few days.

Trump's last couple tweets have backfired and actually increased his targets' stock price though. You might have to flip the algorithm around if this continues.

6
captainmuon 3 days ago 1 reply      
Brilliant. I've been thinking about something like this for a while.

Why don't you take it to the next level? Provoke Trump on Twitter, associating yourself with a company. Then short that company's stock and wait until he lashes out against them. Profit!

7
butler14 3 days ago 0 replies      
An ad agency did this 3 weeks ago

https://www.t-3.com/works/the-trump-and-dump-bot/

8
nodesocket 3 days ago 4 replies      
Novel project, but not useful. Trump tweeted thanks to Intel on Feb 8th, which was a super positive tweet. Since then $INTC is down about 2.6%.
9
alex- 2 days ago 1 reply      
As the poster notes, doesn't this give Trump the power to manipulate markets with nothing more than a tweet?

You could argue he already has this power, but before, the message needed to resonate with a large collective of people. Any public figure (like Warren Buffett, etc.) can express an opinion that might affect the stock they are commenting on.

However, as this group of people automates trades on only the contents of his tweets (and not, say, the contents of the tweet as well as past company performance, current POTUS approval ratings, dividend plans, etc.), he gains a scary amount of control over the markets.

To put it another way: the only people who know what he will tweet earlier than these bots do are Trump and maybe a select group of people he tells beforehand.

10
jnaddef 3 days ago 1 reply      
If many people use this bot, it will make Trump's comments actually effective, which is a pretty bad idea imho.

What about a bot which would do just the opposite?

11
martin-adams 3 days ago 4 replies      
This could make quite a fun experiment to see if it works. Crowdsource an investment of something like $10,000 and let it run for a year. At the end, donate all proceeds to charity.
12
rthomas6 2 days ago 1 reply      
(I know this is just for fun, but...) I believe this won't work because of high-frequency trading. In the few seconds this high-latency Python script takes to get the tweet, analyze it, and execute a trade, high-speed bots will have already done something similar, moving the price to a level they decided was appropriate using sophisticated algorithms.

So basically, instead of this bot betting that Trump's tweets will change stock prices, you're actually betting on the HFT bots being wrong about how much Trump's tweets change stock prices.

13
swalsh 3 days ago 2 replies      
The fact that Trump has the vocabulary of an 8th grader makes this a lot easier.
14
mhuffman 3 days ago 1 reply      
I have been doing the opposite.

Whenever Trump signs an executive order or signals an intention, I presume it will either fail or end in bad results for the poor and middle class.

Then, working as if that were true, I see if there are any stocks or ETFs that would benefit from that and buy some.

I have been doing pretty good!

15
gliese1337 3 days ago 1 reply      

 This bot watches Donald Trump's tweets and waits for him to mention any publicly traded companies. When he does, it uses sentiment analysis to determine whether his opinions are positive or negative toward those companies. The bot then automatically executes trades on the relevant stocks according to the expected market reaction.
OK, so what's the expected market reaction? If Trump hates your company, does that make your stock go up, or down?

16
hmate9 3 days ago 1 reply      
I think Nordstrom stock went up after a negative tweet because it was seen as good publicity for the company. Trump didn't actually "threaten" to retaliate or anything, whereas for companies like Ford or GM he actually said he would impose a heavy "border tax" if they opened factories elsewhere.
17
gurgus 3 days ago 0 replies      
Cool project!

Makes me want to think of another fun way of crowdsourcing stock tips for fun.

Maybe something like TwitchPlaysWallStreet?

18
theocean154 3 days ago 0 replies      
I have one of these as well, but I would've died if it had been live trading during the Nordstrom incident. They've gone up quite a bit since then.
19
carlmcqueen 3 days ago 0 replies      
Regardless of whether this bot is complete, there are better options already mentioned, using options instead of short-term stock purchases.

Looks like deciding the sentiment of the tweet is still on the to-do list.

  def get_sentiment(self, text):
      """Extracts a sentiment score [-1, 1] from text."""
      # TODO: Determine sentiment targeted at the specific entity.
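
Purely as an illustration of what could fill in that TODO (not the project's actual approach), here is a crude keyword-lexicon scorer; the word lists are invented for the example:

  POSITIVE = {"great", "tremendous", "win", "thanks", "amazing"}
  NEGATIVE = {"bad", "terrible", "unfair", "failing", "sad"}

  def get_sentiment(text):
      """Crude sentiment score in [-1, 1] from keyword counts."""
      words = text.lower().split()
      pos = sum(word in POSITIVE for word in words)
      neg = sum(word in NEGATIVE for word in words)
      total = pos + neg
      return 0.0 if total == 0 else (pos - neg) / total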

20
makerofthings 3 days ago 1 reply      
I like this. I think something similar for Theresa May and the GBP:EUR exchange rate might be a good addition.
21
jagermo 3 days ago 0 replies      
Nordstrom went up after Trump's rant, didn't it? But all in all this looks like a fun thing to do.
22
chatwinra 3 days ago 1 reply      
You know at the end of Men in Black it zooms out and our galaxy is just one marble in a galactic game?

I feel like this game is just a small scale version of the one being played by a higher force by making Trump president.

23
Gupie 3 days ago 2 replies      
It would work better if you knew in advance what Trump is going to tweet - not possible, of course, unless you work for Twitter or for Trump?!
24
B1FF_PSUVM 3 days ago 1 reply      
> mention any publicly traded companies

How often did that happen in the past?

(Not paying attention, specialty of the house ;-)

25
tmalsburg2 3 days ago 2 replies      
This is fun. However, does this bot perform better than a random bot? And if yes, by how much?
26
DanBC 3 days ago 0 replies      
But see this, which says his tweets generally don't make much difference: https://www.ft.com/content/a962c1f8-ee44-11e6-930f-061b01e23...
27
paulpauper 3 days ago 0 replies      
The problem is hedge funds are already on his Twitter like white on rice. Maybe there is hope in looking for breaking news from less-followed accounts.
28
AltGr 3 days ago 0 replies      
I can't overstate how stupid this is. As if that person's bullshit's power of nuisance wasn't large enough, let's add automated echo chambers with a direct effect on the economy!

Don't be surprised if this crazy economic system collapses any second with ideas like this.

29
ldev 3 days ago 5 replies      
30
AznHisoka 3 days ago 1 reply      
How many times do we have to go over this?

Stocks are a random walk.

29
Postmortem of database outage of January 31 gitlab.com
375 points by mbrain  2 days ago   249 comments top 35
1
ky738 2 days ago 2 replies      
RIP the engineer
2
KayEss 2 days ago 4 replies      
The engineers still seem to have a physical server mindset rather than a cloud mindset. Deleting data is always extremely dangerous and there was no need for it in this situation.

They should have spun up a new server to act as secondary the moment replication failed. This new server is the one you run all of these commands on, and if you make a mistake you spin up a new one.

Only when the replication is back in good order do you go through and kill the servers you no longer need.

The procedure for setting up these new servers should be based on the same scripts that spin up new UAT servers for each release. You spin up a server that is a near copy of production and then do the upgrade to new software on that. Only when you've got a successful deployment do you kill the old UAT server. This way all of these processes are tested time and time again and you know exactly how long they'll take and iron out problems in the automation.

3
illumin8 2 days ago 12 replies      
I have to say - if they were using a managed relational database service, like Amazon's RDS Postgres, this likely would have never happened. RDS fully automates nightly database snapshots, and ships archive logs to S3 every 5 minutes, which gives you the ability to restore your database to any point in time within the last 35 days, down to the second.

Also, RDS gives you a synchronously replicated standby database, and automates failover, including updating the DNS CNAME that the clients connect to during a failover (so it is seamless to the clients, other than requiring a reconnect), and ensuring that you don't lose a single transaction during a failover (the magic of synchronous replication over a low latency link between datacenters).

For a company like Gitlab, that is public about wanting to exit the cloud, I feel like they could have really benefited from a fully managed relational database service. This entire tragic situation could have never happened if they were willing to acknowledge the obvious: managing relational databases is hard, and allowed someone with better operational automation, like AWS, to do it for them.

4
meowface 2 days ago 6 replies      
>Trying to restore the replication process, an engineer proceeds to wipe the PostgreSQL database directory, errantly thinking they were doing so on the secondary. Unfortunately this process was executed on the primary instead. The engineer terminated the process a second or two after noticing their mistake, but at this point around 300 GB of data had already been removed.

I could feel the sweat drops just from reading this.

I'd bet every one of us has experienced the panicked Ctrl+C of Death at some point or another.

5
atmosx 2 days ago 1 reply      
Great to have a full-featured, professional post-mortem. Incidentally I work at a company that suffered data loss because of this outage and we're looking for ways to move out of GL.

My 2 cents... I might be the only one, but I don't like the way GL handled this case. I understand transparency as a core value and all, but they've gone a bit too far.

IMHO this level of exposure has far-reaching privacy implications for the people who work there. Implications that cannot be assessed now.

The engineer in question might not have suffered PTSD, but some other engineer might have. Who knows how a bad public experience might play out? It's a fairly small circle; I'm not sure I would like to be part of a company that would expose me in a similar fashion if I happened to screw up.

On the corporate side of things, there is a saying in Greek meaning "don't wash your dirty linen in public". Although they're getting praised by bloggers and other small-size startups, at the end of the day, exposing your 6-layer broken backup policy and other internal flaws in between, while being funded to the tune of $25.62M over 4 rounds, does not look good.

6
gr2020 2 days ago 1 reply      
Reading this, the thing that stuck out to me was how remarkably lucky they were to have the two snapshots. The one from 6 hours earlier was there seemingly by chance, as an engineer had created it for unrelated reasons. And for both the 6- and 24-hour snapshots, it seems just lucky that neither had any breaking changes made to them by pre-production code (they _were_ dev/staging snapshots, after all).

I'm glad it all worked out in the end!

7
greenrd 2 days ago 0 replies      
GitHub also lost a bunch of PRs and issues sitewide early in their history. They claimed to have restored all the PRs from backup, but I was pretty sure I had opened a PR and it never came back. I emailed support and they basically told me tough luck.
8
ancarda 2 days ago 2 replies      
>Unfortunately DMARC was not enabled for the cronjob emails, resulting in them being rejected by the receiver. This means we were never aware of the backups failing, until it was too late.

At my dayjob, we gradually stopped using email for almost all alerts; instead we have several Slack channels like #database-log, where MySQL errors go. Any cron jobs that fail post in #general-log. Uptime monitoring tools post in #status. And so on...

Email has so much anti-spam stuff like DMARC that it's less reliable that your mail will be delivered. Something failing like a backup or database query is too important to risk it not reaching someone who can make sure it gets fixed.

My 2 cents.
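
A minimal sketch of that wrapper pattern, assuming a Slack incoming webhook (the URL below is a placeholder) and the third-party requests library:

  import subprocess
  import requests  # third-party: pip install requests

  WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

  def run_and_report(cmd):
      """Run a cron command; post to Slack only if it fails."""
      result = subprocess.run(cmd, capture_output=True, text=True)
      if result.returncode != 0:
          requests.post(WEBHOOK_URL, json={
              "text": "`%s` failed (exit %d): %s" % (
                  " ".join(cmd), result.returncode, result.stderr[-300:]),
          })
      return result.returncode

  # e.g. run_and_report(["pg_dump", "mydb", "-f", "/backups/mydb.sql"])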

9
matt_wulfeck 2 days ago 0 replies      
> Unfortunately this process was executed on the primary instead. The engineer terminated the process a second or two after noticing their mistake, but at this point around 300 GB of data had already been removed.

I can only imagine this engineer's poor old heart after the realization of removing that directory on the master. A sinking, awful feeling of dread.

I've had a few close calls in my career. Each time it's made me pause and thank my luck it wasn't prod.

10
nowarninglabel 2 days ago 1 reply      
Thanks so much for the post and transparency GitLab! We had just finished recovering from our own outage (stemming from a power loss and subsequent cascading failures) and were scheduled to do our post-mortem on 2/1, so the original document was a refreshing and reassuring read.
11
dancryer 6 hours ago 0 replies      
Can't help but notice that the new backup monitoring tool suggests that the latest PGSQL backup is almost six days old...

Is that correct? http://monitor.gitlab.net/dashboard/db/backups?from=14859419...

12
aabajian 2 days ago 0 replies      
This is an outstanding writeup, but I wonder if it glosses over the real problem:

>>The standby (secondary) is only used for failover purposes.

>>One of the engineers went to the secondary and wiped the data directory, then ran pg_basebackup.

IMO, secondaries should be treated exactly as their primaries. No operation should be done on a secondary unless you'd be OK doing that same operation on the primary. You can always create another instance for these operations.

13
voidlogic 2 days ago 0 replies      
>When we went to look for the pg_dump backups we found out they were not there. The S3 bucket was empty, and there was no recent backup to be found anywhere. Upon closer inspection we found out that the backup procedure was using pg_dump 9.2, while our database is running PostgreSQL 9.6 (for Postgres, 9.x releases are considered major). A difference in major versions results in pg_dump producing an error, terminating the backup procedure.

Yikes. One common practice that would have avoided this is using the just-taken backup to populate staging. If the restore fails, pages go out. If the integration tests that run after a successful restore/populate fail, pages go out.

Live and learn I guess.
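
A minimal sketch of that practice: restore the newest dump into a scratch database and run a sanity query. The paths, database names, and the users table are assumptions for illustration:

  import glob
  import os
  import subprocess

  def verify_latest_backup(pattern="/backups/*.sql", scratch_db="restore_test"):
      dumps = sorted(glob.glob(pattern), key=os.path.getmtime)
      if not dumps:
          raise RuntimeError("no backups found at all; page someone")
      latest = dumps[-1]
      subprocess.run(["dropdb", "--if-exists", scratch_db], check=True)
      subprocess.run(["createdb", scratch_db], check=True)
      # ON_ERROR_STOP makes psql exit non-zero if the dump fails to load
      subprocess.run(["psql", "-v", "ON_ERROR_STOP=1",
                      "-d", scratch_db, "-f", latest], check=True)
      out = subprocess.run(["psql", "-d", scratch_db, "-tAc",
                            "SELECT count(*) FROM users;"],
                           check=True, capture_output=True, text=True)
      if int(out.stdout.strip()) == 0:
          raise RuntimeError("%s restored but looks empty; page someone" % latest)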

14
_Marak_ 2 days ago 1 reply      
I've noticed a lot of other positive activity and press for GitLab in the past month.

It's unfortunate they had this technical issue, but it's good to see others (besides GitHub) operating in this space. I should give GitLab a try sometime.

15
pradeepchhetri 2 days ago 0 replies      
Just want to add here that using tools like safe-rm[1] across your infrastructure would help prevent data loss from running rm on unintended directories.

[1]: https://launchpad.net/safe-rm

16
jsperson 2 days ago 0 replies      
>An ideal environment is one in which you can make mistakes but easily and quickly recover from them with minimal to no impact.

This is a great attitude. Too often opportunity cost isn't considered when making rules to protect folks from doing something stupid.

17
yarper 2 days ago 2 replies      
It's amazing how quickly it descends into "one of the engineers" did x or y. Who was steering this ship exactly?

It's really simple to point the finger and try to find a single cause of failure - but it's a fool's errand - comparable to finding the single source behind a great success.

18
XorNot 2 days ago 0 replies      
The backup situation stands out to me as a problem no one has really adequately solved. Verifying a task has happened in a way where the notifications are noticed is actually a really hard problem that it feels like we collectively ignore in this business.

How do you reliably check if something didn't happen? Is the backup server alive? Did the script work? Did the backup work? Is the email server working? Is the dashboard working? Is the user checking their emails (think: wildcard mail sorting rule dumping a slight change in failure messages to the wrong folder).

And the converse answer isn't much better: send a success notification...but if it mostly succeeds, how do you keep people paying attention to it when it doesn't (i.e. no failure message, but no success message)?

The best answer I've got, personally, is to use positive notifications combined with visibility - dashboard your really important tasks with big, distinctive colors - use time-based detection and put a clock on your dashboard (because dashboards which mostly don't change might hang and no one notices).
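
One concrete shape for the positive-notification idea is a dead man's switch: the backup job touches a marker file on success, and an independent checker alarms when the marker is missing or stale. A minimal sketch; the path, grace period, and alert hook are assumptions:

  import os
  import time

  MARKER = "/var/run/backup.ok"  # the backup job touches this on success
  MAX_AGE = 26 * 60 * 60         # daily job plus a two-hour grace period

  def check_backup_freshness(alert):
      """Call alert(msg) if the success marker is missing or stale."""
      try:
          age = time.time() - os.path.getmtime(MARKER)
      except OSError:  # marker missing: the job may never have run
          alert("backup marker missing; did the job ever run?")
          return
      if age > MAX_AGE:
          alert("backup marker is %.1f hours old; job likely failing"
                % (age / 3600))

  # Run the checker from a different host/scheduler than the backup itself,
  # so a single failure can't silence both the job and the check.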

19
isoos 2 days ago 1 reply      
sytse and GitLab folks: thank you for the transparency.
20
samat 2 days ago 1 reply      
Am I missing something or didn't they mention 'test recovery, not backups'?
21
nodesocket 2 days ago 3 replies      
My main question is still:

>> Why did replication stop? - A spike in database load caused the database replication process to stop. This was due to the primary removing WAL segments before the secondary could replicate them.

Is this a bug/defect in PostgreSQL then? Incorrect PostgreSQL configuration? Insufficient hardware? What was the root cause of Postgres primary removing the WAL segments?

22
AlexCoventry 1 day ago 0 replies      
Thank you for this informative postmortem and mitigation outline.

Are any organizational changes planned in response to the development friction which led to the outage? It seems to have arisen from long-standing operational issues, and an analysis of how prior attempts to address those issues got bogged down would be very interesting.

23
jsingleton 2 days ago 0 replies      
TIL GitLab runs on Azure. If your CI servers or deployment targets are also on Azure then the latency should be pretty low (assuming you get the correct region). Good to know.

I moved from AWS to Azure years ago. Mainly because I run mostly .NET workloads and the support is better. I've recently done some .NET stuff on AWS again and am remembering why I switched.

24
Achshar 2 days ago 1 reply      
Does anyone have a link to the YouTube stream they're talking about? Can't seem to find it on their channel. And the link in the doc is redirecting to the live link [1], which doesn't list the stream.

[1] - https://www.youtube.com/c/Gitlab/live

25
nstj 2 days ago 0 replies      
@sytse, were you in contact with MS/Azure during the restore? If so, did they offer any assistance, e.g. in speeding up restoration disk speed, etc.?
26
nierman 2 days ago 0 replies      
Yes, WAL archiving would have helped (archive_command = rsync standby ...), but it's also very easy in Postgres 9.4+ to add a replication slot on the master so that WAL is kept until it is no longer needed by the standby. Simply reference the slot in the standby's recovery.conf file.

Definitely monitor your replication lag--or at least disk usage on the master--with this approach (in case WAL starts piling up there).
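
A sketch of the monitoring half of that advice, using psycopg2 and the 9.x names of the WAL functions (the DSN and threshold are placeholders). Once a physical slot exists, WAL retained for a lagging standby shows up as the gap between the current WAL location and the slot's restart_lsn:

  import psycopg2  # third-party PostgreSQL driver

  def check_slot_lag(max_retained_bytes=5 * 1024 ** 3, alert=print):
      conn = psycopg2.connect("dbname=postgres")  # placeholder DSN
      with conn, conn.cursor() as cur:
          cur.execute("""
              SELECT slot_name,
                     pg_xlog_location_diff(pg_current_xlog_location(),
                                           restart_lsn) AS retained_bytes
              FROM pg_replication_slots
          """)
          for slot_name, retained in cur.fetchall():
              if retained is not None and retained > max_retained_bytes:
                  alert("slot %s is retaining %d bytes of WAL"
                        % (slot_name, retained))
      conn.close()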

27
oli5679 2 days ago 0 replies      
I found this entertaining, even if they did later admit that it was a hoax:

http://serverfault.com/questions/587102/monday-morning-mista...

28
tschellenbach 2 days ago 1 reply      
Shouldn't the conclusion of this post mortem be a move to a managed database service like RDS? The database doesn't sound huge, and RDS is affordable enough; it sounds to me like you'd spend less money and have better uptime and sleep by moving away from this in-house solution.
29
khazhou 2 days ago 0 replies      
Every internal Ops manual needs to begin with the simple phrase:

DON'T PANIC

30
grhmc 2 days ago 1 reply      
Thank you for redacting who the engineer was. Great write-up. Thank you!
31
encoderer 2 days ago 0 replies      
If you want to up your cron job monitoring game there's a link in my profile.
32
dustinmoris 2 days ago 1 reply      
Watching GitLab is somewhat painful. I feel like they make every possible mistake you could make as an IT startup, and because they are transparent about it, people seem to love the fact that they screw up all the time. I don't know if I share the same mentality, because at the end of the day I don't trust GitLab even with the simplest task, let alone any valuable work of mine.

It's good to be humble and know that mistakes can happen to anyone and learn from them, etc., but when in 2017 you still make the same stupid mistakes that people have made a million times since 1990, when it's all well documented and there are systems built to avoid these same basic mistakes, and you still make them today, then I just think it cannot be described as anything other than absolute stupidity and incompetence.

I know they have many fans who just look past every mistake, no matter how bad it was, only because they are open about it, but come on, this is now just taking the piss, no?

33
cookiecaper 2 days ago 2 replies      
I really hate to pile on, but after reading through this whole thread and the whole post-mortem, there are a few basic things that are troubling besides the widely-acknowledged backup methodology. I don't see issues directly related to addressing these things.

1. notifications go through regular email. Email should be only one channel used to dispatch notifications of infrastructure events. Tools like VictorOps or PagerDuty should be employed as notification brokers/coordinators and notifications should go to email, team chat, and phone/SMS if severity warrants, and have an attached escalation policy so that it doesn't all hinge on one guy's phone not being dead.

2. there was a single database, whose performance problems had impacted production multiple times before (the post lists 4 incidents). One such performance problem was contributing to breakage at this very moment. I understand that was the thing that was trying to be fixed here, but what process allowed this to cause 4 outages over the preceding year without moving to the top of the list of things to address? Wouldn't it be wise to tweak the PgSQL configuration and/or upgrade the server before trying to integrate the hot standby to serve some read-only queries? And since a hot standby can only service reads (and afaik this is not a well-supported option in PgSQL), wouldn't most of the performance issues, which appear write-related, remain? The process seriously needs to be reviewed here.

And am I reading this right, the one and only production DB server was restarted to change a configuration value in order to try to make pg_basebackup work? What impact did that have on the people trying to use the site a) while the database was restarting, and b) while the kernel settings were tweaked to accommodate the too-high max_connections value? Is it normal for GitLab to cause intermittent, few-minute downtimes like that? Or did that occur while the site was already down?

3. Spam reports can cause mass hard deletion of user data? Has this happened to other users? The target in this instance was a GitLab employee. Who has been trolled this way such that performance wasn't impacted? What's the remedy for wrongly-targeted persons? It's clear that backups of this data are not available. And is the GitLab employee's data gone now too? How could something so insufficient have been released to the public, and how can you disclose this apparently-unresolved vulnerability? By so doing, you're challenging the public to come and try to empty your database. Good thing you're surely taking good backups now! (We're going to gloss over the fact that GitLab just told everyone its logical DB backups are 3 days behind and that we shouldn't worry because LVM snapshots now occur hourly, and that it only takes 16 hours to transfer LVM snapshots between environments :) )

4. the PgSQL master deleted its WALs within 4 hours of the replica "beginning to lag" (<interrobang here>). That really needs to be fixed. Again, you probably need a serious upgrade to your PgSQL server because it apparently doesn't have enough space to hold more than a couple of hours of WALs (unless this was just a naive misconfiguration of the [min|max]_wal_size parameter, like the max_connections parameter?). I understand that transaction logs can get very large, but the disk needs to accommodate (usually a second disk array is used for WALs to ease write impact) and replication lag needs to be monitored and alarmed on.

There were a few other things (including someone else downthread who pointed out that your CEO re-revealed your DB's hostnames in this write-up, and that they're resolvable via public DNS and have running sshds on port 22), but these are the big standouts for me.

P.S. bonus point, just speculative:

Not sure how fast your disks were, but 300GB gone in "a few seconds" sounds like a stretch. Some data may've been recoverable with some disk forensics. Especially if your Postgres server was running at the time of the deletion, some data and file descriptors also likely could've been extracted from system memory. Linux doesn't actually delete files if another process is holding their handle open; you can go into the /proc virtual filesystem and grab the file descriptor again to redump the files to live disk locations. Since your database was 400GB and too big to keep 100% in RAM, this probably wouldn't have been a full recovery, but it may have been able to provide a partial.
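
For the curious, the /proc trick described above looks roughly like this on Linux (run as root; the destination directory is an assumption). A deleted-but-still-open file appears as an fd symlink whose target ends in " (deleted)", and its contents stay readable through that descriptor:

  import os
  import shutil

  def recover_open_deleted_files(pid, dest_dir="/mnt/recovered"):
      """Copy out files a process still holds open after deletion."""
      os.makedirs(dest_dir, exist_ok=True)
      fd_dir = "/proc/%d/fd" % pid
      for fd in os.listdir(fd_dir):
          link = os.path.join(fd_dir, fd)
          try:
              target = os.readlink(link)
          except OSError:
              continue
          if target.endswith(" (deleted)"):
              name = os.path.basename(target).split()[0]
              out = os.path.join(dest_dir, "%s_%s_%s" % (pid, fd, name))
              shutil.copyfile(link, out)  # reads via the still-open fd

Ideally dest_dir lives on a different disk, so the copies don't overwrite the very blocks you might later want for forensics.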

The theoretically best thing to do in such a situation would probably be to unplug the machine ASAP after the ^C (without going through formal shutdown processes that may try to "clean up" unfinished disk work), remove the disk, attach it to a machine with a write blocker, and take a full-disk image for forensic purposes. This maximizes the ability to extract any data that the system hasn't yet destroyed.
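
A sketch of that workflow, assuming an ext4 filesystem and made-up device names (illustrative, not a forensics manual):

    # Image the raw device first; ddrescue retries bad sectors and keeps
    # a map file so the copy can be resumed:
    ddrescue -d -r3 /dev/sdb /forensics/db-disk.img /forensics/db-disk.map

    # Work only on the image, mounted read-only with journal replay
    # disabled ('noload'), since replay can free the blocks you're after:
    mount -o ro,loop,noload /forensics/db-disk.img /mnt/image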

In theory, I believe pulling the plug while a process still held the file descriptor open should keep you in reasonably good shape, as far as that goes after you've accidentally deleted 3/4 of your production database: the process never closes the file, the disk stops, and the contents remain on disk, pending an unlink the OS never gets to complete (this is one reason it'd be important to block writes to the disk and be extremely careful while mounting; if the journal plays back, it may destroy these files on the next boot anyway). But someone more familiar with the FS internals would have to say definitively whether it works that way or not.

I recognize that such speculative/experimental recovery measures may have been intentionally forgone since they're labor intensive, may have delayed the overall recovery, and very possibly wouldn't have returned useful data anyway. Mentioning it mainly as an option to remain aware of.

34
NPegasus 2 days ago 10 replies      

  > Root Cause Analysis
  > [...]
  > [List of technical problems]
No, the root cause is that you have no senior engineers who have been through this before: a collection of distributed remote employees, none of whom has enough experience to know the "Basic Knowledge Needed to Run a Website at Scale" items that you list as the root causes. $30 million in funding, and the company is still run like a hobby project among college roommates.

Mark my words: the board members from the VC firms will be removed by the VC partners for letting the kids run the show. Then the VC firms will put an experienced CEO and CTO in place to clean up the mess and get the company on track. Unfortunately, they will probably have wasted a couple of years and be down to their last million dollars before they take action.

35
EnFinlay 2 days ago 1 reply      
Most destructive troll ever.
30
Violating Terms of Use Isn't a Crime, EFF Tells Court eff.org
295 points by DiabloD3  6 days ago   99 comments top 14
1
turc1656 5 days ago 3 replies      
"But last year, a federal district court in Nevada found a defendant guilty under both the California and Nevada state computer crime statutes for nothing more than thatviolating Oracles websites terms of use."

That's insane. The terms of service are essentially a contract that you agree to in order to use the website/software/service. Failure to adhere to them is a breach of contract, not a violation of criminal law.

If you break an NDA, for example, you don't wind up in jail or have a criminal history. The other party takes you to court to enforce the penalty listed out in the contract for the breach.

2
holtalanm 6 days ago 1 reply      
Just my opinion, but I think ToS were originally in place to define how a user _should_ use the site, and how the site operators could act in response to violation.

I don't think they should be treated even as a contract, much less as criminal law.

Truthfully, they are really only there to protect the company by outlining to the user what might get them banned from the site and so on. Oracle is overstepping its authority here, imo. It is their own fault that they didn't revoke that company's access to the site.

3
rayiner 5 days ago 0 replies      
The EFF is right, but the relevance of the TOS violation is more subtle than the EFF's explanation makes it out to be. Using someone's property without their consent is, of course, a crime. When that property is ordinarily available for public use, consent is presumed, but it can be revoked. It can be criminal trespass to remain in a store after you're kicked out (although usually it's just civil trespass).

Here, "Oracle sent Rimini a cease and desist letter demanding that it stop using automated scripts. It did not, however, rescind Riminis authorization to access the files outright." So the question is, was the implied consent to use Oracle's servers effectively revoked?

Arguably not. A public mall can get you kicked off the property for any reason, and can press charges for criminal trespass if you don't leave. But it can't press charges for criminal trespass for violating the sign on the door that says "no hats." And it probably can't press charges for criminal trespass if it sees you wearing a hat and tells you to take it off, but doesn't kick you off the property.

4
rplst8 6 days ago 1 reply      
The fact that this even has to be argued is appalling. The erosion of the difference between a tort and a crime over the last few decades is very concerning.

I think a lot of it started with the changing of copyright law into criminal law.

5
vog 6 days ago 1 reply      
Here in Germany, the law states that ToS are only applicable if they contain "no surprising terms", which is really nice! Although this doesn't give you permission for everything, it protects you from any "cleverness" on the part of a site's operator. It ensures that almost nobody actually needs to read ToS; even lawyers will tell you this.
6
codedokode 6 days ago 4 replies      
Wouldn't it be nice if ToS were legally binding?

1. Make a website and write somewhere in the middle of ToS that visitor must pay $1000 (for example) for every page viewed or for every second spent on a site

2. Persuade the visitor to press "I have read and agree to the ToS" and to stay as long as possible

3. Send a bill

7
DarkKomunalec 6 days ago 1 reply      
It's about time corporations took out the government middle man and started making laws themselves.
8
pflats 5 days ago 2 replies      
"Oracle sent Rimini a cease and desist letter demanding that it stop using automated scripts. It did not, however, rescind Riminis authorization to access the files outright. Rimini continued to use automated scripts, and Oracle sued. The jury found Rimini guilty under both the California and Nevada computer crime statues, and the judge upheld that verdictconcluding that, under both statutes, violating a websites terms of service counts as using a computer without authorization or permission."

I'm a little confused here. I'm with the EFF that violating the TOS shouldn't be criminal. But if you're given a C&D that says "stop using automated scripts" and you continue using automated scripts, why is the TOS relevant at all? Isn't Rimini clearly exceeding its authorized access (which was left available for manual downloads) based on the C&D?

9
snarfy 6 days ago 1 reply      
Popular websites should add an Oracle employee clause to their ToS, so that employees of Oracle Corporation are not allowed to use the site.
10
josho 6 days ago 1 reply      
Going forward we should all have our minor children create accounts for us and be the ones to accept the TOS.

Once you realize that this is a reasonable workflow, you've realized how unenforceable TOS are for everything but corporate contracts, where documents are actually signed and witnessed.

11
peterclary 6 days ago 5 replies      
IANAL, but surely violating Terms of Use is essentially a breach of contract? Making breach of contract a crime would be very foolish indeed.
12
peeters 5 days ago 1 reply      
I think in a democracy there should be some group of state attorneys who are not just allowed, but mandated to prosecute the law to the fullest extent possible.

For example, if Congress passes a law making ToU violations crimes, then there should be a select few DAs who are required to go out and prosecute people who enable AdBlock and visit a certain site. And prosecution should always start with the legislators themselves, where possible. See how fast stupid laws would go away.

13
fpgadude 6 days ago 2 replies      
What would happen to foreigners entering the US with a "fake" Facebook profile as their social media ID? Straight to jail, or straight back home?
14
stubish 6 days ago 1 reply      
Does anyone know the details of what was being automatically downloaded? I'm aware of several open-source projects doing this with things like Java, but not whether any of them have received cease and desist orders.