hacker news with inline top comments    8 Apr 2017 Best
Build Your Own Text Editor viewsourcecode.org
1222 points by matthewbauer  2 days ago   154 comments top 36
Dangeranger 2 days ago 6 replies      
Reference implementations of medium sized applications are incredibly useful for leveling up as a programmer. While there are many large successful open source applications, many are overwhelming to read and learn from.

Having something that outlines the key features and components, while ignoring the important but complicated edge cases, helps keep the reader's attention focused.

Now if there were annotations within the source code, that would be truly incredible.

WalterBright 2 days ago 5 replies      
Here's one from the 1980s, which I still use and keep up to date:


and translated to D:


spectaclepiece 2 days ago 1 reply      
It's my birthday today and after a peaceful morning routine I hadn't yet decided what I would do today except that I would do whatever I felt like doing. I felt like writing a text editor in C.
strainer 2 days ago 5 replies      
SciTE is a barebones open source text editor created by Neil Hodgson to exercise his "Scintilla" text-editing C++ library, which is used in other editors like Notepad++.

In hindsight, I would approach programming text editors with a bit more caution. I started tweaking and modifying SciTE years ago; it was very interesting, but it was no small undertaking, and I came to understand why Neil advised in the support forum to customise it using the built-in Lua scripting. I'm still using this 6-year-old customised version of SciTE that I never managed to sync with the latest version, and it has ten thousand lines of custom Lua facilities like file encryption, navigation panels, multi-edit mode, etc., which I wrote and stabilised a few years ago. I rarely venture to alter it now that I am at last comfortable with it, but it's going to need serious attention sooner or later...

mistermumble 2 days ago 0 replies      
I think Kilo is a great little project, well-structured and very educational.

Another useful resource I've relied upon in the past dates back to the 1990s: Freyja, Craig Finseth's Emacs-like editor written in C.

Here is a list of features:

 * deletions are automatically saved into a "kill buffer"
 * ability to edit up to 11 files at once
 * ability to view two independent windows at once
 * integrated help facility
 * integrated menu facility, with help on all commands
 * can record and play back keyboard macros
 * supports file completion and limited directory operations
 * includes a fully-integrated RPN type calculator
It was designed for MS-DOS with the Cygwin terminal library.

I found the architecture to be very clean, and it is well explained in Finseth's classic book ("The Craft of Text Editing"). The book is worth reading even if you never touch the code: http://www.finseth.com/craft/

It uses a multi-buffer architecture roughly similar to Walter Bright's text editor (see sibling posting). (I knew about Finseth's editor years ago, but was not aware of Bright's work until now, thanks Walter!)

The Freyja source can be downloaded from: http://www.finseth.com/parts/freyja.php

Look for the link to "freyja30.exe" which turns out to be freyja30.zip (not an executable).

girvo 2 days ago 1 reply      
Neat! I have been trying to build one in Nim, but I've had so little experience working with raw terminals and ioctl handles that I've been struggling!


This looks like a great tutorial to work through with it :)

briansteffens 2 days ago 2 replies      
Nice guide, especially the terminal raw mode stuff. I haven't seen much written about that before, so I really appreciate that this guide goes over it.

I've been messing around with a little text editor component myself in golang:


For a SQL editor project:


aethertron 2 days ago 2 replies      
1000 lines of code is considered small. But check this out:


Under 50 lines for a text editor written in K by the language's author. Way beyond my present understanding, but the promise of very small, powerful code is incredibly attractive.

tarboreus 2 days ago 0 replies      
Just getting further into C and this is fascinating. But I still have to share this: http://www.catb.org/esr/writings/unix-koans/editor-wars.html
pklausler 1 day ago 1 reply      
I got fed up with the standard offerings back in '07 or so and hammered out my idea of a code editor that's perfect for me over a long weekend. It mmaps files initially for instant response, it has a small sensible command set that I can remember completely, it depends on nothing more than the standard C library and a terminal emulator, and it's not many more lines of code than some Emacs configuration files that I've seen. I still use it for everything.
japhyr 2 days ago 9 replies      
I'm curious if anyone here regularly codes in a text editor they wrote themselves?

I've often thought of coding one for fun, with no intention to share it, just for the purpose of having a long-term project that evolves along with my skills. I've never made time for it, but I still consider it once in a while.

swah 2 days ago 1 reply      
If you enjoy "redis style" C code, take a look at vis https://github.com/martanne/vis
gavinpc 2 days ago 0 replies      
Thanks for making this! It is obviously a labor of love. This kind of incremental building is a lot of work, but adds a revealing dimension not seen in a flat annotated source.

Great work.

teddyh 1 day ago 0 replies      
The Craft of Text Editing, by Craig A. Finseth:


adzm 2 days ago 2 replies      
The next fun experiment is to handle gigabyte files without undue performance troubles with a lively UI thread! This is where Scintilla et al. run into limits.
netghost 1 day ago 0 replies      
Just have to say, this is ridiculously detailed, which is great. I'm not looking to build a text editor in C, but I learned a good amount just skimming the first few chapters.

Great site, great presentation, thanks for digging into it with so much detail.

bleurghyflergy 12 hours ago 0 replies      
This is a really cool hacking primer.

The guy who made this (Jeremy Ruten, I think) should be commended for taking something complicated and boiling it down into accessible terms.

I hope we see more documents like this on HN in the future.

z3t4 2 days ago 1 reply      
I'm working on an editor in JavaScript. You would be surprised how fast string operations, like concatenation, are in JavaScript! You can hold the entire buffer in a single String! While browsers render text very well, the DOM is relatively slow to interact with, but there are other ways to render in JavaScript, for example the Canvas, or into a terminal, or even streaming a video, or talking directly to a display.
dchuk 2 days ago 3 replies      
Would something like this be a way for a beginner to C (barely any experience) to get their feet wet? Or is there an expectation of basic familiarity with C already?
zaf 2 days ago 0 replies      
Some time ago I went overboard and wrote an editor in PHP with the 'main' feature being auto encrypted files. Completely bananas.


Zardoz84 1 day ago 0 replies      
I remember when, some years ago, I wrote a CLI adventure game, using ncurses to handle windowing, raw input, etc. How easy it made my life....
jquast 2 days ago 0 replies      
nanospeck 2 days ago 0 replies      
Does anyone have similar links but written in Java? If yes, please share here.
JepZ 2 days ago 0 replies      
Sometimes when I configure my vim, I feel like I am building an editor too :D

But at least that is more of a LEGO style of putting different bricks (plugins) together.

nojvek 2 days ago 0 replies      
kilo.c code referenced in the tutorial, if you're interested in compiling and hacking around.


I wonder how mouse clicking and scrolling work in an editor? Would love to make an html/js/css-for-the-terminal Node module. I think that would be fun.

ja_k 2 days ago 0 replies      
Pretty sure that this is the same as the first project for Oxford CompSci students in their first year.
har777 2 days ago 0 replies      
This is excellent. Having a lot of fun going through it. Always wanted to start writing some C.
xyclos 2 days ago 1 reply      
For some strange reason the 'Fira Mono' Google font is displaying all &s as |s. Anyone else seeing this? Cool book though, I'm going to work through it this weekend.
evacchi 2 days ago 2 replies      
marginally relevant: I was looking for a terminal text editor for git commits and other similarly simple tasks: my only requirement is that I can save&leave with ^D. Any suggestions?
vbernat 2 days ago 1 reply      
For better portability, terminfo should be used instead of hard-coded terminal sequences. Otherwise, this is a really great intro. I liked the beginning, with how to configure your terminal.
nojvek 2 days ago 0 replies      
This was a great read and well explained. Thanks for the share.
hutusi 2 days ago 0 replies      
Thanks for the share.
bwidlar 1 day ago 0 replies      
Great work, thanks.
learningcn 2 days ago 0 replies      
That's what I like
m_andretti 2 days ago 0 replies      
Great documentation
milesrout 2 days ago 1 reply      
Looks like it's been slashdotted.
Twitter refuses US order to disclose owner of anti-Trump account reuters.com
817 points by anigbrowl  1 day ago   467 comments top 5
hyolyeng 1 day ago 8 replies      
This is truly terrifying. The fact that the US government will pursue this kind of action, potentially exposing and punishing critics of the government, seems like how dictatorships/autocracy/totalitarianism start.

"I disapprove of what you say, but I will defend to the death your right to say it." If we believe in the free America, this should be what we should all fight for, if we want to keep America for the reason it became great in the first place.

throwaway2048 1 day ago 5 replies      
It's disturbing how many posters here are directly equating banning users who actively post racist, hateful bullshit with handing over the user info of somebody who opposes the president.

Because they did one, they should do the other? What?

ultimoo 1 day ago 3 replies      

"well now on CNN! and we gained 17000 followers in less than 30 minutes. Thank you CBP/Trump"

I just learnt about the Streisand Effect this afternoon -- https://en.wikipedia.org/wiki/Streisand_effect

colanderman 1 day ago 5 replies      
And while apologists may bring up the Yiannopolis case as an example of hypocrisy, let's keep in mind that Twitter is free to censor speech how it sees fit. The US government is explicitly not.
An off-grid social network staltz.com
985 points by staltz  2 days ago   360 comments top 27
georgecmu 1 day ago 7 replies      
As a historical note, there used to be quite a few very popular solutions for supporting early social networks over intermittent protocols.

UUCP [https://en.wikipedia.org/wiki/UUCP] used the computers' modems to dial out to other computers, establishing temporary, point-to-point links between them. Each system in a UUCP network has a list of neighbor systems, with phone numbers, login names and passwords, etc.

FidoNet [https://en.wikipedia.org/wiki/FidoNet] was a very popular alternative to internet in Russia as late as 1990s. It used temporary modem connections to exchange private (email) and public (forum) messages between the BBSes in the network.

In Russia, there was a somewhat eccentric, very outspoken enthusiast of upgrading FidoNet to use web protocols and capabilities. Apparently, he's still active in developing "Fido 2.0": https://github.com/Mithgol

chc4 2 days ago 5 replies      
This sounds like what I wanted from GNU Social when I first joined over a year ago. GNU Social/Mastodon is a fun idea, but it falls apart when you realise that you still don't own your content and it's functionally impossible to switch nodes like it advertised, along with federation being a giant mess.

I tried to switch which server my account was on halfway through my GNU Social life, and you just can't; all your followers are on the old server, all your tweets, and there is no way to say "I'm still the same person". I didn't realise I wanted cryptographic identity and accounts until I tried to actually use the alternative.

That's also part of the interest I have in something like Urbit, which has an identity system centered on public keys forming a web of trust, which also lets you have a reputation system and ban spammers which you can't do easily with a pure DHT.

the8472 2 days ago 2 replies      
> However, to get access to the DHT in the first place, you need to connect to a bootstrapping server, such as router.bittorrent.com:6881 or router.utorrent.com:6881

This is a common misunderstanding. You do not need to use those nodes to bootstrap. Most clients simply choose to because it is the most convenient way to do so on the given substrate (the internet). DHTs are in no way limited to specific bootstrap nodes, any node that can be contacted can be used to join the network, the protocol itself is truly distributed.

If the underlying network provides some hop-limited multicast or anycast a DHT could easily bootstrap via such queries. In fact, bittorrent clients already implement multicast neighbor discovery which under some circumstances can result in joining the DHT without any hardcoded bootstrap node.

DenisM 1 day ago 5 replies      
My friends and I have thought this through in detail a while ago, and have a few suggestions to make. I hope you make the best of it!

Distributed identity

Allow me to designate trusted friends / custodians. Store fractions of my private key with them, so that they can rebuild the key if I lost mine. They should also be able to issue a "revocation as of certain date" if my key is compromised, and vouch for my new key being a valid replacement of the old key. So my identity becomes "Bob Smith from Seattle, friend of Jane Doe from Portland and Sally X from Redmond". My social circle is my identity! Non-technical users will not even need to know what private key / public key is.


Introduce a notion of the "relay" server - a server where I will register my current IP address for direct p2p connection, or pick up my "voicemail" if I can't be reached right away. I can have multiple relays. So my list of friends is a list of their public keys and their relays as best I know them. Whenever I publish new content, the software will aggressively push the data to each of my friends / subscribers. Each time my relay list is updated, it also gets pushed to everyone. If I can't find my friend's relay, I will query our mutual friends to see if they know where to find my lost friend.


There should be a way to create handles for real-life objects and locations. Since many people will end up creating different entries for the same object, there should be a way for me to record in my log that guid-a and guid-b refer to the same restaurant in my opinion. As well I could access similar opinion records made by my friends, or their friends.


Each post has an identity, as does each location. My friends can comment on those things in their own log, but I will only see these comments if I get to access those posts / locations myself (or I go out of my way to look for them). This way I know what my friends think of this article or this restaurant. Bye-bye Yelp, bye-bye fake Amazon reviews.

Content Curation

I will subscribe to certain bots / people who will tell me that some pieces of news floating around will be a waste of my time or be offensive. Bye-bye clickbait, bye-bye goatse.


Allow me to designate space to store my friends' encrypted blobs for them. They can back up their files to me, and I can back up to them.

rattray 1 day ago 2 replies      
Bit of feedback: when you download the desktop application, it prompts for a desired name, image, and description.

It's unclear whether this can be changed later, and I'm not yet sure whether I want to use my real identity or a throwaway.

After creating an account with the default randomly? generated name, I tried to use an invite obtained from which was linked from https://github.com/staltz/easy-ssb-pub.

All I got back was "An error occured (sic) while attempting to redeem invite. could not connect to sbot"

It worked with http://pub.locksmithdon.net/, though I feel a bit odd trusting a "locksmith" I've never heard of to stream lots of data to my hard drive...

It's cool that anyone can host a pub (basically an instance of FB/Twitter/Gmail, it seems), but 1) things will get expensive for them, and it's unclear how they'll fund that, and 2) now I have to trust random people on the internet not only to be nice, but also to be secure.

As a "random technically aware netizen", I honestly trust fooplesoft more, since they have a multi-billion-dollar reputation to protect. (Not that I trust fooplesoft).

EGreg 2 days ago 2 replies      
Yes, this guy gets it. This community gets it.

Not everything needs a global singleton like a blockchain or DHT or a DNS system. Bitcoin needs this because of the double-spend problem. But private chats and other such activities don't.

I have been working on this problem since 2011. I can tell you that peer-to-peer is fine for asynchronous feeds that form tree based activities, which is quite a lot of things.

But everyday group activities usually require some central authority for that group, at least for the ordering of messages. A "group" can be as small as a chess game or one chat message and its replies. But we haven't solved mental poker well for N people yet. (Correct me if I am wrong.)

The goal isn't to not trust anyone for anything. After all, you still trust the user agent app on your device. The goal is to control where your data lives, and not have to rely on any particular connections to eg the global internet, to communicate.

Btw, it's ironic that the article ends "If you liked this article, consider sharing (tweeting) it to your followers". In the feudal digital world we live in today, most people must speak a mere 140 characters to "their" followers via a centralized social network with huge datacenters whose engineers post on highscalability.com.

If you are interested, here I talk about it further in depth:


fiatjaf 2 days ago 6 replies      
Why do all "social networks" have to be a feed of news? Couldn't anyone think of anything better than a system in which people are only encouraged to talk about themselves and try to get other people's approval? In which having more "friends" is always better, because you have more potential for self-aggrandizement in your narcissistic posts?
wesleytodd 2 days ago 4 replies      
Since the author didn't mention it, the original creator of the patchwork project is https://github.com/pfrazee

When I used it, which admittedly was a long time ago now, the biggest setback was the lack of cross-device identities. So I ended up having two accounts with two feeds, `wesAtWork` and `wes`. Maybe they have solved this by now.

ps. Does patchwork still have the little gif maker? Because that was a super fun feature.

someone7x 2 days ago 2 replies      
This excites me. I'm probably naive, but I always imagine that one day I'll retire and spend my days trying to work on an open source mesh network (or something similar). I want future generations to live in a world where 'the internet' isn't a thing that authorities can grant/deny. A headless social network is a promising omen of a headless internet.
vhodges 2 days ago 4 replies      
I've been thinking about this very thing the past few days!

Forgive the rambling, this is the first time I've written any of this down...

My idea is to use email as a transport for 'social attachments' that would be read using a custom mail client (it remains to be seen whether it should be your regular email client or a dedicated 'social mail' client; if using your regular client for email too, users would have to ignore or filter out social mails). It could also be done as a mimetype handler/viewer for social attachments.

Advantages of using email:

 - Decentralized (can move providers)
 - Email address as rendezvous point (simple for users to grasp)
 - Works behind firewalls
 - Can work with local (i.e. Maildir) or remote (IMAP) mailstores. If using IMAP, helps to address the multiple-devices issue. Could also use replication to handle it (Syncthing, Dropbox, etc.)

Scuttlebutt looks like a nice alternative though. Will be following closely.

snackai 1 day ago 4 replies      
A hipster living on a self-steering sailing boat has 600 modules published on NPM. I can't even. Seriously how could this be even more funny?!
rattray 1 day ago 4 replies      
So it seems there are two ways to exchange information:

1) be on the same wifi (presumably great for dissidents in countries with heavy-handed internet control, and inconvenient for everyone else)

2) use "pubs", which can be run on any server, and connected to through the internet?

So most users would use pubs, which are described as "totally dispensable" (a nice property). But how can users exchange information about which pub to subscribe to? Is there a public listing of them?

It seems like the "bootstrapping server" problem (e.g. reliance on router.bittorrent.com:6881) will still exist in practice. For that matter, is there currently an equivalent to router.bittorrent.com that would serve this purpose?

This seems like a potentially significant project, and I'm excited by the possibility that it might actually take off, hence the inquiry.

itchyjunk 1 day ago 4 replies      
I am not much of a social networking type of person, but I have wondered how nice it would be to network with a community like HN. For example, I see a nice comment chain going on in some news article, but as the article dies so does all the conversation within it.

Maybe it's just me but if I see an article is x+ hours old (15+ for example), I don't bother commenting.

What type of social networking would HN use for non personal(not for family and immediate friends) communication? (I've tried hnchat.com, it's mostly inactive imho)

cakeface 1 day ago 2 replies      
Can I choose whose content I pass along? I am OK distributing my own feed; that's presumably why I am joining the network. I am not OK passing along someone else's hate speech, porn, warez, malware, spam, etc. I'd like to be able to review the feeds available and say "Yeah sure, I'll pass that around." If everything in a feed is encrypted then I'd need to decide. Also, yeah, my brother whose feed I follow and pass along may upload a really nasty bit of content and I may relay it.
olleromam91 1 day ago 3 replies      
I'm not totally sure how the traffic management works, but what I would like to know is how services like this will be able to scale? What happens when there is a Pub with millions of users? Does it creep to a halt? Is there a need for dedicated Pub machines? If so, Who funds/maintains them? Does this lead to subscriptions?

Decentralized social networks seem like an inevitable progression as internet users become more aware of their privacy and of ways they can improve online relationships and "social networking".

rb808 2 days ago 5 replies      
I'd like to see all my friends post updates and photos to blogs where I can subscribe via rss. This would be the best social network for me.
jdormit 1 day ago 4 replies      
I think I missed something. If information is exchanged when machines are on the same network, how does the guy in New Zealand get updates from the guy in Hawaii? Is there a server involved, or does the New Zealand guy have to wait until he is on a network with someone who has already connected with the Hawaii guy?
ISL 2 days ago 7 replies      
The storage requirements are tremendous, though, right?

If I want to have access to everything that's been shared with me, I have to store it all. In the case of images, the storage burden can get large quickly.

j_s 2 days ago 2 replies      
Right now Mastodon might as well be off-grid, unable to add additional accounts on the main server. Popularity has stunted its growth!

I am not sure how much thought has been given to the scalability of this solution, but it sounds like it will benefit from most of the advantages offered by P2P in this department.

Eventually something like this could organically grow into the "next Internet", in much the same way that the current internet has morphed into what it is today.

cosenal 2 days ago 1 reply      
You can make sure that the author wrote this post by copy-pasting [this signature](https://raw.githubusercontent.com/staltz/staltz.github.io/ma...) <-- 404: Not Found. Now I am not so sure who the author is anymore...
the_arun 1 day ago 1 reply      
Wouldn't the size of the diaries grow big - GBs and TBs over a period of time - and make it slow?
matt_wulfeck 1 day ago 3 replies      
The post starts by introducing two people (one in a boat in the ocean and another in the mountains in Hawaii) and states that they are communicating to each other. I thought this post was about some new long-range wireless protocol that sync'd via satellites or some such. I was disappointed to see this:

> Every time two Scuttlebutt friends connect to the same WiFi, their computers will synchronize the latest messages in their diaries.

Ultimately this technology seems to be a decentralized, signed messaging system. What problem are they solving? That facebook and twitter can delete and alter your messages?

Meanwhile I'm in search of a long-range, wireless communication system that can function like a network without the need of an ISP. Anyone know anything about this?

good_vibes 2 days ago 0 replies      
I really like his train of thought. The future of social networking will be very different from how it is structured today. That's a very safe bet.
freedaemon 2 days ago 2 replies      
> For instance, unique usernames are impossible without a centralized username registry.

This is Zooko's triangle and was squared by blockchains. Namecoin (2011), BNS (the Blockstack Name System, 2014), and now a bunch of other fully-decentralized naming systems can give you unique usernames. Recently, Ethereum tried launching ENS and ran into some security issues and will likely re-launch soon.

yoandy 1 day ago 1 reply      
Does it normally take this long to index the database? It has been a long while since I started the app. I thought this could be a nice tool to use in places like Cuba, but I've now realized that once connected to a Pub it downloads more than 1 GB, which would also be a problem in a place with little internet bandwidth.
XaspR8d 1 day ago 2 replies      
Basic question: since the entries form a chain and reference the previous, is there no way to edit or delete your old entries? (I see it "prevents tampering" and there's something of a philosophical question here about whether you're "tampering" with your own history when you editorialize -- I agree with the crypto interpretation, but in the context of offline interaction, social communication isn't burdened with such expectations of accuracy or time-invariance.)

If so, I see that as a fairly large limitation for the common user. Even though truly removing something from the internet is effectively an impossibility, I think most non-technical folks aren't actively aware of this, and I'd at least like the option to make it harder for folks to uncover.

New York City bans employers from asking potential workers about past salary nytimes.com
687 points by mendelk  1 day ago   493 comments top 7
pmoriarty 1 day ago 18 replies      
I never answer questions about my past or expected salary, not to employers and not to recruiters.

Most employers don't ask, and the few that have (perhaps by having a part of an employment form ask for previous salary) have never made my leaving that information out an issue.

Most recruiters, if they even ask, respect my decision not to talk about it, but I've been pressed hard on this by a handful of recruiters, and have had this be a deal breaker for a couple of them. One recruiting firm admitted that they were paid by the employers to get this information. I wasn't getting paid to give this information out, however, and it's worth more to me to keep it private as I'm placed at a disadvantage in negotiations if I name a number first.

It's still a seller's market for IT talent, and there are plenty of other fish in the sea, so if some recruiters can't accept that I won't name a number, it's their loss.

It's great that NYC is taking the lead on this, and I really hope the rest of the US follows suit.

garethsprice 21 hours ago 0 replies      
Lots of people here talking from their own experience as highly skilled, in-demand professionals.

However, helping friends apply to jobs in other industries - specifically medical - I saw that most of the applications involved filling out an automated form that required prior salary information to complete.

There's no advantage to an employee from being forced to disclose this information and it perpetuates compensation discrepancies by gender/race/guts to ask. Very glad to see this made illegal.

Now, if they were really serious about fixing pay discrepancies, they'd make it mandatory to post salary ranges with job listings.

showmethemoney 22 hours ago 3 replies      
Once upon a time I interviewed for a role in NYC. An employee that I spoke to said they paid pretty well, and I could expect about 120. The HR person wanted my previous salary, and I refused. Eventually they said their range was 130-150. I said it wasn't gonna work cause I was looking for something more like 220. They said okay we can do that no problem. My previous salary was 110.
jimparkins 1 day ago 11 replies      
Have a google for "can I lie to an employer about past salary" - it really messes with people; they feel super uncertain about how to approach this situation, throwing any confidence they have during the negotiation out the window.

Even now I hesitate to write this as a million people will come out and say never lie - what if they found out.

More than banning, there needs to be acceptance that if someone asks you, you are totally free to make any damn number up that you like. Seriously. It's a sales situation. It should not be like you're under oath on the stand, which is how most people view it.

pbasista 19 hours ago 1 reply      
My past salary is irrelevant information for my potential future employer. If they ask about it, my response would be: "Why would you like to know?" Any answer to this question is bad. If they do not bail out and stop asking at this point, then I bail out.

The point is that if I want, I can completely change my way of life by switching to a job which pays 50 % of my current salary. Or 400 % of my current salary. It does not matter. What matters is that it is solely my decision and none of my potential future employer's business.

If they want to know my current salary, it is a red flag. I do not care about them knowing it, but there is a high risk that they will use that information to try to make an offer which they think that I ought to consider good. They can offer e.g. my current salary + their negotiating margin and think "hey, we have offered you more than you have now, so you ought to be happy". While in reality, the only person who can responsibly decide whether I am happy about it or not is me.

Note that I am not criticizing companies which want to hire for cheap. This is all right. But they need to do it transparently, from the beginning. They should say it clearly and upfront: for this position, our budget is somewhere in this range ... are you interested or not? This is a fair way to go.

mdb333 21 hours ago 1 reply      
It will be interesting to see how this affects the hiring markets. Out in SF/etc it came up in just about every discussion I had last time I was looking for work usually as part of the first phase. No point interviewing candidates that wouldn't accept the job. It's pretty much a risk mgmt exercise from the hiring side. Similarly, I always asked what the compensation range they're targeting is as I don't want to waste my time either.

I wonder if this ban addresses background checks covering the same information, because some companies do ask for this data from previous employers although not all provide it. Without protection there this ban seems fairly limited.

Anyhow, I don't agree with all advice to never disclose current/previous salary. In some scenarios certainly it makes sense, but in others it is the opposite. You want to justify a higher market value and set the expectation that you're unlikely to be interested unless they're willing to compensate at $X or higher. Of course it's different in terms of leverage if you're employed currently or not. Recruiters and interviewers will waste tons of your time if you don't get on the same page quickly. Lack of transparency around your compensation expectations will exacerbate this issue. Whether that means you tell them what you're making or what you'd like to make doesn't really matter, but you better do at least one of the two.

paulcole 1 day ago 10 replies      
Good. Not enough people realize the best way to answer this question is with a straight-up lie.
A Thank-You Note to the Hacker News Community from Ubuntu dustinkirkland.com
770 points by dustinkirkland  1 day ago   221 comments top 37
tie_ 1 day ago 6 replies      
I have a strong dislike for systemd, so while I'm really sorry that upstart "lost" the fight, Ubuntu gained a lot of respect in my eyes with the decision to go with the rest and avoid unnecessary fragmentation. This could have easily ended up as another community rift, slowing down everybody along the way.

Now they do it again with Wayland/Mir! It actually takes a significant amount of both balls and goodwill to give up on the product that you invested so much into for the sake of aligning better with your open source community. Bravo!

FWIW, I too would like to keep the DE experience of Unity, and especially the Dash panel and shortcuts. If that expose-text-search could scan non-focused browser tabs that would be a killer feature, but that's for the other thread.

The idea of "Let's simply ask HN users what they think" is a gem, that I suspect will now make it into many PMs' playbooks ;)

zserge 1 day ago 5 replies      
I believe that the major pitfall here is that the feedback you've received is mostly about the changes that people want to see. However if we consider the number of people who want Gnome vs the number of those who want Unity 8 vs the number of conservative users who like Unity 7 as it is now - the results might be different.

I personally am very happy with the current Unity. I find it intuitive and more aesthetically pleasant/polished than Gnome Shell (I've only used that as it comes with Ubuntu Gnome).

So please, don't drop current Unity. Or if you have to switch to Gnome Shell - please keep the user experience as close as possible to the current Unity to help users migrate.

josteink 1 day ago 0 replies      
Awesome to see our responses followed up with such attention, not to mention answered with concrete promises about what (and what not) to expect.

Good job, Canonical! Happy to be a user!

(And good job finally ditching Mir. You could have kept Unity for all I care. Linux can handle a few dozen DEs. But having more than one display server, now that was just nuts.)

Edit: while the feedback post here may have been the most discussed post on HN ever, the announcement of dropping Mir clearly made rumbles too, with a record 10,000+ upvotes in a "niche" subreddit like /r/linux. When a player like Ubuntu does the right thing, people clearly care.

icc97 1 day ago 3 replies      
> Official hardware that just-works, Nexus-of-Ubuntu (130 weight)

I really like this one. I'd made comments of wanting pre-installed Linux, but the Nexus concept is a much better idea. It's following a model that seems to have worked well for Google.

It's a fantastic way to break the chicken and egg situation of getting pre-installed Linux available.

DoofusOfDeath 1 day ago 7 replies      
Hey Dustin, thanks for the follow-up!

In your original story, I posted a request for Canonical to come up with some viable strategy to get Adobe CS (and related color-calibration HW/SW) usable on Ubuntu.

I expected a lot of traction for that suggestion, because AFAICT a lot of creative professionals are looking for a way to escape the Windows/Mac duopoly.

However, it looks like my suggestion didn't make the cut for the list you just posted.

Can you share any thoughts on why getting Adobe CS usable on Ubuntu is / isn't a strategic priority for Canonical?

apexalpha 1 day ago 1 reply      
A sincere thank you to Ubuntu from someone who didn't study anything remotely close to IT but is now a software engineer anyway.

I've always been interested in software and computers in general and, besides the Raspberry Pi, I think Ubuntu has been the biggest influence on my interest in software and decision to learn programming.

A few months back I looked up my first ever forum post. https://gathering.tweakers.net/forum/list_message/28824598#2...

Asking why my PC wouldn't boot after 14-year-old me stuck some components, including the HDD, in another PC. Turned out you needed something called 'drivers' to run a motherboard.

Shortly after this I ordered my first red Ubuntu live CD that you guys shipped and that was my first experience with Linux.

Anyway, open source projects that allow you to tinker with software and even break it played an important role in my life, and Ubuntu was my doorway to a decade of learning, playing and wonder about software and technology.

Running 16.04 LTS now. Sad that you guys are dropping Unity for GNOME but still happy with Ubuntu. I'm sure it'll work out.

Thanks, and good luck.

mi100hael 1 day ago 1 reply      
> Add night mode, redshift, f.lux (42 weight) This request is one of the real gems of this whole exercise! This seems like a nice, little, bite-sized feature that we may be able to include with minimal additional effort. Great find.

This one should come for free with the switch to Gnome, since it's now present in 3.24. https://www.gnome.org/news/2017/03/gnome-3-24-released/attac...

JepZ 1 day ago 1 reply      
I also really like the idea of 'Official hardware that just-works'. Not just for users without much technical knowledge but also for the rest of us.

I mean nowadays we somehow manage to get most hardware working 'somehow'. Sometimes it takes a few years before your sound/bluetooth/wifi chip actually does what it is supposed to do, but most of the time we find ways to make use of our hardware.

But when you are going to buy new hardware you are a bit lost. You can try to find out if there are any major problems with some hardware, search the Ubuntu hardware database, but especially for new, rare or expensive hardware there is often not much to find. For example, a few years ago I bought a 22" touchscreen for my desktop and for almost a year it somehow worked but didn't do the things it was supposed to do.

Officially supported hardware by vendors would be a great step in the right direction.

padraic7a 1 day ago 2 replies      
There's something very HN about the fact that the author of one of the most discussed posts in the site's history can issue a follow up post, dealing with that discussion, to such little notice.
Andrex 1 day ago 0 replies      
I feel guilty: I didn't respond on the initial post because I doubted the really outrageous/far-out ideas like "dump everything you've been working on for the past five years" would actually be fruitful, but man, Shuttleworth shut me right up haha. Congrats on the Ubuntu team for the courage to make bold changes when necessary and to actually (finally?) listen to pointed community feedback and constructive criticism.

I haven't been this excited for Ubuntu since 2010.

hatsunearu 1 day ago 4 replies      
Why is dropping Unity lumped with dropping MIR?

Unity is great and I love it, I don't want to lose it.

gingerbread-man 1 day ago 0 replies      
Link to the original post: https://news.ycombinator.com/item?id=14002821 ("Ask HN: What do you want to see in Ubuntu 17.10?")
Johnny_Brahms 1 day ago 1 reply      
Not related to Ubuntu, but maybe someone here knows a good way to kill the swipe left/right to navigate in blogs. I accidentally left the page about 5 times, which is rather annoying.

Using ff mobile if that helps.

Edit: hallelujah! Set dom.w3c_touch_events.enabled to 0 in about:config

dankohn1 1 day ago 1 reply      
Congrats to Dustin not just for some amazing crowdsourcing with the original post, but then for an extremely concise, thoughtful and humble followup for those of us who couldn't wade through all of those suggestions ourselves.
akerro 1 day ago 0 replies      
No one mentioned better Steam support/collaboration with Valve? CPUs and operating systems are made for gaming; it's gaming that attracts users and investors from big companies and developers. Ubuntu would only benefit if they invested in MESA and faster AMD GPU integration and cooperated with Valve on Steam clients and gamedev research for Linux.
Jerry2 1 day ago 1 reply      
I currently use Ubuntu on one of my laptops precisely because it's not GNOME. I can't stand GNOME3's hamburger menu UIs, giant title bars, lack of real menus, inability to make changes without digging into their version of dreaded "registry" and that pointless menu bar at the top. If I'm being forced to move to GNOME, I'll be switching to Fedora instead since they have a solid track record of GNOME and Wayland support.

My desktop will always be Arch, however with Cinnamon and, occasionally, i3 for development.

mwcampbell 1 day ago 1 reply      
I'm impressed with your thoroughness in processing the responses. But I'm just curious: Why did you lump usability for children and accessibility for users with disabilities into one suggestion? Anyway, returning to GNOME should help with the latter.
sandGorgon 1 day ago 2 replies      
The one thing I'm incredibly worried about is the packaging war starting all over again. There are zero reasons for having multiple packaging formats across Linux distros.

I was hoping that snap or flatpak becomes universally adopted, but it seems that Redhat and Canonical are again split along political fault lines here.

agd 1 day ago 0 replies      
" Fix /boot space, clean up old kernels (92 weight)

...committed to getting this problem solved once and for all in Ubuntu 17.10 -- and ideally getting those fixes backported to older releases of Ubuntu."

Great news! Thanks for taking this feedback onboard.

sunstone 1 day ago 0 replies      
While it's true that Ubuntu still has a few inconveniences, from my perspective it's still much better than the alternatives.

The mere thought of having to use Windows or iOS for development makes me want to curl up in the corner with my blanket.

Daily I use Ubuntu for my desktop, laptop, TV streamer and cloud servers. Overall my satisfaction level across all devices is at least 9/10.

Great job team Ubuntu!

19eightyfour 1 day ago 1 reply      
Here's a perspective on FOSS and Ubuntu. One point of free open source software was that we didn't need to be stuck with features we didn't like, decided by some majority or leadership, because we could fork our own and customize as we like. So the enthusiasm for a central body's responsiveness to feature requests is enthusiasm for something not normally associated with FOSS, which demonstrates one of three things: the Ubuntu ecosystem says it is FOSS but actually operates like something else; the fork-your-own theory of FOSS doesn't apply in practice on large projects with lots of people; or the perspective just related here is missing something.
BudFox 1 day ago 0 replies      
It is sad to see the modern and reliable Unity 7 put out to pasture. GTK was not used for Unity 7; Canonical chose Qt for Unity 8. KDE can easily replicate the Unity 7 look/feel, and Qt no longer has an objectionable license. So why is GNOME the choice over basing on KDE?
mapreri 1 day ago 1 reply      
Just a note about Reproducible Builds: "We've been working with Debian upstream on this over the last few years, and will continue to do so" — well, apart from the regular sync/merge flow from Debian to Ubuntu, AFAIK Canonical never reached out to us Reproducible Builds folks from Debian. That said, we/I plan to reach out to Ubuntu/Canonical soon :)
drej 1 day ago 1 reply      
Just an irrelevant side note:

When you do a requests.get(url), you can use the .json() method on it instead of all the json.loads([...].text). I remember doing the same for years and only discovering .json() recently, I love it.
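A minimal sketch of the difference (the locally constructed `Response` is just to keep the example offline; in real code that object would come from `requests.get(url)`):

```python
import json
import requests

# Build a Response by hand so the snippet runs without a network call;
# in real code this object would come from requests.get(url).
resp = requests.models.Response()
resp.status_code = 200
resp.encoding = "utf-8"
resp._content = b'{"release": "17.10"}'

# The long way: parse the body text yourself.
by_hand = json.loads(resp.text)

# The short way: Response.json() does the same parse for you.
parsed = resp.json()

assert parsed == by_hand == {"release": "17.10"}
```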

jseutter 1 day ago 1 reply      
Just want to chime in with others and say thank you for making this followup post. Regardless of the outcome, distilling the results the way you did means a lot to me and I'm sure others as well. Thanks!
shalmanese 1 day ago 0 replies      
It would be nice to get this list sorted by most surprising as well. A lot of these feedback points seem to fall into the category of "faster horses" which usually tend to dominate when you solicit ideas from the general public.
_jordan 1 day ago 1 reply      
I would have really liked to see a gnome2-esque style DE - bare bones, with a focus on a great Windows-like task bar / window management. I really hated the Unity dock thing and I especially hated the color scheme.
Alan_Dillman 1 day ago 0 replies      
Nice read, and I am wishing I had been in on the original topic.

If I had, I would have commented that the version upgrade process sucks for anyone that starts from a minimal base (like I do). If I go through the upgrade, I will end up with all the default stuff.

If I don't have thunderbird installed, don't install it for me.

In short: upgrades could be smarter, faster, better, stronger.

trololo12345 1 day ago 1 reply      
Will this mean that the integrated Amazon search in the desktop is also gone forever? I'd like to see Canonical making money with support instead of that Amazon thing or selling "Apps" to the user. That is the number one reason why I do not recommend Ubuntu. Other than that, I'm amazed how much the community was heard! Also thanks for the in-depth analysis blog post :)
deadfece 1 day ago 1 reply      
I think the best news out of this is hopefully that it seems you (as Canonical) have shifted away from "This is not a democracy. [...] we are not voting on design decisions." and the (paraphrased) "This is good because we [Canonical] made it!"
PleaseHelpMe 1 day ago 1 reply      
I am slightly saddened that hardly anyone mentioned Ubuntu's battery life, but people got excited about the "Nexus-of-Ubuntu" thing. I wish the best to Ubuntu.
mschuster91 1 day ago 3 replies      
Ad "LDAP/AD integration":

> This is actually a regular request of Canonical's corporate Ubuntu Desktop customers. We're generally able to meet the needs of our enterprise customers around LDAP and ActiveDirectory authentication. We'll look at what else we can do natively in the distro to improve this.

OK, I get the need that some may have to integrate a UI, but please don't ship a full-blown Samba/winbindd plus config generator as default.

Here's the why:

Everyone has different LDAP setups. Some use a homegrown LDAP, some use MS AD in varying versions, some use Samba as AD in varying versions - and then everyone uses a different LDAP/AD scheme (e.g. is the username attribute lowercase-able, which attribute is it mapped to, are all PCs/users in a single OU, do you want to restrict logins to specific groups, does the organization need a "full" AD setup or will a plain ldap_bind be sufficient ...) and you almost always need to hand-tune the configuration for your specific setup. A GUI configurator will most likely only work OOTB for people sticking with a standard MS AD, and cause problems with non-standard setups, multi-domain memberships or similar.
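To make that concrete, here's a hypothetical sssd.conf fragment (sssd is just one of several client stacks; every value below is site-specific, which is exactly why a generic config generator struggles):

```ini
[sssd]
domains = example.com
services = nss, pam

[domain/example.com]
id_provider = ldap
auth_provider = ldap
ldap_uri = ldaps://ldap.example.com
# Single OU vs. whole-tree search differs per organization:
ldap_search_base = ou=People,dc=example,dc=com
# Which attribute holds the username, and whether it is case-sensitive,
# differs between homegrown LDAP, MS AD, and Samba AD:
ldap_user_name = uid
case_sensitive = false
# Restricting logins to specific groups is another per-site decision:
access_provider = simple
simple_allow_groups = linux-login
```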

And: Non-enterprise users will most likely not need AD/LDAP support. Those who do should have competent admins anyhow, but what I can certainly say is that the documentation could be updated (e.g. https://wiki.ubuntuusers.de/Samba_Winbind/ only works for 12.04/14.04). I'd rather the documentation were improved than yet another shoddy Samba config generator that falls out of sync with Samba sooner rather than later...

(source: lost more than a few hairs wrestling with AD and LDAP)

jumasheff 1 day ago 0 replies      
Re: Make WINE and Windows apps work better

How about booting ReactOS to run Windows apps? Sounds like a Frankenstein-ish solution, but...

hysan 1 day ago 1 reply      
I was really happy when that Ask HN thread was first posted and even happier now knowing that all of the comments were read and thoroughly considered. That said, what this blog post and the general feeling I get from Ubuntu as a whole is that:

1. There is too much of a focus on how far Ubuntu has come and not enough focus on how far things still need to progress. I too remember the days when sleep/hibernate were a crapshoot, but that was when I viewed Ubuntu as an open source alternative without much expectation. Nowadays, I see Ubuntu as a mature desktop and as such, I judge it much more harshly. Anything that doesn't work or isn't 99.99% stable is a red flag for me. So I do hope that the Ubuntu team puts more focus on making things up to date, rock solid, and super stable rather than chasing after new features. Just like a building, you need a stable foundation before building upwards.

2. There isn't a clear target audience for Ubuntu Desktop. What does Ubuntu Desktop want to be? Before it was convergence and while a very neat idea, I was never clear on who was supposed to use it. The requirements screamed high income, tech savvy end users. However, development focused on Unity, Mir, etc. with work going into features that didn't fit the target audience. For example, like the post says, HiDPI & 4K were a surprise. Why was this surprising? The group that would most likely be your early adopters and trend setters are the same exact group that would have this type of hardware. Same with trackpad, gestures, customizability, flux, root on ZFS, security, etc. All of those are used heavily by the demographic most likely to follow the news on Unity and convergence. It baffles my mind that Ubuntu's Product Management couldn't make this connection and understand what core features to build out first in Unity/Mir. Yes, these are on the sidelines now, but I really hope the Product Management team takes the time to figure out some direction.

3. At the moment, Ubuntu is at a major crossroad. Even after Mark Shuttleworth's post, this post, and all of my usual Linux news following, I don't really know where Ubuntu Desktop is going to go. Tell us what we can expect as users. Tell us when we can expect it to come. Tell us how you intend on getting there. And most importantly, tell us how we can help! Either though regular posts to various communities like the Ask HN one, or ways we can contribute actual work. Not everyone is a dev, but as an example, I used to do professional QA and yet I found it extremely difficult to find out how I can help QA things and submit useful bug reports (this isn't just Ubuntu but most open source projects). The usual "check the docs/wiki" or "submit something on the issue tracker" are not helpful. In all of my years using Linux, that Ask HN thread + this blog post was the first time I ever felt like I was heard and managed to contribute to Ubuntu. Even something as simple as periodically getting feedback from the community and telling us what you heard from us makes me feel more optimistic about Ubuntu's future.

I apologize for the rant-like nature of my comment, but hopefully this gets read and something positive comes out of it. Thanks for reading.

shmerl 1 day ago 0 replies      
Thanks for deciding to focus on Wayland from now on. This will benefit everyone.
Arizhel 1 day ago 0 replies      
Instead of adopting Gnome3, they should adopt KDE. It's far more customizable (so "Easily customize, relocate the Unity launcher (53 weight)" would be already done) and much better architected than Gnome. And for people lamenting the loss of Unity, it wouldn't be that hard to make a custom theme for KDE which largely replicates the look-n-feel of Unity. Gnome simply is not set up to allow any kind of customization, and the devs actively discourage it. The opposite is true for KDE, and a distro that wants to stand out with its UI would be better served with a DE that allows them the freedom of customization.
analognoise 1 day ago 1 reply      
I stopped using Ubuntu after it came with Amazon shit installed.
Growing Ubuntu for Cloud and IoT, Rather Than Phone and Convergence ubuntu.com
745 points by popey  2 days ago   448 comments top 15
gshulegaard 2 days ago 17 replies      
I may be a minority, but I am very saddened by this. Not because I have any particular love for Unity, but rather I share Mark's conviction that convergence is the future.

Love or hate it but Unity was IMO the best shot we had at getting an open source unified phone, tablet and desktop experience...and now this is effectively Canonical not only shutting down Unity, but refocusing efforts away from convergence and towards more traditional market segments. I mourn the death of this innovative path.

That said, hopefully this convergence with GNOME will eventually lead back to convergence...but for now that dream is dead it would seem.

s_kilk 2 days ago 1 reply      
> We will shift our default Ubuntu desktop back to GNOME for Ubuntu 18.04 LTS

Legitimately never thought I'd ever see this. Possibly the best thing that could happen for desktop Linux in this age.

Edit: and of course this would mean Ubuntu/Canonical and Fedora/Red Hat basing their desktop OSes on the same platform, which can only mean easier development of desktop software and services.

hd4 2 days ago 3 replies      
I'm one of the people who asked for less NIH in Ubuntu in the recent thread https://news.ycombinator.com/item?id=14002821 but I didn't think they would take it this far. Jokes aside, it's sad that Unity won't be developed further.

I'm one of the ones who loves Unity 7, it's always been faster and less memory hungry than GNOME or KDE for me. I will just have to cling onto the LTS for as long as possible.

In the long-term I think this is good for Ubuntu and Linux users in general, less diversity can sometimes help an ecosystem form. I think many users just want a DE to stay out of the way and make life easier, so I hope some of the Ubuntu ease of use focus and community will get injected back into GNOME. I really hope a huge flood of users coming back forces them to look at their memory usage and get it under control.

brendaningram 2 days ago 7 replies      
Disclaimer: I don't use Gnome or Unity, I'm an i3 guy.

I understand that choice in the Linux world is very important, but I also think that choice (taken to extremes) can be crippling. My opinion is that we have too many desktop environments, and too many distros.

If we imagine a hypothetical scenario where in June 2010 Ubuntu committed to Gnome as the DE, imagine how much progress would have been made with Gnome in the last 7 years, not just from a coding perspective, but from a community and social perspective.

I consider it supremely important that we educate as many computer users as possible about the negative side of proprietary software (lock in subscriptions, proprietary file formats, closed source privacy concerns etc).

What Ubuntu did back in 2010 (I think) did major damage toward that vision.

I applaud Mark Shuttleworth for making the decision, even if he only got there because of commercial reasons. I really hope that Canonical and Red Hat can work together to make Gnome not just a technological success, but a social one too.

ThatGeoGuy 2 days ago 9 replies      
I get that they're moving away from convergence, but what does this ultimately mean for Ubuntu as a mobile OS? In the grand scheme of things, what does this mean for users who want a completely FOSS stack for their phone (let's ignore the baseband for now)?

As far as I can tell, this just means that your only options are Android or iOS. It's not easy to get a Jolla/SailfishOS phone that will work on most Canadian or USA networks, and with this announcement it seems that Ubuntu phones won't be around for much longer. This coupled with the death of Firefox OS means that there's really not much of a choice. Certainly you can run AOSP with no Google Apps, but not having Google Play Services tends to cause more and more problems, or at the very least means your phone is less and less capable as time goes on.

I guess in general we can all celebrate that Ubuntu is moving to GNOME / Wayland and is ditching convergence, but I think the fact that there's no healthy alternative to iOS / Android is quite sad. If Canonical is exiting the mobile space to work on other things, what other alternatives do users have?

Afforess 2 days ago 5 replies      
Damn. From the horses mouth: https://insights.ubuntu.com/2017/04/05/growing-ubuntu-for-cl...

I might be a small minority, but I _like_ Unity 7. I have never used Unity 8, and I thought the Ubuntu phone was misspent effort, but wow. Now I have to figure out if there is a way to style Gnome to look like Unity 7.

I wonder what this means for Mir vs Wayland as well.

WD-42 2 days ago 0 replies      
Wow. This is a real shock, it wouldn't be out of place for April 1st.

This is good news for linux on the desktop. Not that diversity isn't good, but Unity has been stale for years while Gnome has been progressing, but suffering from the fragmentation that Canonical caused.

Hopefully this will result in more contributions upstream, which will benefit all linux distributions. This was always the main complaint with Canonical.

sandGorgon 2 days ago 2 replies      
> We will shift our default Ubuntu desktop back to GNOME for Ubuntu 18.04 LTS

This is huge and was my #1 request in the previous post for Ubuntu 17.10. GNOME on Fedora is amazing and I have had people walk up to me and ask what OS I am running.

It is so much better for Ubuntu and Redhat to have joint stewardship of Gnome going forward rather than split energy on wasted competition.

My next biggest request is flatpak vs snappy - I can't believe that the package management wars are beginning all over again in 2017. Just pick one and be done with it. RPM and DEB will never converge, but we have a narrow window of opportunity with flatpak and snappy.

dumbmatter 2 days ago 1 reply      
This is awesome. When that "What do you want to see in Ubuntu 17.10?" post https://news.ycombinator.com/item?id=14002821 was up recently, I wanted to say "Get rid of the abomination that is Unity" but figured it'd just be flamebait. Little did I know how close my dreams were to coming true!
moystard 2 days ago 0 replies      
It takes courage to reflect on previous decisions and re-consider your product strategy. I am quite impressed by Mark Shuttleworth decision to move away from Unity and the desktop/phone convergence that has been slowing down innovation for Ubuntu, and allowed other distributions to catch up.

Every Linux users should benefit from this decision; I am excited to see the improvements they will make to the Gnome environment.

paol 2 days ago 3 replies      
I'm entirely convinced that Shuttleworth's vision of convergence will happen. It looks like an inevitability, as mobile computing power continues to grow faster than typical consumer workloads (the same forces already made it possible for $400 laptops to be good enough for most mainstream users).

Canonical just didn't have the resources to push a 3rd mobile platform. Hell, even Microsoft gave up (who did have the resources, and IMO made a mistake in giving up).

AdmiralAsshat 2 days ago 3 replies      
GNOME has been doing some really cool stuff as of late (http://www.omgubuntu.co.uk/2017/03/top-features-in-gnome-3-2...). I'm still using Cinnamon, because I still like the look a bit more, but it's getting harder to ignore all of the excellent features GNOME provides, including:

- Drive support for the file manager
- Gmail/Outlook support for GNOME accounts and built-in calendars
- Working Wayland implementation

I'm holding out at the moment, as it's missing one feature from Cinnamon that I really like (the ability to launch and control any audio player from the sound icon in the tray), but when Fedora 26 launches I may finally have to switch over.

I hope that Canonical shifting back to GNOME will further its development under Wayland and not spend a crapload of time doing more work for Mir.

doppioandante 2 days ago 1 reply      
Unity has often been criticized because it was a Canonical thing and didn't leverage Wayland, but it's another world compared to Gnome shell. I couldn't like gnome shell no matter what.

I was using Gnome shell on a old eeepc. The interface is dumbed down and you have to install a bunch of extensions to make it as functional as Unity.

What was worse is that the launcher automatically triggered the search function, and that slowed down the PC to a crawl. I'm using KDE on it now and, even though less stable, it has a decent interface and it is surprisingly snappy.

CaptSpify 2 days ago 1 reply      
I just got the Ubuntu phone. It's the first smart-phone that I don't hate. I do agree that Ubuntu needs to stop with their NIH syndrome when it comes to desktop, but the phone market is just flat out terrible.

I would be willing to spend a lot of money for a decent FOSS phone, but there just isn't anything out there. Ubuntu was my only hope |:(

franciscop 2 days ago 1 reply      
Wow, this is impressive coming just after the Ask HN they did a few days back. Users have been complaining about and opposing Mir for a few years, so it seems it just took that last cry of feedback. It's great that they listen to their users and that some love is going back to the desktop+server (+IoT).

Disclaimer: I am really happy using Ubuntu everyday.

Uber finds one allegedly stolen Waymo file on an employees personal device techcrunch.com
511 points by folz  2 days ago   333 comments top 20
Animats 2 days ago 8 replies      
Judge Alsup:

"If your guy is involved in criminal activity and has to have criminal lawyers of the caliber of these two gentlemen, who are the best, well, okay they got the best. But it's a problem I can't solve for you. And if you think I'm going to cut you some slack because you're looking at -- your guy is looking at jail time, no. They [Waymo] are going to get the benefit of their record. And if you don't deny it -- if all you do is come in and say, 'We looked for the documents and can't find them,' then the conclusion is they got a record that shows Mr. Levandowski took it, and maybe still has it. And he -- he's still working for your company. And maybe that means preliminary injunction time. Maybe. I don't know. I'm not there yet. But I'm telling you, you're looking at a serious problem."


"Well, why did he take [them] then?". "He downloaded 14,000 files, he wiped clean the computer, and he took [them] with him. That's the record. He's not denying it. You're not denying it. No one on your side is denying he has the 14,000 files. Maybe you will. But if it's going to be denied, how can he take the 5th Amendment? This is an extraordinary case. In 42 years, I've never seen a record this strong. You are up against it. And you are looking at a preliminary injunction, even if what you tell me is true."

Uber is having a very bad day when a Federal judge starts talking like that. A preliminary injunction looks likely. If Uber can't find anything, this goes against them. Nobody has denied that Levandowski copied the files. Uber paid $600 million for Otto's technology and people. Even if the files didn't make it to Uber's computers, Waymo can probably get a preliminary injunction shutting down much of Uber's self-driving effort. Then Uber gets to argue that their technology is different from Waymo's. It's going to be hard to argue independent invention when all the people are from Google's project.

YCode 2 days ago 4 replies      
Offhand, this kind of sounds like a parent asking their teenager to go and search their own room for drugs.

"Nah, I didn't find anything. I found this plastic bag that looks like it mighta had something in it, but I'm pretty sure my friend left it here and it was empty when he brought it."

"Okay son, go search again."

matt4077 2 days ago 2 replies      
This judge is mighty impressive, and since it's so much in fashion these days to be suspicious of institutions, I want to highlight this passage:

THE COURT: If you all keep insisting on redacting so much information, like -- and you're the guilty one on that, Mr. Verhoeven -- then arbitration looks better and better. Because I'm not going to put up with it. If we're going to be in a public proceeding, 99 percent of what -- 90 percent, anyway, has got to be public.[..]

THE COURT: The best thing -- if we were -- one of the factors that you ought to be considering is maybe you should -- if you want all this stuff to be so secret, you should be in arbitration. You shouldn't be trying to do this in court and constantly telling them not to, or you putting in -- the public has a right to see what we do. [..] And I feel that so strongly. I am not -- the U.S. District Court is not a wholly owned subsidiary of Quinn Emanuel or Morrison & Foerster or these two big companies. We belong to the public. And if this continues, then several things are going to happen. One, we're going to call a halt to the whole -- we're going to stop everything. And we're going to have document-by-document hearings in this room,

ABCLAW 2 days ago 2 replies      
It is surprising that Google did not push the court to appoint a third party discovery firm to handle the device imaging process and to provide a report to the court.

Maybe both parties' intense desire for privacy in this matter has driven Google to this strategy.

The seeming ludicrousness of the result - Alsup's "go try again, harder this time" - is not caused by this case's parties playing badly. It is caused by poorly defined and understood laws surrounding what constitutes a defensible search. Data handling in this stage of legal proceedings is imperfect, and can be manipulated by both parties to drive up the cost of litigation, or to strategically avoid disclosing the key breadcrumb documentation that would otherwise have led to the smoking gun(s).

Edit: Please find the court reporter transcript here: http://www.documentcloud.org/documents/3533784-Waymo-Uber-3-...

Judge Alsup's comments are fairly aggressive in comparison to most commercial litigation, but the no-nonsense tone is par for the course.

themgt 2 days ago 3 replies      
'To the extent Uber tries to excuse its noncompliance on the grounds that Mr. Levandowski has invoked the Fifth Amendment and refused to provide Uber with documents or assistance, Waymo notes that Mr. Levandowski remains to this day an Uber executive and in charge of its self-driving car program. "Uber has ratified Mr. Levandowski's behavior and is liable for it," Waymo attorney Charles K. Verhoeven wrote in a letter to the court (emphasis his).'


golfer 2 days ago 0 replies      
Interesting statement here from the judge and Uber's attorney (Gonzalez). Gonzalez worked for Alsup at some point in their careers.

Judge Alsup: Look. I want you to know I respect both sides here. And everyone knows I know Mr. Gonzalez from the days when he was a young associate and I was a partner, and he was working for me on cases. And he has gone on to be a much better lawyer than I ever was. But you shouldn't have asked for in camera on this. This could have all been done in the open. I'm sorry that Mr. Levandowski has got his -- got himself in a fix. That's what happens, I guess, when you download 14,000 documents and take them, if he did. But I don't hear anybody denying that.


discodave 2 days ago 0 replies      
I just realized how stark the prisoner's dilemma between Uber and Levandowski is. Based on what Alsup was saying today -- that if Uber can't produce counter-evidence by May 3rd, they are staring down the barrel of a preliminary injunction -- they're damned if they fire him and damned if they don't.

1. Levandowski remains at Uber. He keeps asserting his Fifth Amendment rights, which means that Uber can't present evidence to rebut Google's theft claims. The judge grants a preliminary injunction, sad trombone, no self-driving cars for Uber.

2. Uber fires Levandowski. Now, he has no reason to protect Uber, the incentives for him are to avoid criminal prosecution. He could even do a deal with Google or a prosecutor to cooperate in the civil case in exchange for avoiding criminal prosecution. Uber is then likely to lose the actual case, sad trombone, no self driving cars for Uber.

As others have pointed out, the stakes for Uber are incredibly high: they missed the China train, and if they can't catch the self-driving-car train, then their $50+ billion valuation is up in smoke.

Man I wish I could be shorting Uber right now.

checkdigit15 2 days ago 1 reply      
"Judge William Alsup, who is presiding over the case, ordered Uber to search more thoroughly for the documents."

Judge Alsup always winds up with the most interesting cases :-)

fowlerpower 2 days ago 4 replies      
This story is fascinating for tech people everywhere and we should all pay attention.

We all have big dreams of starting our own company some day (I know I do), and many of us work for big corporations that would rather we never go anywhere and work for as little as possible. (Admittedly the markets are forcing them to pay us a lot, but they aren't doing it out of good will.)

The outcome of this will teach us all very valuable lessons. I can't be the only one who is a little paranoid that if I start my own shit I'll be sued or that I may even be sued for some of the side projects I'm working on even though I've never taken any code or resources from my company.

DannyBee 2 days ago 3 replies      
A lot of people seem confused by the idea that a party can request personal documents someone else has.

Just like in criminal land, civil land has subpoenas. Parties can issue subpoenas for most things to other parties. In federal court, civil subpoenas are covered by Federal Rules of Civil Procedure Rule 45.


Outside of the exceptions listed, yes, you would be required to produce information you have.

woodandsteel 2 days ago 1 reply      
As a former grokoholic, I must say all this heavy-duty legal drama makes me miss Groklaw and pj.
aresant 2 days ago 1 reply      
This resolved to the Judge ordering a deeper search:

"[The Judge] told Uber to search using 15 terms provided by Waymo, first on the employees' computers that had already been searched, then on 10 employees' computers selected by Waymo, and then on all other servers and devices connected to employees who work on Uber's LiDAR system."

Seems interesting that there's not a more comprehensive system or way to search for these since Google is clearly in possession of the specific documents they claim are stolen.

The way the Judge's order has them looking for "15 terms" almost makes it seem like the extent of the original search was tied to file names or document titles or something?

dmritard96 2 days ago 3 replies      
Are there any open-source autonomous/driverless car projects with substantial momentum? This seems like something so foundational to the next 50 to 100 years that it needs to be 'owned' by everyone.
woodandsteel 2 days ago 0 replies      
It seems to me that even if Uber proves that Levandowski never downloaded the files to Otto, much less Uber, they still are in deep trouble unless they can prove that he never laundered the information in them through his brain to help Otto or Uber develop their technology.
joshu 2 days ago 0 replies      
I assume "14000 documents" is one repo checkout.
umanwizard 2 days ago 0 replies      
I don't really get how the orders to search for documents on employee-owned devices are possibly enforceable. What stops employees with incriminating data from just throwing their devices in a river before they can be searched?
dfar1 2 days ago 0 replies      
1 file is too many files.
jankassens 2 days ago 1 reply      
How would Uber find files on some employees personal device?
siliconc0w 2 days ago 1 reply      
It seems like Uber has to prove a negative here -- because Google has evidence Levandowski took the files, they need to show they don't have them? Or that the files weren't involved in their self-driving IP? Not sure how they're supposed to do that.
MegaButts 2 days ago 2 replies      
I get the feeling 2017 is going to be a very entertaining (from a news perspective) year for tech.
Ride-hailing apps may help to curb drunk driving economist.com
507 points by petergatsby  2 days ago   364 comments top 11
jhpaul 2 days ago 12 replies      
Based on personal experience this isn't surprising. When I was in college in a small town in the late 00's, many friends and acquaintances would drive across town after heavy drinking. 10-15 minutes, straight roads, little traffic. Other than not going out, or not getting home, you didn't have many options. It was an hour-long walk or a $20-40 cab ride, if you could get one.

I was recently back in town for a wedding and our uber to the hotel from the venue (about the same distance) was $9.

Once Uber came to town (I'm now on the east coast), I instantly noticed many friends who used to drive would take an Uber instead. It's cheap, it's easy to call one from a crowded/loud place, you know how much it will cost, they don't use cash, they know where you are, and you don't have to give directions. For someone intoxicated (or anyone really), these are game changers.

It's the difference between "who's going to drive?" and "who's calling an uber?"

Company politics aside, the accessibility of ride-sharing services introduces numerous real safety benefits on top of the obvious convenience.

ericfrederich 2 days ago 3 replies      
Let's file this one under "duh"

It's not ride-hailing apps themselves. Has nothing to do with the fact that it's someone else's car or that you use an app instead of a phone number to order it.

It's the fact that you can get home now for $10 only waiting 5 minutes from the time you decided you want to leave.... compared to paying $60 and waiting an hour.

kalleboo 2 days ago 2 replies      
Or conversely, the taxi medallion system is likely responsible for 24-35% of drunk driving incidents...
robbiet480 2 days ago 1 reply      
Another study, from July 2016, found that Uber doesn't prevent many drunk-driving accidents. However, the study referenced in the Economist only focused on NYC, whereas the 2016 study looked at multiple metropolitan areas.

[1]: http://aje.oxfordjournals.org/content/early/2016/07/22/aje.k...

itchyjunk 2 days ago 5 replies      
Hmm, the study [0] credits the overall reduction of drunk-driving accidents to Uber. If ride sharing is the source of the reduction, shouldn't ride sharing in general be credited? Maybe Uber was the only ride-share available during the study, though, since it's data from 1989-2013.

"A recent increase in the ease and availability of alternative rides for intoxicated passengers partially explains the steep decrease in alcohol-related collisions in New York City since 2011. I examine the specific case of Uber's car service launch in New York City in May 2011, a unique example of a sudden increase in cab availability for intoxicated passengers. This study draws on a dataset of all New York State alcohol-related collisions maintained by the New York State Department of Motor Vehicles from 1989 through 2013. My inference is based on the variation in Uber access across New York State counties over time and the careful choice of New York State counties that provide an appropriate control group for New York City's drunk-driving behavior"


Fair enough, looks like Lyft only came to NYC around 2014 [1]. But does anyone know if ride-share prices in NYC have significantly changed from 2011 [2] to now? I vaguely remember a lot of people using it initially because of dirt-cheap prices during the first few months after introduction, but I don't trust my memory over facts if someone has some.



[1] https://en.wikipedia.org/wiki/Lyft#History

[2] https://techcrunch.com/2011/04/06/i-just-rode-in-an-uber-car...

spodek 2 days ago 1 reply      
I've wondered if cell phones have led to fewer altercations on subways as people spend more time focused on them, giving them less reason or chance to argue and fight with others.
vannevar 2 days ago 1 reply      
Proving another benefit of subsidized public transportation. In Uber's case, low-cost rides subsidized by its investors.
rubicon33 2 days ago 1 reply      
Another bright light on the horizon: self-driving cars. I'm hugely optimistic that within the next 100 years, we could very possibly see deaths from drunk driving shrink to a small fraction of their current number. It's a great thing when profit-driven businesses also have positive side effects for everyone.
dsacco 2 days ago 2 replies      
The study was originally published in January. This article is being published in April.

I subscribe to the Economist and I enjoy it, but I am cynical about why this is being published now, amidst significant negative publicity for Uber.

Dotnaught 2 days ago 9 replies      
Perhaps that should be phrased "the availability of drivers using a ride sharing platform has helped reduce drunk driving accidents."

If Uber doesn't accept drivers as employees and isn't responsible for their actions, it shouldn't be credited with their collective contributions to a safer society.

Unsupervised sentiment neuron openai.com
593 points by gdb  1 day ago   124 comments top 29
ericjang 1 day ago 5 replies      
Why are people being so critical about this work? Sure, the blog post provides a simplified picture about what the system is actually capable of, but it's still helpful for a non-ML audience to get a better understanding of the high-level motivation behind the work. The OpenAI folks are trying to educate the broader public as well, not just ML/AI researchers.

Imagine if this discovery were made by some undergraduate student who had little experience in the traditions of how ML benchmark experiments are done, or was just starting out her ML career. Would we be just as critical?

As a researcher, I like seeing shorter communications like these, as it illuminates the thinking process of the researcher. Read ML papers for the ideas, not the results :)

I personally don't mind blog posts that have a bit of hyped-up publicity. It's thanks to groups like DeepMind and OpenAI that have captured public imagination on the subject and accelerated such interest in prospective students in studying ML + AI + robotics. If the hype is indeed unjustified, then it'll become irrelevant in the long-term. One caveat is that researchers should be very careful to not mislead reporters who are looking for the next "killer robots" story. But that doesn't really apply here.

srush 1 day ago 0 replies      
If you are interested in looking at the model in more detail, we (@harvardnlp) have uploaded the model features to LSTMVis [1]. We ran their code on amazon reviews and are showing a subset of the learned features. Haven't had a chance to look further yet, but it is interesting to play with.

[1] http://lstm.seas.harvard.edu/client/pattern_finder.html?data...

1024core 1 day ago 8 replies      
I don't know, but this seems a bit hyped in places.

They start with:

> Our L1-regularized model matches multichannel CNN performance with only 11 labeled examples, and state-of-the-art CT-LSTM Ensembles with 232 examples.

Hmm, that sounds pretty impressive. But then later you read:

> We first trained a multiplicative LSTM with 4,096 units on a corpus of 82 million Amazon reviews to predict the next character in a chunk of text. Training took one month across four NVIDIA Pascal GPUs

Wait, what? How did "232 examples" transform into "82 million"??

OK, I get it: they pretrained the network on the 82M reviews, and then trained the last layer to do the sentiment analysis. But you can't honestly claim that you did great with just 232 examples!
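To make that distinction concrete, here is a hedged toy sketch of the recipe in scikit-learn. The random vectors below merely stand in for features extracted from the pretrained mLSTM (an assumption for illustration; nothing here reproduces the actual model), and an L1-regularized logistic regression is then fit on a few dozen "labeled" examples:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-ins for features from a pretrained char-level model (hypothetical):
# one dimension carries a "sentiment" signal, the rest are noise.
n_labeled, n_features = 32, 256
labels = rng.integers(0, 2, size=n_labeled)
features = rng.normal(size=(n_labeled, n_features))
features[:, 0] = labels * 2.0 - 1.0 + 0.1 * rng.normal(size=n_labeled)

# The L1 penalty drives most weights to exactly zero, so with few labeled
# examples the classifier leans on the informative unit(s) alone.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
clf.fit(features, labels)

print("nonzero weights:", np.count_nonzero(clf.coef_), "of", n_features)
print("train accuracy:", clf.score(features, labels))
```

The expensive unsupervised pretraining (82M reviews, a month of GPU time) is what makes the cheap supervised step possible; the "232 examples" only buys the final linear layer.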

YCode 1 day ago 4 replies      
The synthetic text they generated was surprisingly realistic, despite being generic.

If I were perusing a dozen reviews I probably wouldn't have spotted the AI-generated ones in the crowd.

nl 1 day ago 1 reply      
So char-by-char models is the next Word2Vec then. Pretty impressive results.

It would be interesting to see how it performed for other NLP tasks. I'd be pretty interested to see how many neurons it uses to attempt something like stance detection.

Data-parallelism was used across 4 Pascal Titan X gpus to speed up training and increase effective memory size. Training took approximately one month.

Every time I look at something like this, I find a line like that and go: "OK, that's nice... I'll wait for the trained model."

emcq 1 day ago 2 replies      
It's very difficult to understand what the contributions are here. From what I've read so far this feels more of a proposal for future research or a press release than advancing the state of the art.

* Using large models trained on lots of data to provide the foundation for sample efficient smaller models is common.

* Transfer learning, fine tuning, character RNNs is common.

Were there any insights learned that give a deeper understanding of these phenomena?

Not knowing too much about the sentiment space, it's hard to tell how significant the resulting model is.

mdibaiee 3 hours ago 0 replies      
As far as I understand, it means that there must be a relation between a character's sentiment and what the next character can (/should) be for the neural network to use this as a feature, am I right?

Does this mean we have unconsciously developed a language that exposes such relations?

wackspurt 1 day ago 0 replies      
(Apologies for the slightly incoherent post below)

I've been noticing a lot of work that digs into ML model internals (as they've done here to find the sentiment neuron) to understand why they work or use them to do something. Let me recall interesting instances of this:

1. Sander Dieleman's blog post about using CNNs at Spotify to do content-based recommendations for music. He didn't write about the system performance but collected playlists that maximally activated each of the CNN filters (early layer filters picked up on primitive audio features, later ones picked up on more abstract features). The filters were essentially learning the musical elements specific to various subgenres.

2. The ELI5 - Explain Like I'm Five - Python Library. It explains the outputs of many linear classifiers. I've used it to explain why a text classifier was given a certain prediction: it highlights features to show how much or little they contribute to the prediction (dark red for negative contribution, dark green for positive contribution).

3. FairML: Auditing black-box models. Inspecting the model to find which features are important. With privacy and security concerns too!

Since deep learning/machine learning is very empirical at this stage, I think improvements in instrumentation can lead to ML/DL being adopted for more kinds of problems, for example chemical/biological data. I'd be highly curious what new ways of inspecting such kinds of data would be insightful (we can play audio input that maximally activates filters for a music-related network, we can visualize what filters are learning in an object detection network, etc.)

tshadley 1 day ago 0 replies      
"The selected model reaches 1.12 bits per byte." (https://arxiv.org/pdf/1704.01444.pdf)

For context, Claude Shannon found that humans could model English text with an entropy of 0.6 to 1.3 bits per character (http://languagelog.ldc.upenn.edu/myl/Shannon1950.pdf)
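For intuition about those units: the zeroth-order (unigram) entropy of a character stream ignores all context, and for English text it comes out around 4 bits/char, so a model near 1 bit/byte is earning its keep from context. A quick sketch of the unigram bound:

```python
import math
from collections import Counter

def unigram_bits_per_char(text):
    """Empirical entropy of the character frequency distribution, in bits/char."""
    counts = Counter(text)
    n = len(text)
    return sum(-(c / n) * math.log2(c / n) for c in counts.values())

print(unigram_bits_per_char("aaaa"))      # 0.0: a constant string carries no information
print(unigram_bits_per_char("abababab"))  # 1.0: a balanced two-symbol alphabet
print(round(unigram_bits_per_char("the quick brown fox jumps over the lazy dog"), 2))
```

Shannon's 0.6-1.3 bits/char range and the model's 1.12 bits/byte both sit far below this context-free bound because humans and the mLSTM alike exploit context.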

stillsut 14 hours ago 0 replies      
> The model struggles the more the input text diverges from review data

This is where I fear the results will fail to scale. The ability to represent 'sentiment' as one neuron, and its ground truth as uni-dimensional seems most true to corpuses of online reviews where the entire point is to communicate whether you're happy with the thing that came out of the box. Most other forms of writing communicate sentiment in a more multi-dimensional way, and the subject of sentiment is more varied than a single item shipped in a box.

In otherwords, the unreasonable simplicity of modelling a complex feature like sentiment with this method, is something of an artifact of this dataset.

itchyjunk 1 day ago 2 replies      
I would imagine stuff like sarcasm is still out of reach though. It seems hard for humans to understand it in text-based communication. Also, using anything outside the standard sentiment model might throw it off: "This product is as good as <product x>" (where product x has been known to perform badly). I am just trying to think of scenarios where a sentiment model would fail.

The sentiment neuron sounds fascinating too. I didn't realize individual neurons could be talked about or understood outside of the context of the whole NN. I am thinking in terms of the "black box" it's often referred to as in some articles.

Since one of the research goals for OpenAI is to train a language model on jokes [0], I wonder how this neuron would perform with a joke corpus.


[0] https://openai.com/requests-for-research/#funnybot

eanzenberg 21 hours ago 0 replies      
I think one of the most amazing parts of this is how accessible the hardware is right now. You can get world-class AI results for less than the cost of most used cars. In addition, with so many resources freely available through open source, it is very easy to get started.
aabajian 1 day ago 2 replies      
I'm trying to understand this statement:

"The sentiment neuron within our model can classify reviews as negative or positive, even though the model is trained only to predict the next character in the text."

If you look closely at the colorized paragraph in their paper/website, you can see that the major sentiment jumps (e.g. from green to light-green and from light-orangish to red) occur with period characters. Perhaps the insight is that periods delineate the boundary of sentiment. For example:

I like this movie.
I liked this movie, but not that much.
I initially hated the movie, but ended up loving it.

The period tells the model that the thought has ended.

My question for the team: How well does the model perform if you remove periods?
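The remove-the-periods question is cheap to probe on a stand-in model. This sketch is emphatically not the OpenAI mLSTM: it is a character n-gram logistic regression on made-up data, used only to illustrate the shape of the experiment:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training set with punctuation left in (made-up data).
docs = ["i like this movie.", "i hated this movie.",
        "loved it. great fun.", "awful. total waste."]
labels = [1, 0, 1, 0]

clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
clf.fit(docs, labels)

test = "i initially hated the movie, but ended up loving it."
stripped = test.replace(".", "").replace(",", "")

# Compare the positive-class probability with and without punctuation.
for variant in (test, stripped):
    print(repr(variant), round(float(clf.predict_proba([variant])[0][1]), 3))
```

For the real model one would re-score the same reviews with periods deleted and watch whether the sentiment unit still swings at clause boundaries.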

d--b 1 day ago 3 replies      
Can someone explain what is "unsupervised" about this? I'm guessing this is what confuses me most.

I think this work is interesting, although when you think about it, it's kind of normal that the model converges to a point where there is a neuron that indicates whether the review is positive or negative. There are probably a lot of other traits that can be found in the "features" layer as well.

There are probably neurons that can predict the geographical location of the author, based on the words they use.

There are probably neurons that can predict that the author favors short sentences over long explanations.

But what makes this "unsupervised"?

andreyk 1 day ago 0 replies      
I think it's fair to criticize this blog post for being unclear on what exactly is novel here; pre-training is a straightforward and old idea, but the blog post does not even mention this. Having accessible write-ups for AI work is great, but surely it should not be confusing to domain experts or be written in such a way as to exacerbate the rampant oversimplification or misreporting in the popular press about AI. Still, it is a cool mostly experimental/empirical result, and it's good that these blog posts exist these days.

For what it's worth, the paper predictably does a better job of covering the previous work and stating what their motivation was: "The experimental and evaluation protocols may be underestimating the quality of unsupervised representation learning for sentences and documents due to certain seemingly insignificant design decisions. Hill et al. (2016) also raises concern about current evaluation tasks in their recent work which provides a thorough survey of architectures and objectives for learning unsupervised sentence representations - including the above mentioned skip-thoughts. In this work, we test whether this is the case. We focus in on the task of sentiment analysis and attempt to learn an unsupervised representation that accurately contains this concept. Mikolov et al. (2013) showed that word-level recurrent language modelling supports the learning of useful word vectors and we are interested in pushing this line of work. As an approach, we consider the popular research benchmark of byte (character) level language modelling due to its further simplicity and generality. We are also interested in evaluating this approach as it is not immediately clear whether such a low-level training objective supports the learning of high-level representations." So, they question some built-in assumptions from the past by training on lower-level data (characters), with a bigger dataset and more varied evaluation.

The interesting result they highlight is that a single model unit is able to perform so well with their representation: "It is an open question why our model recovers the concept of sentiment in such a precise, disentangled, interpretable, and manipulable way. It is possible that sentiment as a conditioning feature has strong predictive capability for language modelling. This is likely since sentiment is such an important component of a review" , which I tend to agree with... train a on a whole lot of reviews, it's only natural to train a regressor for review sentiment.

huula 1 day ago 0 replies      
Machine Learning has become more and more like archaeology now that people say "empirically" more often and only provide a single or limited set of datasets.
kamalbanga 1 day ago 0 replies      
What they have done is semi-supervised learning (Char-RNN) + supervised training of sentiment. Another way to do it is semi-supervised learning (Word2Vec) + supervised training of sentiment. If the first approach works better, does it imply that character-level learning is more performant than word-level learning?
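One crude way to poke at that question is to swap the analyzer in a bag-of-n-grams pipeline. This uses scikit-learn rather than a Char-RNN or Word2Vec, so it only illustrates the character-level vs word-level granularity difference on an invented corpus, not either paper's method:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Made-up corpus purely for illustration.
docs = ["loved it, great fun", "hated it, awful mess",
        "great and fun", "awful and boring"]
labels = [1, 0, 1, 0]

# Same model, two granularities: word unigrams vs character 3-grams.
word_clf = make_pipeline(TfidfVectorizer(analyzer="word"), LogisticRegression())
char_clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 3)), LogisticRegression()
)

# Character n-grams can generalize to unseen inflections like "greatest",
# which the word-level model treats as out-of-vocabulary tokens.
for name, model in (("word", word_clf), ("char", char_clf)):
    model.fit(docs, labels)
    print(name, model.predict(["greatest movie", "awfully boring"]))
```

The character model shares sub-word n-grams between "awful"/"awfully" and "great"/"greatest", which is one reason char-level inputs can be attractive despite the longer sequences.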
gallerdude 1 day ago 1 reply      
The neural network is savage enough to learn "I would have given it zero stars, but that was not an option." Are we humans that predictable?
anonymfus 1 day ago 1 reply      
This article is not accessible. It puts all textual examples into images and even has some absolutely unnecessary animation. Please fix it.
ChuckMcM 1 day ago 0 replies      
This is a great name for a band :-). That said, I found the paper really interesting. I tend to think about LSTM systems as series expansions and using that as an analogy don't find it unusual that you can figure out the dominant (or first) coefficient of the expansion and that it has a really strong impact on the output.
du_bing 1 day ago 0 replies      
Training on a character-by-character basis is really incredible, quite opposite to human intuition about language, but it seems a brilliant idea, and OpenAI tried it out. Great!
kvh 1 day ago 0 replies      
Impressive the abstraction NNs can achieve from just character prediction. Do the other systems they compare to also use 81M Amazon reviews for training? Seems disingenuous to claim "state-of-the-art" and "less data" if they haven't.
auvi 1 day ago 1 reply      
Just wondering, how many AI programs (models with complete source code) has OpenAI released?
djangowithme 1 day ago 0 replies      
Why is the linear combination used to train the sentiment classifier? Why does its result get taken into account?

Is this linear combination between 2 different strings?

mrfusion 1 day ago 1 reply      
Why did they do this character by character? Would word by word make sense? Other than punctuation, I'm not seeing why specific characters are meaningful units.
changoplatanero 1 day ago 0 replies      
What's the easiest way to make a text heatmap like the ones in their blog?
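One easy way is to emit HTML with one background-colored span per character. A minimal sketch (the scores here are invented, not taken from the model; it maps values in [-1, 1] onto a red-to-green scale):

```python
import html

def heatmap_html(text, scores):
    """Wrap each character in a span whose background encodes a score in [-1, 1]."""
    spans = []
    for ch, s in zip(text, scores):
        red = int(255 * max(0.0, -s))    # negative scores tend red
        green = int(255 * max(0.0, s))   # positive scores tend green
        spans.append(
            f'<span style="background: rgba({red},{green},0,0.4)">{html.escape(ch)}</span>'
        )
    return "".join(spans)

text = "not bad"
scores = [-0.8, -0.6, -0.2, 0.1, 0.5, 0.7, 0.9]  # made-up per-char sentiment
print(heatmap_html(text, scores))
```

Write the returned string to a .html file and open it in a browser; matplotlib can render colored text too, but HTML is simpler for long passages.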
sushirain 1 day ago 1 reply      
Very interesting. I wonder if they tried to predict part-of-speech tags.
grandalf 1 day ago 0 replies      
This has amazing potential for use in sock puppet accounts.
curuinor 1 day ago 0 replies      
moved that needle I guess
The reference D compiler is now open source dlang.org
515 points by jacques_chirac  23 hours ago   265 comments top 25
jordigh 22 hours ago 2 replies      
Walter, thank you so much for finally doing this! I am so happy that Symantec finally listened. It must have been really frustrating to have to wait so long for this to happen. I have really been enjoying D and I love all the innovation in it. I'm really looking forward to seeing the reference compiler packaged for free operating systems.

Thanks again, this news makes me very happy!

iamNumber4 19 hours ago 7 replies      
Good news indeed.

Switched to D 4 years ago, and have never looked back. I wager that you can sit down a C++/Java/C# veteran and say: write some D code, here's the manual, have fun. Within a few hours they will be comfortable with the language and be a fairly competent D programmer. There's very little FUD surrounding switching to yet another language with D.

D's only issue is that it does not have general adoption, which I'm willing to assert is only because it's not at the forefront as the cool kids' language of the week. Which is a good thing. New does not always mean improved. D has a historical nod to languages of the past and is trying to improve on the strengths of C/C++, smooth out the rough edges, and adopt more modern programming concepts. Especially with trying to be ABI compatible, it's a passing of the torch from the old guard to the new.

Regardless of your thoughts on D, my opinion is: I'm sold on D, and it's here to stay. In 10 years D will still be in use, whereas the fad languages will just be footnotes in Computer Science history, nice experiments that brought in new ideas but were just too far out in the fringes, limiting themselves to the "thing/fad" of that language.

tombert 22 hours ago 2 replies      
Honestly, since I'm slightly psychotic about these things, this is a kind of huge to me. Part of the reason I never learned D was because the compiler was partly proprietary.

Now I have no excuse to avoid learning the language, and that should be fun.

WalterBright 22 hours ago 4 replies      
And best of all, it's the Boost license!

Here it is:


vram22 18 hours ago 1 reply      
Good to hear the news, and congrats to all involved.

Since I see some comments in this thread, asking what D can be used for, or why people should use D, I'm putting below, an Ask HN thread that I had started some months ago. It got some interesting replies:

Ask HN: What are you using D (language) for?


brakmic 21 hours ago 4 replies      
Please don't get me wrong, as I don't want to start a flame here, but why do they call D a "systems programming language" when it uses a GC? Or is it optional? I'm just reading through the docs. They do have a command line option to disable the GC, but anyway... this GC thing is, imho, a no-go when it comes to systems programming. It reminds me of Go, which started as a "systems programming language" too but later switched to a more realistic "networking stack" positioning.


JoshTriplett 22 hours ago 2 replies      
Interesting change! Before, people had a choice between the proprietary Digital Mars D (dmd) compiler, or the GCC-based GDC compiler. And apparently, since the last time I looked, also the "LDC" compiler that used the already-open dmd frontend but replaced the proprietary backend with LLVM.

I wonder how releasing the dmd backend as Open Source will change the balance between the various compilers, and what people will favor going forward?

bluecat 18 hours ago 0 replies      
Something I always thought was cool about dlang was that you can talk to the creator of the programming language on the forums. I don't write much D code as of now, but I visit the forums every day for the focused technical discussions. Anyways, congrats on the big news!
softinio 22 hours ago 4 replies      
What's special about D? Why should I learn it?
xtreak_1 6 hours ago 0 replies      
Thanks a lot! I'm also amazed that the forum performs like any other day even though the story is at the top of HN.
noway421 4 hours ago 0 replies      
It's really surprising that to this day there are languages in use whose reference implementation is closed source. The optimizations and collaboration possible when it's open are invaluable.
jacquesm 16 hours ago 1 reply      
That is excellent news :)

Congratulations Walter, now let's see D take over the world.

saosebastiao 22 hours ago 2 replies      
This was something that always rubbed me the wrong way about the language, and it was an impediment to adoption for me (for D, but also Shen and a few others). In this era, there is no excuse for a closed-source reference compiler (I couldn't care less if it's not a reference compiler, I just won't use it). I'm surprised it took this long to do this; it seems like D has lost most of its relevance by now...relevance it could have kept with a little more adoption. I wonder if it can recover.
Samathy 19 hours ago 0 replies      
Amazing! D has really been exciting me for the past couple of years. It has great potential.

Hopefully a fully FOSS compiler will bring it right into the mainstream.

petre 20 hours ago 1 reply      
This is great news. I was using LDC because the DMD backend was proprietary. Thank you Walter, Andrei and whoever made this possible.
nassyweazy 3 hours ago 0 replies      
This is an awesome news!
virmundi 18 hours ago 1 reply      
Have there been any new books out there for learning D? I have one that still references the Collection Wars (Phobos vs Native). Once I saw that, I put the book back on the shelf and stuck with Java.
tbrock 16 hours ago 0 replies      
Is anyone else surprised that it wasn't before?
imode 10 hours ago 0 replies      
What is it like to 'bootstrap' D? I know in many languages you can forego the standard library and 'bootstrap' yourself on small platforms (C being the main example).
zerr 21 hours ago 4 replies      
Anybody worked on performance critical stuff in D? How good is its GC?
snackai 22 hours ago 0 replies      
This is big. I've heard from many people that this hinders adoption.
herickson123 14 hours ago 0 replies      
I wanted to play around with D using the DMD compiler, but it's unfortunate that I have to install VS2013 and the Windows SDK to get 64-bit support on Windows. I've installed VS in the past and found it to be a bloated piece of software; that's something I'm not willing to do again.
spicyponey 9 hours ago 0 replies      
Tremendous effort. Congrats.
joshsyn 21 hours ago 2 replies      
Please get rid of GC :(

I want to have smart pointers instead

sgt 19 hours ago 0 replies      
Off topic question; are you related to https://en.wikipedia.org/wiki/Jacques_Chirac ?
Machine learning without centralized training data googleblog.com
542 points by nealmueller  1 day ago   99 comments top 27
nostrademons 1 day ago 6 replies      
This is one of those announcements that seems unremarkable on read-through but could be industry-changing in a decade. The driving force behind consolidation & monopoly in the tech industry is that bigger firms with more data have an advantage over smaller firms because they can deliver features (often using machine learning) that users want and small startups or individuals simply cannot implement. This, in theory, provides a way for users to maintain control of their data while granting permission for machine-learning algorithms to inspect it and "phone home" with an improved model, without revealing the individual data. Couple it with a P2P protocol and a good on-device UI platform and you could in theory construct something similar to the WWW, with data stored locally, but with all the convenience features of centralized cloud-based servers.
whym 1 day ago 0 replies      
Their papers mentioned in the article:

Federated Learning: Strategies for Improving Communication Efficiency (2016) https://arxiv.org/abs/1610.05492

Federated Optimization: Distributed Machine Learning for On-Device Intelligence (2016) https://arxiv.org/abs/1610.02527

Communication-Efficient Learning of Deep Networks from Decentralized Data (2017) https://arxiv.org/abs/1602.05629

Practical Secure Aggregation for Privacy Preserving Machine Learning (2017) http://eprint.iacr.org/2017/281

binalpatel 1 day ago 1 reply      
Reminds me of a talk I saw by Stephen Boyd from Stanford a few years ago: https://www.youtube.com/watch?v=wqy-og_7SLs

(Slides only here: https://www.slideshare.net/0xdata/h2o-world-consensus-optimi...)

At that time I was working at a healthcare startup, and the ramifications of consensus algorithms blew my mind, especially given the constraints of HIPAA. This could be massive within the medical space, being able to train an algorithm with data from everyone, while still preserving privacy.

andreyk 1 day ago 1 reply      
The paper: https://arxiv.org/pdf/1602.05629.pdf

The key algorithmic detail: it seems they have each device perform multiple batch updates to the model, and then average all the multi-batch updates. "That is, each client locally takes one step of gradient descent on the current model using its local data, and the server then takes a weighted average of the resulting models. Once the algorithm is written this way, we can add more computation to each client by iterating the local update."

They do some sensible things with model initialization to make sure weight-update averaging works, and show in practice this way of doing things requires less communication and gets to the goal faster than a more naive approach. It seems like a fairly straightforward idea from the baseline SGD, so the contribution is mostly in actually doing it.
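The averaging loop described above can be sketched in a few lines. This is a toy FedAvg on a one-parameter linear model, not the paper's actual code; the client count, learning rate, and data are made up for illustration:

```python
import random

def local_sgd(w, data, lr=0.1, epochs=5):
    """One client: a few epochs of gradient descent on its local data
    for the model y = w * x under squared loss."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x
            w -= lr * grad
    return w

def federated_averaging(w, clients, rounds=20):
    """Server loop: broadcast w, let each client train locally, then
    take a data-size-weighted average of the returned models."""
    for _ in range(rounds):
        updates = [(local_sgd(w, data), len(data)) for data in clients]
        total = sum(n for _, n in updates)
        w = sum(wi * n for wi, n in updates) / total
    return w

# Toy setup: every client draws from y = 3x, so w should approach 3.
random.seed(0)
clients = [[(x, 3 * x) for x in (random.random() for _ in range(10))]
           for _ in range(4)]
w = federated_averaging(0.0, clients)
print(round(w, 3))  # converges to 3.0
```

The point of the multi-epoch `local_sgd` is exactly the quoted sentence: each client does more work per communication round, so far fewer rounds (and far less bandwidth) are needed than with one gradient step per round.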

itchyjunk 1 day ago 4 replies      
"Federated Learning enables mobile phones to collaboratively learn a shared prediction model while keeping all the training data on device, decoupling the ability to do machine learning from the need to store the data in the cloud."

So I assume this would help with privacy in the sense that you can train a model on user data without transmitting it to the server. Is this in any way similar to something Apple calls 'Differential Privacy' [0]?

"The key idea is to use the powerful processors in modern mobile devices to compute higher quality updates than simple gradient steps."

"Careful scheduling ensures training happens only when the device is idle, plugged in, and on a free wireless connection, so there is no impact on the phone's performance."

It's crazy what the phones of the near future will be doing while 'idle'.


[0] https://www.wired.com/2016/06/apples-differential-privacy-co...

sixdimensional 1 day ago 1 reply      
This is fascinating, and makes a lot of sense. There aren't too many companies in the world that could pull something like this off.. amazing work.

Counterpoint: perhaps they don't need your data if they already have the model that describes you!

If the data is like oil, but the algorithm is like gold.. then they still extract the gold without extracting the oil. You're still giving it away in exchange for the use of their service.

For that matter, run the model in reverse, and while you might not get the exact data... we've seen that machine learning has the ability to generate something that simulates the original input...

azinman2 1 day ago 1 reply      
This is quite amazing, beyond the homomorphic privacy implications being executed at scale in production -- they're also finding a way to harness billions of phones to do training on all kinds of data. They don't need to pay for huge data centers when they can get users to do it for them. They also can get data that might otherwise have never left the phone in light of encryption trends.
TY 23 hours ago 0 replies      
This is an amazing development. Google is in a unique position to run this on truly massive scale.

Reading this, I couldn't shake the feeling that I heard all of this somewhere before in a work of fiction.

Then I remembered - here's the relevant clip from "Ex Machina":


argonaut 1 day ago 3 replies      
This is speculative, but it seems like the privacy aspect is oversold as it may be possible to reverse engineer the input data from the model updates. The point is that the model updates themselves are specific to each user.
siliconc0w 1 day ago 2 replies      
While a neat architectural improvement, the cynic in me thinks this is a fig leaf for the voracious inhalation of your digital life they're already doing.
sandGorgon 1 day ago 0 replies      
Tangentially related to this - numerai is a crowdsourced hedge fund that uses structure-preserving encryption to be able to distribute its data, while at the same time ensuring that it can be mined.


Why did they not build something like this? I'm kind of concerned that my private keyboard data is being distributed without security. The secure aggregation protocol doesn't seem to be doing anything like this.

emcq 1 day ago 1 reply      
Even if this only allowed device-based training without the privacy advantages, it's exciting as a form of compression. Rather than sucking up device upload bandwidth, you keep the data local and send the tiny model weight delta!
muzakthings 1 day ago 1 reply      
This is literally non-stochastic gradient descent where the batch update simply comes from a single node and a correlated set of examples. Nothing mind-blowing about it.
legulere 1 day ago 2 replies      
Where is the security model in this? What stops malicious attackers from uploading updates that are constructed to destroy the model?
holografix 1 day ago 1 reply      
I don't work with ML for my day job but find it exhilaratingly interesting. (true story!)

When I first read this I was thinking: surely we can already do distributed learning, isn't that what, for example, SparkML does?

Is the benefit of this in the outsourcing of training of a large model to a bunch of weak devices?

nialv7 22 hours ago 1 reply      
I had exactly this idea about a year ago!

I know ideas without execution aren't worth anything, but I'm just happy to see my vision is in the right direction.

yeukhon 1 day ago 0 replies      
To be honest, I have thought about this for a long time in the context of distributed computing. If a problem takes a lot of time to compute but can be broken into small pieces and then combined, why can't we pay users to subscribe for the computation? This is a major step toward the big goal.
nudpiedo 1 day ago 0 replies      
Where is the difference between that and distributed computing? Apart from the specific use for ML, I don't see many differences; SETI@home was an actual revolution made of actual volunteers (I don't know how many Google users will be aware of that).
alex_hirner 1 day ago 0 replies      
I think the implications go even beyond privacy and efficiency. One could estimate each user's contribution to the fidelity gains of the model, at least as an average within a batch. I imagine such an attribution being rewarded with money or credibility in the future.
Joof 23 hours ago 0 replies      
Could we build this into a P2P-like model where there are some supernodes that do the actual aggregation?
orph 1 day ago 0 replies      
Huge implications for distributed self-driving car training and improvement.
mehlman 1 day ago 0 replies      
I would argue there is no such thing. After the update, the model will incorporate your training data as a seen example; clever use of optimization would enable you to partly reconstruct the example.
hefeweizen 1 day ago 0 replies      
How similar is this to multi-task learning?
yk 1 day ago 2 replies      
Google is building a google cloud; that is, they try to use the hardware of other people, instead of other people using Google's hardware.
exit 1 day ago 0 replies      
I wonder whether this can be used as a blockchain proof of work.
Svexar 1 day ago 0 replies      
So it's Google Wave for machine learning?
Lego Macintosh Classic with epaper display jann.is
449 points by andrevoget  1 day ago   91 comments top 22
antirez 1 day ago 5 replies      
Is somebody able to explain why certain e-ink displays are so slow to refresh while others are much faster? For instance, my Garmin Vivoactive HR display, which is even capable of displaying 64 colors, apparently refreshes like an LCD (you can't easily see the difference), while the one used to build this project takes a lot of time to show even a single frame (see the YouTube video where the display is presented, following the link Jann provided in the blog post). My best guess is that they use completely different technologies.

EDIT: Vivoactive HR uses a Transreflective LCD actually. This web site explains very well how it works:


alexandros 1 day ago 2 replies      
Incredible work, jayniz -- I guess everyone and their mother is suggesting improvements; are you preparing a new version with the RPi Zero W and non-cut Lego blocks?

Disclaimer: resin.io founder, we're so happy you chose resin for this awesome project ;)

redsummer 1 day ago 2 replies      
I'd love a pi with an e-paper display (larger than the lego Mac) which just booted into Raspbian CLI. Has such a thing been done?
aphextron 1 day ago 4 replies      
It looks like there's a business opportunity here for someone to make a really slick browser based LEGO editor that does cost estimates and orders all the correct components for you when you're finished. I'm curious how large the market for such a thing would be.
walrus01 1 day ago 3 replies      
The Mac 128k was not from 1988. In that time frame, the Mac Plus was the first really usable model, with a 20MB HDD.
TheRealPomax 23 hours ago 0 replies      
Now it just needs to accept "something" through the slot to trigger "something" to happen on the screen.
jordache 1 day ago 0 replies      
I don't get it.. is it just using e-ink to display a greyscale image? So that's just a screenshot of app chrome and the hello text?
rangibaby 1 day ago 4 replies      
Pics need a banana for scale

/E I'm serious though, it is hard to tell how large it is from the pics

mamcx 1 day ago 1 reply      
I wish there existed e-paper suitable for use as a monitor (21" at least)

* I mean, not a prototype in a galaxy far away

doomslay 1 day ago 5 replies      
Cutting lego bricks? Eugh. There are specific bricks that would have worked exactly.
ohitsdom 1 day ago 1 reply      
Awesome project!

Would anyone else choose a different software solution rather than Docker with resin.io? I love working on projects like this but I've stayed away from Docker so far. Docker plus a third-party service to manage it seems like it could be overkill, but it obviously got the job done.

codecamper 1 day ago 1 reply      
That's awesome... but can you really play shufflepuck on eInk? I miss shufflepuck too!
maaaats 1 day ago 1 reply      
How powerful is this small replica compared to the original hardware?
NoGravitas 22 hours ago 0 replies      
Now this really needs to be running Basilisk II and a System 7 ROM. Hook up a Bluetooth keyboard and mouse, and you're set.
timvdalen 1 day ago 1 reply      
Looks really slick! Is there any way to interact with the system though?

Is the Pi actually running Mac OS or is it just a static image?

AKifer 1 day ago 1 reply      
Probably that's how we will build computers in 20 years.
amelius 1 day ago 0 replies      
Does it emulate the Macintosh Classic?
mproud 21 hours ago 0 replies      
Why would you put an e-ink display on this if it's not going to be used?
hilti 1 day ago 0 replies      
Pretty cool. I like it!
ge96 1 day ago 0 replies      
nice font
ruthtaylor123 1 day ago 1 reply      
That's a good looking website. Thumbs up!
jlebrech 1 day ago 0 replies      
no mac os?
Color Night Vision (2016) [video] kottke.org
522 points by Tomte  17 hours ago   134 comments top 31
qume 6 hours ago 0 replies      
There is much discussion here regarding quantum efficiency (QE). Keep in mind that figures for sensors are generally _peak_ QE for a given colour filter array element. These can be quite high, like 60-70%.

But - this is an 'area under the graph' issue. While it may peak at 60%, it can also fall off quickly and be much less efficient as the wavelength moves away from the peak for say red/green/blue.
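The "area under the graph" point can be made concrete with a rough numerical sketch. All the QE curves and numbers below are invented for illustration; they are not x20's actual figures:

```python
def photons_detected(qe_curve, flux_per_nm=1.0):
    """Trapezoidal integral of QE(wavelength) * flux over the band;
    qe_curve maps wavelength in nm -> quantum efficiency."""
    wavelengths = sorted(qe_curve)
    total = 0.0
    for a, b in zip(wavelengths, wavelengths[1:]):
        total += 0.5 * (qe_curve[a] + qe_curve[b]) * (b - a) * flux_per_nm
    return total

# Narrow sensor: 60% peak at 550 nm that falls off quickly.
narrow = {400: 0.05, 450: 0.2, 500: 0.45, 550: 0.6,
          600: 0.45, 650: 0.2, 700: 0.05}
# Broadband sensor: the same 60% peak, but held flat from the UV
# up toward 1200 nm, as speculated above.
broad = {w: 0.6 for w in range(350, 1250, 50)}

ratio = photons_detected(broad) / photons_detected(narrow)
print(round(ratio, 1))  # about 5x the photons, with the same peak QE
```

The two sensors have the same headline peak QE, yet the broadband curve collects several times as many photons under a flat illuminant, which is the whole argument for low-light performance.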

From what I can tell from the tacky promo videos, the sensor is very sensitive for each colour over a wide range of wavelengths, probably from ultraviolet right up to 1200nm. That's a lot more photons being measured in any case, but especially at night.

Their use of the word 'broadband' sums it up. It's more sensitive over a much larger range of frequencies.

I also wouldn't be surprised if they are using a colour filter array with not only R/G/B but perhaps R/G/B/none or even R/IR/G/B/none. The no filter bit bringing in the high broadband sensitivity with the other pixels providing colour - don't need nearly as many of those.

Edit - one remarkable thing for me: based on the rough size of the sensor and the depth of field in the videos, this isn't using a lens much faster than about f/2.4. You'd think it would be f/1.4 or thereabouts to get way more light, but there is far too much DoF for that.

amluto 13 hours ago 5 replies      
It would be interesting to see how this compares to theoretical limits. At a given brightness and collecting area, you get (with lossless optics) a certain number of photons per pixel per unit time. Unless your sensor does extraordinarily unlikely quantum stuff, at best it counts photons with some noise. The unavoidable limit is "shot noise": the number of photons in a given time is Poisson distributed, giving you noise according to the Poisson distribution.
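That Poisson limit is easy to check numerically. A minimal sketch follows; the 100-photon figure is an assumption for illustration, not a spec of this camera:

```python
import math
import random

def poisson_sample(lam: float) -> int:
    """Knuth's Poisson sampler; fine for the modest means used here."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= random.random()
    return k - 1

def shot_noise_snr(mean_photons: float, trials: int = 10000) -> float:
    """Empirical SNR (mean/std) of Poisson photon counts; theory says
    an ideal photon-counting pixel is limited to SNR = sqrt(mean)."""
    samples = [poisson_sample(mean_photons) for _ in range(trials)]
    m = sum(samples) / trials
    var = sum((s - m) ** 2 for s in samples) / trials
    return m / math.sqrt(var)

random.seed(1)
# With 100 photons collected per pixel, the best possible SNR is
# about sqrt(100) = 10, regardless of the sensor electronics:
print(shot_noise_snr(100))
```

This is the theoretical ceiling the comment refers to: once optics and exposure fix the photon count, no amount of clever readout beats sqrt(N) on a per-pixel basis.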

At nonzero temperature, you have the further problem that your sensor has thermally excited electrons, which aren't necessarily a problem AFAIK. More importantly, the sensor glows. If the sensor registers many of its own emitted photons, you get lots of thermal noise.

Good low noise amplifiers for RF that are well matched to their antennas can avoid amplifying their own thermal emissions. I don't know how well CCDs can do at this.

Given that this is a military device, I'd assume the sensor is chilled.

joshvm 11 hours ago 1 reply      
Better video from their website comparing to other cameras:


Anecdotal evidence on the internet suggests it's around 6k, but that seems far too low.

rl3 8 hours ago 3 replies      
One would think with all the money the military throws into imaging technology that they would already have this.

For Special Operations use, it'd be nifty to have this technology digitally composited in real-time with MWIR imaging on the same wearable device. Base layer could be image intensification with this tech, then overlay any pixels from the MWIR layer above n temperature, and blend it at ~33% opacity. Enough to give an enemy a nice warm glow while still being able to see the expression on their face. Could even have specially made flashbangs that transmit an expected detonation timestamp to the goggles so they know to drop frames or otherwise aggressively filter the image.

Add some active hearing protection with sensitivity that far exceeds human hearing (obviously with tons of filtering/processing), and you're talking a soldier with truly superhuman senses.

That's not to mention active acoustic or EM mapping techniques so the user can see through walls. I mean, USSOCOM is already fast-tracking an "Iron Man" suit, so I don't see why they wouldn't want to replicate Batman's vision while they're at it.

telesilla 10 hours ago 3 replies      
Can someone wake me up in the future, when we have digital eyes and we can walk around at night as if it were day, except the stars would be glittering? Sometimes I'm so sad to know I'll not live to see these things, and I'm incredibly envious of future generations.
19eightyfour 12 hours ago 2 replies      
That is beautiful.

If they can increase the dynamic range to bring detail to the highlights it is basically perfect.

I've never seen a valley look like that with a blue sky above with stars in it. Truly incredible.

The 5M ISO rating is pretty funny. 1/40 f1.2 ISO 5M.

akurilin 15 hours ago 3 replies      
You can get somewhat close to that with a Sony a7s these days: https://vimeo.com/105690274
jacquesm 16 hours ago 1 reply      
That really is incredible. I wonder how they keep the noise level down and if the imaging hardware has to be chilled and if so how far down. Pity there is no image of the camera (and its support system), I'm really curious how large the whole package is. It could be anything from hand-held to 'umbilical to a truck' sized.

Watch when the camera tilts upwards and you see all the stars.

cameldrv 12 hours ago 0 replies      
They say it's hybrid IR-visible. I wonder if the trick is to use IR as the luma and then chroma-subsample by having giant pixels to catch lots of photons.
colordrops 13 hours ago 3 replies      
Two thoughts come to mind:

1. It would be nice to see a split screen against a normal view of the scene as it would be seen by the typical naked eye.

2. Our light pollution must SUCK for nocturnal animals that see well at night.

dreamcompiler 13 hours ago 2 replies      
This is an amazing device. I've taken night photos that look like frames of this movie on my digital camera, but they require a 60-second exposure and a tripod, and they're -- still frames.
floatboth 15 hours ago 1 reply      
"an effective ISO rating of 5,000,000"

Holy shit, my Canon 600D is pretty bad at 2500, goes to crap at 3200, and 6400 is an absolute noise mess

caublestone 13 hours ago 1 reply      
My brother in law experimented with this camera a few years back on family portraits. The camera picks up a lot of "dark" details. Skin displays pale and veins are very defined. My nieces called it "the vampire camera".
fpoling 3 hours ago 1 reply      
There are far-infrared cameras that capture thermal radiation in the 9-15 µm band. They nicely allow you to see in complete darkness. They do not use CCDs but rather microbolometers.

But they are expensive. A 640x480 sensor can cost over 10,000 USD, and cameras with smaller resolution, like those used in high-end cars, still cost over a thousand USD.

kator 4 hours ago 0 replies      
teh_klev 11 hours ago 0 replies      
Direct link to manufacturer or supplier:


batbomb 15 hours ago 0 replies      
So maybe Peltier on the sensor, heat sink attached to body, body hermetically sealed. Sensors probably tested for best noise quality (probably a really low yield on that).
drenvuk 16 hours ago 1 reply      
This is incredibly cool. You can even see how other sources of light actually have an effect on the environment as if they were their own suns.
Cieplak 10 hours ago 0 replies      
I wonder what the sensor is made of. I would bet on there being a fair bit of Germanium in there.

PS: probably wrong about that, silicon's band gap is more suited to optical spectrum, even though germanium has more electron mobility. I'm speculating now that they're using avalanche photodiodes.


peteretep 8 hours ago 0 replies      
Put one of these on a drone and you'll break a lot of people's assumptions about their privacy
copperx 13 hours ago 0 replies      
I've dreamed of such a camera for decades. I thought the technology was at least 10+ years away. This is what science fiction is made of.
nnain 6 hours ago 0 replies      
What a quandary: We see military weapons technology put to terrible use all the time, and yet, so much technology shows up in (US) military use first.
interfixus 8 hours ago 1 reply      
Why is the night sky blue? Is that really scattered starlight?
breatheoften 12 hours ago 0 replies      
Is that Red Rocks (just outside of Las Vegas)? There are a lot of man-made light sources that really scatter light pretty far and in a lot of directions (the Luxor spotlight comes to mind). I wonder if that could have an effect on this camera's performance.
jbrambleDC 12 hours ago 1 reply      
I want to know what this means for observational astronomy. Could we put this in the eyepiece of a telescope and discern features in nebulae that otherwise look like gray blobs to unaided vision?
Silhouette 11 hours ago 0 replies      
They list a lot of potentially useful applications on the product's own web site. I wonder how long it will take for this sort of technology to be commercially viable for things like night vision driving aids. High-end executive cars have started to include night vision cameras now, but they're typically monochrome, small-screen affairs. I would think that projecting an image of this sort of clarity onto some sort of large windscreen HUD would be a huge benefit to road safety at night. Of course, if actually useful self-driving cars have taken over long before it's cost-effective to include a camera like this in regular vehicles, it's less interesting from that particular point of view.
faragon 2 hours ago 0 replies      
Is this real? :-O
AnimalMuppet 14 hours ago 0 replies      
It occurs to me that this technology could do absolutely amazing things as the imager for a space telescope...
samstave 13 hours ago 3 replies      
ELI5: what does an ISO of 5MM mean?
lutusp 14 hours ago 3 replies      
Someone should contact this company and volunteer to redesign their website (https://www.x20.org). They should also be told that "complimentary" and "complementary" don't mean the same thing.

They have a great product, unfortunately presented on a terrible website.

egypturnash 12 hours ago 0 replies      
Their website is a thing of beauty. It's straight out of the Timecube school of design. https://www.x20.org/color-night-vision/
How Bank of America Gave Away My Money soraven.com
637 points by alienchow  2 days ago   247 comments top 54
instaheat 2 days ago 5 replies      
I know how to handle these dumb fucks. (I had a mortgage with them; that makes me an expert.)

I had a problem with them coughing up an escrow refund check, although there is much more to the story that would have most of you boiling.

I filed complaints with a total of 3 agencies and received my check via FedEx Priority overnight not long after. Suck it, BofA.

* Filed complaint with CFPB
* Filed with Texas State Attorney General's Office (insert your state here)
* Filed complaint with Office of the Comptroller of the Currency (HelpWithMyBank.gov)

Trust me. If you file a very well-articulated complaint with each of these entities, they will feel the heat and resolve it.

I hope this information helps you alienchow, and many others.

CPLX 2 days ago 3 replies      
If you know a little bit about legal procedure, dealing with something like this shouldn't be too hard.

You'd file against your local bank based on the address of the branch you go to for an order to show cause hearing where they are tasked to show up and show cause as to why the transfer of money should go forward. You'd state the basic grounds of mistaken identity, propose a temporary restraining order barring any further action until the case is heard, and go to the court for a judge to sign the order and give instructions for service.

Once it gets on everyone's radar as a conflicting court proceeding (rather than a customer service complaint) they'd likely quickly get to the bottom of it.

It sounds really hard but it isn't; most courts, in bigger cities at least, will have an office where you can make an appointment to get free volunteer legal help.

Yes, this will burn a couple of slightly frustrating afternoons getting it together, but it's eminently possible to do, and an interesting exercise for the average person who enjoys learning how things work.

tstegart 2 days ago 3 replies      
He should be calling up the plaintiff's lawyers, not BOA. Lawyers don't want to collect only to have to give it back. Find out who they are and call them. Any other attorneys involved as well. Once they have proof the money should not be given out, any ethical attorney would be very wary of handing out money.

It's also likely the money will be sitting in the trust account of an attorney before it actually gets dispersed, so he has more time that way. Contacting the bank for help in the legal system is totally the wrong way to solve this. They've given the money away and can't actually get it back even if they tried. They're a useless avenue at the moment.

wjnc 2 days ago 3 replies      
What I don't get: why would he pursue some court order he's not a party to? The only logical counterparty to this dispute is BoA. They gave away your money without proper title; that shouldn't hold up in court. That they gave some money to the LA Sheriff's Department would be for BoA to recoup.
phkahler 2 days ago 2 replies      
The error was committed by the LASD, their response here is wrong:

A few minutes later, she came back with the bad news. "Why didn't you call earlier? It's too late for us to withdraw the request. The money was already sent to the court! You need to go down to the courthouse and ask them to show you the court documents."

It is not his problem what they did with the money, they took it from him plain and simple. The error was on their part and how they correct it on the other end is their problem.

Back in the day when people used MCI for some of their phone services, I got an adder to my phone bill from ATT (my carrier) for a charge attributed to a John Fox. I called ATT a couple times and pointed out that I'm not John Fox, and they tried to tell me to call MCI. MCI used their LEC billing agreement to have ATT collect the fee. At some point I explained that I was not an MCI customer, had no relationship with them, and that ATT had actually charged this fee to me, which is incorrect; how they wanted to deal with MCI or John Fox was their problem. They credited my account and I never heard about John Fox again.

EDIT: Actually the comment below by CPLX is better - the bank did this, they took his money, and it is THEIR problem, not the LASD's. My point remains somewhat valid: it's not your problem to chase the money after someone wrongly took it - it's their problem. The bank didn't do a decent job of verification - they apparently went on a partial name match alone (no SSN, no account number, WTF).

perlpimp 2 days ago 2 replies      
Not sure if you have small claims court in America. In Canada for losses under 10000$ process is fairly straightforward. You should file and follow the process. If they don't give money back, go to the court and execute writ of seizure. One guy did this.


andy_ppp 2 days ago 1 reply      
Don't mess around with the court; the whole thing is Bank of America's problem, so don't let them make it your fault or the court's. They have debited the wrong account and you can prove this; in the UK, if this happened and they didn't return my money, I would take them to the small claims court, where a claim can be filed online. Surely California must have this?
louhike 2 days ago 3 replies      
I had the same problem with one of the biggest French banks. I saw a large withdrawal on my account in a city where I wasn't. I called my bank and explained the situation. They told me I'd get the money back in one day. I asked if I should change my credit card in case it was hacked; they told me calmly it was just an error on their part because another client had the same last name. I just couldn't believe it.
nathantotten 2 days ago 1 reply      
Wells Fargo did this to me many years ago from a business account. The State of California issued a seizure for unpaid taxes for another business. In the end I think Wells Fargo told us that our tax ID numbers were similar. We did end up getting the money back from the State of California after about 3 years. We of course never got back the penalties Wells Fargo charged, or interest.
pmarreck 2 days ago 1 reply      
This entire fiasco could have been avoided if people were identified correctly. Here's another recent article where simply using "firstname lastname DOB" as a primary key resulted in a collision: https://www.theguardian.com/us-news/2017/apr/03/identity-the...
darkmirage 1 day ago 1 reply      
I wrote the blog post. It turns out writing a blog post works better than talking to the bank beyond simple catharsis. Just got a call from someone from Bank of America's social media team who credited the money back to my account. Thanks for all the suggestions here!
jandrewrogers 1 day ago 0 replies      
I will never use Bank of America again, for similar reason.

On two separate occasions they "lost" transfers to the IRS (totaling $40k) that I only discovered when the IRS came after me for failure to pay taxes. I had full receipts from BofA for the transfers that apparently never happened. At least as disconcerting, the reaction of BofA to the situation both times suggested that sending money to /dev/null was a routine occurrence in their organization.

threepipeproblm 2 days ago 2 replies      
BofA also launders money for gangs, according to the FBI https://www.wsj.com/news/articles/SB100014240527023032922045...

They had to pay $17 billion because of the scale to which they institutionalized mortgage fraud http://nypost.com/2014/08/21/bank-of-america-to-pay-record-1...

The other big banks aren't too different... I went with a small, local bank years ago and haven't looked back.

Find a Local Bank http://banklocal.info/

rodionos 2 days ago 0 replies      
Likely, there is a dataset on data.gov published by CFPB which consolidates consumer complaints in a federal database.


BofA ranks first by the total number of cases which is to be expected given their footprint. What's notable is that BofA has actually improved over time.


hedora 2 days ago 1 reply      
Consider taking LASD, BofA and the LAPD to small claims court in a single case in Northern California.

All three of them acted together to take the $3.4k, and I suspect there would be a settlement rather than an actual day in court.

It might be too late, but maybe call the BofA fraud department and tell them it was an unauthorized charge. "Unauthorized charge" is like the root password for banks.

oblib 1 day ago 0 replies      
BOA is probably the worst institution I've ever dealt with, and I never even opened an account with them.

I did get to close one I had with them though and that was a very good day for me because I was among the very 1st of many here who did the same and I got to watch what happened to them here from start to end.

They came to Branson Missouri and bought out a well loved local bank where I had my business account. I closed it as soon as I found out. As the new management came and started enforcing BOA policy people learned fast that I and a few others familiar with them weren't bullshitting about them.

Soon, the long-time employees of the old bank started leaving because they weren't willing to piss off their friends and neighbors for BOA and, of course, they told everyone why they left. Most went to work for one of the several other local banks here, and all their close friends and family moved their money with them.

When the last of those employees were gone, everyone here started moving their accounts over to one of the still locally owned banks, and in just a few years BOA had hardly anyone left here to screw, so they closed their doors and left town.

I will always admire my neighbors here for that. I knew people who let that bank abuse them for decades out in Los Angeles and I could never, not for the life of me, understand why.

uptown 2 days ago 1 reply      
Reminds me a bit of this article -- similar names crossed up in bureaucratic negligence.


mirimir 2 days ago 2 replies      
I don't get how BoA would pay when the SSN didn't match.

But maybe the LASD didn't cite the SSN, but just the bank account that they had erroneously identified. But still, it's mind-boggling that they'd be so sloppy.

cthulhuology 2 days ago 2 replies      
If everyone on this thread tweeted his story, he'd get his money back. In the age of social media, the only effective tool we have against corporations is public shaming.
the-dude 2 days ago 3 replies      
In The Netherlands, but I suspect it is EU-wide, they won't check the name at all when doing an ordinary transfer. And this is all banks.
hutzlibu 2 days ago 1 reply      
Hm ... the bank clearly acted carelessly, but I am surprised that there is so little blame on the court/sheriff, as they messed up in the first place and then in the end weren't really willing to undo their mistake.
rodionos 2 days ago 1 reply      
While BofA has its share of blame to absorb here, the specific issue is the lack of support for unusual names, such as names containing whitespace. The issue may affect other banks, big or small. I, for instance, once had to deal with a banking application that didn't support non-US phone numbers for 2FA, yet positioned itself to serve customers 24x7 globally.
tomohawk 2 days ago 1 reply      
Yet another reason not to use one of the big 4 banks. Credit unions and local banks are the way to go.
gumby 1 day ago 0 replies      
A comment on the article advises filing a theft report. This has worked for me twice. The one most relevant to HN:

Crappy payroll company ADP double-debited the quarterly IRS tax payment for payroll. (Luckily I always have payroll checks written out of a payroll-only account, so when the account went negative the bank called me so I could transfer additional money (about half a million bucks) in and nobody's paycheck bounced.) To make a long story short, ADP refused to do anything, and when I finally got to talk to the district manager he insisted his hands were tied. "I don't know why you keep saying we improperly took your money -- we didn't! The money is at the IRS and in three months you can just declare it as a credit on your next payment. We don't have your money at all." Finally I agreed with him: "You're right! I will stop saying you took my money and will instead use the correct vocabulary: 'grand theft', 'fraud', and 'abuse of power of attorney'. Since you prefer the precise terminology, if the money is not in my account by 2PM (note: that's when the fed wire closes) I'll drive over to the Santa Clara District Attorney's office and swear a complaint."

Oddly enough the money that they supposedly didn't have was in our account by 1pm.

mdekkers 2 days ago 1 reply      
Repeat after me: "The only clients of a bank are its shareholders."
DiabloD3 2 days ago 1 reply      
As far as I know, the only correct action is to write a letter detailing the entire situation to the Office of the Comptroller of the Currency in Washington DC.

I'm surprised no one has mentioned this yet.

rewrew 1 day ago 0 replies      
You need to hire a lawyer. It's not fair, but that's how this is. This is going to haunt you forever -- not just this account! You have to do so ASAP and get this straightened out so you have the paperwork to prove it next time this happens. You should also write to the Consumer Reports website -- they'll love this story.
ChuckMcM 1 day ago 0 replies      
In this case I think the author is railing on BofA too much; unless the court said "pony up money from any account", I'm guessing they specifically asked BofA for the money from this particular account. The person responsible here is the Sheriff's office, and they are the ones that should be pursued.
ScottBurson 2 days ago 0 replies      
Maybe try suing the bank in small claims court. They were clearly negligent.
bm1362 2 days ago 0 replies      
I recently went through a similar legal situation to recover basically the same small amount. I find these incidents to be really interesting and oddly beautiful.

Exploring the broken bureaucracy to retrieve your money is a game; when you finally send enough documents and certified letters to satisfy/scare the other side, it's extremely satisfying.

coding123 2 days ago 1 reply      
I think I might ask my employer to pay me in bitcoin soon. Or shit, maybe gold bars that I pick up every few weeks.
code4tee 1 day ago 0 replies      
Wow. Lots of parties liable here. Reeks of those cases where banks foreclosed on the wrong houses.

Sad that one has to go public like this to get anything to happen, but I suspect it will help show the world how incompetent the involved parties were.

I remember there being some brilliant story in the reverse direction where someone sued a bank (I believe it was BoA) for something like this and won. When the bank didn't respond they got the court to allow property seizure to recover damages. Guy rocked up at a local bank branch and started taking computers and such with court order in hand and sheriff watching. He was legally robbing the bank. Bank reacted quite quickly then!

curun1r 1 day ago 0 replies      
My personal nightmare with BofA started when they merged with Fleet. I had a credit card with Fleet and it had been set to auto-pay the balance every month. After the merger, the auto-pay got disabled without telling me and by the time I realized it two months later, around $20 of interest had accrued. After customer support calls to try to get it resolved, they refused to refund the interest. So I asked to cancel my account and asked for the full-payoff amount and the address to mail the check. Since I had paperless billing, I didn't have an official slip to send in with my payment, but I wrote my Fleet account number on the check and the check was cashed a couple of weeks later. I assumed the unpleasant situation was finished.

Little did I know, my nightmare was just beginning. Several years later, I was applying for an apartment and I failed the credit check. I got a hold of my credit report and saw that BofA claimed I had a large unpaid balance that was several years behind. After calling them many times to try to resolve it, we finally figured out that they had credited my payment to a non-existent account and that my account had a different BofA account number that was different from my Fleet account number. I thought that would clear everything up, but after they reconciled the two accounts, they still claimed I owed them over $4k in interest that had accrued on the unpaid account. No amount of common sense could make them realize that they cashed my full-payout amount check so my account should be fully paid off. It took getting a lawyer involved for them to finally zero out my account and, by that time, I'd missed out on the apartment. The lawyer also helped me dispute the credit mark on my account with the agencies because BofA refused to remove it. Hundreds of hours of my life calling their inept customer service and over $1k in legal fees all because of their screw-ups.

BofA is a truly despicable company that doesn't give customer service reps the ability to make even obvious, common-sense adjustments, which leads to situations like mine. Whenever I have the chance, I try to warn people away from doing business with them. They literally took a happy customer (I loved Fleet) and, through incompetence and unwillingness to act reasonably, turned him into an enemy for life. And all for $20.

ewams 2 days ago 3 replies      
They have failed to pay back billions, much less $3k. Do not do business with any organization on this list: https://projects.propublica.org/bailout/list
gumby 1 day ago 0 replies      
> my credit union was infinitely more competent than the buffoons at Bank of America.

This. It's not even a pro-credit union thing: small banks handle exceptional situations much better than big banks in my experience. You'd think it would be the opposite, but it's not. When the tellers know every customer, even when I almost never walk in, exceptions seem to generate a helpful phone call.

gumby 1 day ago 0 replies      
BTW what's with the weird grey fade out effect at the bottom? It simply slows down reading. Fortunately "reader mode" works.
dghughes 2 days ago 0 replies      
> ...So what we do here is match you with one of our members for a $30 fee which guarantees you a 30-minute conversation...

That's a good service to have. I needed it and was referred, but ten days later there was still no callback, and the deadline I had to honour has long passed. There was little time between when I needed the lawyer, which I can't afford, and the deadline. Scumbags know the law so well that you are screwed for time trying to fight anyone.

ionised 22 hours ago 0 replies      
It's amazing how it seems the best way to be treated fairly by organisations that have clearly done you wrong is to out them on social media.
matt_wulfeck 1 day ago 0 replies      
This amount of money seems like it would be a good candidate for small claims court. Can anyone tell me if this guy can use small claims court (which is easy to initiate) to recover the money? I feel like even if he lodged a motion against the branch manager for negligence it might be enough to get things cleared up.
samfisher83 2 days ago 0 replies      
I have had a similar issue before; just call the Office of the Comptroller of the Currency. They are pretty good about getting on the banks.
mankash666 1 day ago 0 replies      
Why isn't anyone questioning the LASD for fucking up a basic SSN comparison before requesting a fund transfer from BofA? Isn't this really the LASD's fault? Can't you sue the LASD for damages?
hardlianotion 1 day ago 0 replies      
When you have finally managed to get Bank of America to sort out their nonsense, do a little research and change banks. Hopefully there are banks in the US that aren't that unprofessional.
di_ry 2 days ago 2 replies      
Bank of America, what is an SSN?
solotronics 1 day ago 0 replies      
Bitcoin is the solution... banking is broken for consumers because once you give a bank your money, it's NOT YOURS.
dagenleg 2 days ago 2 replies      
I especially like how in every case of being screwed by the bureaucracy, the lowly citizen has only one option - go to the lawyers. And what are those? More bureaucracy!
asgeirn 1 day ago 0 replies      
Is it actually possible to have a comfortable life in the US of A without a family lawyer?
vivaamerica 1 day ago 0 replies      
Interesting response video from Martin Shkreli, who is mostly known for his EpiPen scandal.


SketchySeaBeast 1 day ago 0 replies      
And here I was wondering what my new anxiety for the day would be.
gtsteve 2 days ago 2 replies      
I don't understand why he's taking this on himself to be honest. If I was incorrectly named in some sort of lawsuit, my first reaction would be to call my lawyer. Who knows if there's also an incorrect arrest warrant out for example?
chrisvoss 2 days ago 0 replies      
Wow what a horror show
RichardHeart 1 day ago 0 replies      
The greatest trick the devil ever pulled, was convincing the world that when the bank gets money stolen from it, it's your money, and not theirs.
Noughmad 2 days ago 2 replies      
Was the bank really at fault here? From what I can see, they received a legal letter from the Sheriff's Department and complied with it. The article doesn't say what was included in the letter to the bank; I assume the author doesn't know either. It could have had the wrong SSN, or the Sheriff's Department just looked up the defendant's name, found the author's account number and SSN, and sent those to the bank.
unlmtd 2 days ago 1 reply      
BoA gave everybody's money-substitute-government-credit away, but few have realised it (even though BoA says so themselves in their financial statements). So the guy is lucky in a way; he now doesn't trust the thieves. Even fewer realise that the banknotes they are so proudly holding have long ago been defaulted on. It's all running on illusions now. Sell your paper.
Why Momentum Works distill.pub
500 points by m_ke  3 days ago   95 comments top 30
throwaway71958 3 days ago 2 replies      
Some of the multi-author articles on Distill have a very important (IMO) innovation: they quantify precisely the contribution each author has made to the article. I would like to see this become the norm in scientific papers, so that on the one hand it'd be clear who to ask questions if they arise, and on the other the various dignitaries won't get their honorary spot on the author list of papers they were barely involved with scientifically.
tempodox 3 days ago 2 replies      
I was fully expecting an article about some braindead product that nobody needs, called Momentum. Imagine my surprise finding physics and a healthily low percentage of BS.
tzs 3 days ago 2 replies      
I'm curious about the method chosen to give short term memory to the gradient. The most common way I've seen when people have a time sequence of values X[i] and they want to make a short term memory version Y[i] is to do something of this form:

 Y[i+1] = B * Y[i] + (1-B) * X[i+1]
where 0 <= B <= 1.

Note that if the sequence X becomes a constant after some point, the sequence Y will converge to that constant (as long as B != 1).

For giving the gradient short term memory, the article's approach is of the form:

 Y[i+1] = B * Y[i] + X[i+1]
Note that if X becomes constant, Y converges to X/(1-B), as long as B in [0,1).

Short term memory doesn't really seem to describe what this is doing. There is a memory effect in there, but there is also a multiplier effect when in regions where the input is not changing. So I'm curious how much of the improvement is from the memory effect, and how much from the multiplier effect? Does the more usual approach (the B and 1-B weighting as opposed to a B and 1 weighting) also help with gradient descent?
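The difference between the two forms is easy to see numerically. A minimal sketch (function names are mine, not from the article) comparing the convex-combination smoothing with the article's accumulation form, using the constant-input case described above:

```python
def ema(xs, b):
    """Convex combination: Y[i+1] = B * Y[i] + (1 - B) * X[i+1]."""
    y, out = 0.0, []
    for x in xs:
        y = b * y + (1 - b) * x
        out.append(y)
    return out

def momentum_sum(xs, b):
    """The article's form: Y[i+1] = B * Y[i] + X[i+1]."""
    y, out = 0.0, []
    for x in xs:
        y = b * y + x
        out.append(y)
    return out

# For a constant input of 1.0 with B = 0.9, EMA converges to the
# input itself, while the momentum form converges to 1/(1-B) = 10:
# the memory effect alone vs. memory plus the multiplier effect.
xs = [1.0] * 200
print(ema(xs, 0.9)[-1])           # ~1.0
print(momentum_sum(xs, 0.9)[-1])  # ~10.0
```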

poppingtonic 3 days ago 2 replies      
I'm really loving the choice of articles, especially since you're just getting started.

Edit: I'm referring to the journal, not the author.

lutorm 3 days ago 3 replies      
It would be nice if the introduction made clear that the "Momentum" that "works" is some algorithm and not at all the physical concept "momentum".
wonderous 3 days ago 1 reply      
Reminds me of this paper, "Causal Entropic Forces":


colmvp 3 days ago 0 replies      
I have to say that the dynamics-of-momentum diagram is a thing of real beauty. The whole paper felt a little NYTimes-like, and then of course I see that Shan Carter helped a little bit with it!
mark212 3 days ago 2 replies      
I can't follow the math but the presentation is gorgeous (Safari on a MacBook retina display). Really great, keep up the good work!
simplynot 3 days ago 4 replies      
But simply not on Firefox
amelius 3 days ago 1 reply      
Curious, has this method been used for solving linear systems? How would it perform e.g. against conjugate gradient?

And how would it perform for non-positive-definite systems?

Paul-ish 3 days ago 3 replies      
For those who don't read materials about optimization as much as they maybe should: what is "w"? It is used without introduction and I don't know what it is. Perhaps this is a convention I am not aware of?
hatsunearu 3 days ago 0 replies      
Pretty cool, it started using stuff about classical control theory. I always kinda missed that classical controls weren't really brought up in the discussion of gradient descent.
PDoyle 3 days ago 0 replies      
People, listen. "Damping" means to reduce the amplitude of an oscillation. "Dampening" means to make something wetter.


pizza 3 days ago 1 reply      
Animats 3 days ago 2 replies      
Hm. So that helps with high-frequency noise. Any progress on what to do when the dimensions are of vastly different scales? I have an old physics engine which had to solve about 20-value nonlinear differential equations. During a collision, the equations go stiff, and some dimensions may be 10 orders of magnitude steeper than others. Gradient descent then faces very steep knife edges. This is called a "stiff system" numerically.
fabmilo 3 days ago 0 replies      
If you are curious to see the code to produce that post you can check it out here: https://github.com/distillpub/post--momentum I was surprised to see that each post has its own html page and javascript library. I was expecting to see some form of rendering engine and a common javascript library.
codekilla 3 days ago 1 reply      
It would be really, really great if you could somehow hook this up to Discourse so people could comment on and ask questions about the article. Allowing people to ask questions and having others answer like MathOverflow would I think bring a lot more clarity. Many different kinds of people want to understand material like this but may need the math unpacked in different ways.
xapata 3 days ago 2 replies      
> The added inertia acts both as a smoother and an accelerator

 momentum = mass * velocity
 force = mass * acceleration
 momentum = (force / acceleration) * velocity
So, it looks to me like momentum is inversely related to acceleration. It doesn't seem right to call momentum an "accelerator".

jchrisa 3 days ago 1 reply      
Is this related to the way bias frequency works in analog audio recording? https://en.wikipedia.org/wiki/Tape_bias
andai 3 days ago 0 replies      
The math here is beyond my reasoning, but I loved playing with the sliders!
jacobush 3 days ago 0 replies      
Am I the only one who drew parallels to real life and how "just do stuff" often works better than the deliberate, slow step by step process?
panic 3 days ago 0 replies      
Turn the step size and momentum to maximum for some wonderfully glitchy chaos! (and a demonstration of why forward Euler integration only works well with relatively small step sizes)
tnecniv 3 days ago 0 replies      
Figured I'd mention since I saw the author and an editor in here: in footnote 4, the LaTeX isn't rendering (chrome, OSX).
soVeryTired 3 days ago 1 reply      
In your polynomial regression example, I can't follow what you mean by p_i = \xi \rightarrow \xi^{i-1} when you're setting up the model.
jxy 3 days ago 0 replies      
It seems to offer no improvement for badly conditioned problems, i.e. k >> 1. The convergence rate is 1 with or without the damped momentum.
c517402 3 days ago 0 replies      
This looks like gradient descent passed through a Kalman filter. Which, on reflection, seems like a good idea for overcoming ripples.
cuca_de_chumbo 3 days ago 1 reply      
isn't this akin, in effect, to successive over relaxation? https://en.wikipedia.org/wiki/Successive_over-relaxation

under-relax, converge real slowly

over-over-relax, oscillate

just-right-over-relax, get fast convergence

the8472 3 days ago 0 replies      
Is the decreasing momentum related to the temperature in simulated annealing?
retox 3 days ago 0 replies      
Page killed my browser.
cee_el123 3 days ago 0 replies      
So much beauty in the presentation.
Grid Garden A game for learning CSS grid cssgridgarden.com
400 points by jwarren  3 days ago   131 comments top 38
ajross 3 days ago 7 replies      
Immediate learnings from the first 3 exercises:

1. Grid columns and rows are 1 indexed instead of 0, ensuring a coming decade of mistakes due to the mismatch with Javascript (and, y'know, everything else) conventions for arrays.

2. Grid extents use the "one more than end" convention instead of "length", which is sorta confusing. But then they call it "end", which is even more so.

(edit) more:

3. grid-area's four arguments are, in order (using normal cartesian conventions to show how insane this is): y0 / x0 / y1 / x1. Has any API anywhere ever tried to specify a rectangle like this?
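The three points can be illustrated with a short CSS sketch (the class names are invented for illustration):

```css
/* 1. Grid lines are 1-indexed: line 1 is the left (or top) edge. */
.container {
  display: grid;
  grid-template-columns: repeat(4, 1fr); /* 4 columns bounded by lines 1..5 */
  grid-template-rows: repeat(3, 1fr);    /* 3 rows bounded by lines 1..4 */
}

/* 2. "end" names the grid line after the last cell, not a length:
   this spans columns 2 and 3, from line 2 up to line 4. */
.item-a {
  grid-column-start: 2;
  grid-column-end: 4;
}

/* 3. grid-area is row-start / column-start / row-end / column-end,
   i.e. y0 / x0 / y1 / x1 in the ordering complained about above. */
.item-b {
  grid-area: 1 / 1 / 3 / 4; /* 2 rows tall, 3 columns wide */
}
```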

pidg 3 days ago 3 replies      
Nice game!

This made me uncomfortable though (about CSS grid, not about the game):

grid-area: row start / column start / row end / column end;

So you have to put the rows (Y axis coordinates) first and columns (X axis coordinates) second, i.e. the opposite of how it's done in every other situation - i.e. draw_rect(start_x, start_y, end_x, end_y)

(1, 1, 3, 4) in every other language would draw a box 2 wide and 3 high, but in css grid it selects an area 3 wide and 2 high.

Also the fact it uses 'row' and 'column' to describe the gridlines rather than the actual rows and columns irked me.

I'm sure I'll get over it!

ysavir 3 days ago 4 replies      
>Oh no, Grid Garden doesn't work on this browser. It requires a browser that supports CSS grid, such as the latest version of Firefox, Chrome, or Safari. Use one of those to get gardening!

Is Chrome 56 so outdated that this grid box doesn't work with it?

Or, perhaps, does the game only check if I'm running the "latest" version, regardless of which browsers do or do not actually work with Grid Garden?

Edit: Oh, wow, 56 is that outdated. Talk about cutting edge technology?

jwarren 3 days ago 0 replies      
The creator, Thomas Park (http://thomaspark.co/) is also the author of the similar and similarly excellent Flexbox Froggy: http://flexboxfroggy.com/
cr0sh 2 days ago 1 reply      
First off - I liked the game. It was fun. No arguments there.

But (and nothing against the author of the game)...

I'm going to jump on the bandwagon here of others wondering just what the person or committee who thought up the API was smoking when they came up with it?

At first, it made kinda sense. Nothing too troubling.

But the deeper it went, the less it made sense. I don't have a problem with 1 vs 0 indexing (because I started coding in old-school BASIC back in the dinosaur days of microcomputing - so that doesn't bother me much).

It's just that the rest of the API seems arbitrary, or random, or maybe ad-hoc. Like there were 10 developers working on the task of implementing this, but with no overall design document to guide them on how the thing worked.

I'm really not sure why there's two (or three? or four?) different ways to express the same idea of a "span" of row or column cells, based on left or right indexing, or a span argument, or...???

Seriously - the whole thing feels so arbitrary, so inconsistent. This API has to be among the worst we have seen in the CSS world (not sure - I am not a CSS expert by any means). I can easily see this API leading to mistakes in the future by developers and designers.

We'll also probably see a bazillion different shims, libraries, pre-compilers, template systems, whatever - all working on the same goal of trying to fix or normalize it in some manner to make it consistent. Unfortunately, all of these will be at odds with one another.

I'm sure JQuery will have something to fix it (if not already). Bootstrap too.

The dumb thing is that had this been designed in a more sane fashion, such hacks wouldn't be needed.

clishem 3 days ago 1 reply      
I can hardly read the code (https://i.imgur.com/fEsIYdA.png). Author needs to take a look at http://contrastrebellion.com/.
stigi 3 days ago 0 replies      
I love how the name resembles the classic http://csszengarden.com
legulere 3 days ago 2 replies      
Level 21 was pretty hard for me. It lacked the explanation that fr goes from 0 to 100 like %.
Kezako 3 days ago 1 reply      
There is also a nice postCSS plugin to build css grids through a kind of "ASCII-art": https://github.com/sylvainpolletvillard/postcss-grid-kiss
jonahx 3 days ago 3 replies      
How long will it likely be before CSS Grid can be used in the wild? Current browser support seems to be only about 35% [0].

[0]: http://caniuse.com/#feat=css-grid

ArlenBales 2 days ago 0 replies      
I think these CSS learning games would work better if the game you were trying to complete was something you would actually use the technology for. For example, a game that involves building a website (e.g. instruct me to make the page more UX-friendly by moving an element from one column to another, adjusting columns, etc.).

I would never use CSS grid to do what this game is asking me to, so even though it helps me learn the syntax and properties, it's not helping me learn how it's going to be applied to an actual website.

philh 3 days ago 1 reply      
This is a neat game, but I have to say that the explanation of order didn't feel particularly enlightening, and I was hoping it would become clearer but it never got used again.
codyb 2 days ago 0 replies      
I'm a bit confused by level six.

grid-column-start: 0 - doesn't exist
grid-column-start: 1 - leftmost square
grid-column-start: 2 - 2nd square left to right
grid-column-start: -1 - also doesn't exist?
grid-column-start: -2 - rightmost square

Very strange. Although the concepts are neat so far.

The / notation in grid-column: 2 / 4;

is interesting. I'm surprised it's not similar to, say, margin: 20 30 20 20;

where there's no slash, just an ordering to remember (clockwise from top).

Default span is 1, which is sensible, so grid-column: 3 === grid-column: 3/4 === grid-column: 3/span 1;

Wow the fr unit is pretty neat!

You can see where the slash is very useful in grid-template, which makes a lot of sense for dynamic numbers of rows and columns.

Very fun. I still have some questions about the numbering, but a great way to learn about CSS grid and have a bit of fun.
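A hedged sketch pulling those observations together (the grid dimensions and class names are made up, not taken from the game):

```css
.garden {
  display: grid;
  /* fr distributes available space by ratio: 1/4, 1/4, and 2/4 of the width */
  grid-template-columns: 1fr 1fr 2fr;
  grid-template-rows: repeat(3, 1fr);
}

/* Negative numbers count grid lines from the end: -1 is the last line,
   so the rightmost column sits between lines -2 and -1. Starting at -1
   with the default span of 1 would land outside the explicit grid. */
.water {
  grid-column: -2 / -1; /* same cell as grid-column-start: -2 */
}

/* grid-column: 3 is shorthand for 3 / span 1, i.e. 3 / 4 */
.carrot {
  grid-column: 3;
}
```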

enugu 3 days ago 1 reply      
Is there a library or babel like transform on server side which can enable this feature in older browsers? Most clients wont have this enabled for some time now.
pc2g4d 2 days ago 1 reply      
`grid-row: -2` targets the bottom-most row, whereas I would have expected `grid-row: -1` to do so. I've never seen `-2` used to refer to the last element in a sequence. Python [1,2,3][-1] yields 3, for example.

Anybody have an explanation for this surprising behavior?

welpwelp 3 days ago 1 reply      
Is this like a built-in equivalent of Skeleton and similar frameworks? It's kinda cool the way it brings CSS closer to frameworks like iOS', which has built-in UI components like collection views and such that can be extended to build interfaces easily.
gondo 3 days ago 2 replies      
"Oh no, Grid Garden doesn't work on this browser. It requires a browser that supports CSS grid, such as the latest version of Firefox, Chrome, or Safari. Use one of those to get gardening!" I am running Chrome 56 on MacOS
smpetrey 3 days ago 1 reply      
So the moment everyone hops on Flex-Box, CSS Grid becomes the next hot take now huh?
EamonnMR 3 days ago 2 replies      
This is pretty cool. One thing I notice is that when you submit an answer, it shakes the editor box. This usually has a "you did something wrong" connotation (ie if you type a bad password when signing into a mac.)
nvdk 3 days ago 0 replies      
Ugh does not seem to use feature detection, but some other ugly browser detection scheme :/

edit: scratch that I was confusing css grid for flexbox, my browser does not support css grid yet.

Honzo 3 days ago 1 reply      
Why wouldn't grid-column-start/end be zero indexed?
Pigo 3 days ago 2 replies      
I haven't stepped up my styling game in awhile. Would anyone like to explain to me how css grid is better than say flexbox, or how the two are different?
weavie 3 days ago 1 reply      
Huh? I was only just starting to get used to flexbox. Are CSS grids meant to be a replacement/alternative/addition to flexbox?
redsummer 2 days ago 1 reply      
Haven't worked in CSS for about 5 years, but managed to get thru this. Are there any other decent games to learn CSS and JS?
julie1 2 days ago 0 replies      
Wahou, the Tk grid manager is back again.

Rule #1 of GUI: every geometry manager will reinvent Tk/Tcl poorly while saying it is crap.

friendzis 3 days ago 0 replies      
Sounds a bit peculiar when you put it this way, but chrome has finally caught up with IE :)

One more excuse not to use HTML tables in our toolbox

sultanofsaltin 2 days ago 0 replies      
Well that was fun! The last level took me a couple of minutes (I may have brute-forced the prior fr example).
Twisol 3 days ago 0 replies      
This is really nice! It pegs my CPU something awful though -- Firefox 52 on a mid-2014 Macbook Pro.
selbekk 3 days ago 0 replies      
Love this game - can't wait to start using this in production - in about two-three years ^^
darth_mastah 2 days ago 0 replies      
Very good work, the Grid Garden.

As a side note, I find the grid API confusing as well.

danadam 2 days ago 1 reply      

Ever heard of this thing called "contrast"? Could use some.

kiflay 2 days ago 1 reply      
Awesome. I found a few of the levels, like 23 and 24, a bit difficult and can't find a solution. It would have been great if there were reference solutions I could compare my results against.
chasing 3 days ago 0 replies      
Very nice! Works as advertised. ;-)
suyash 2 days ago 0 replies      
Unfortunately it's totally broken on the latest Safari; it says to download a new version of Safari for it to work.
trippo 1 day ago 0 replies      
PericlesTheo 3 days ago 0 replies      
Nicely done!
macphisto178 3 days ago 1 reply      
Do I have to use Canary? I'm on latest Chrome and it says my browser isn't supported.
pjmlp 3 days ago 2 replies      
"Oh no, Grid Garden doesn't work on this browser. It requires a browser that supports CSS grid, such as the latest version of Firefox, Chrome, or Safari. Use one of those to get gardening!"

Oh well, maybe in 5 years time I can make use of this.

React v15.5.0 facebook.github.io
396 points by shahzeb  17 hours ago   152 comments top 21
acemarke 15 hours ago 1 reply      
For those who are interested in some of the details of the work that's going on, Lin Clark's recent talk on "A Cartoon Intro to Fiber" at ReactConf 2017 is excellent [0]. There's a number of other existing writeups and resources on how Fiber works [1] as well. The roadmap for 15.5 and 16.0 migration is at [2], and the follow-up issue discussing the plan for the "addons" packages is at [3].

I'll also toss out my usual reminder that I keep a big list of links to high-quality tutorials and articles on React, Redux, and related topics, at https://github.com/markerikson/react-redux-links . Specifically intended to be a great starting point for anyone trying to learn the ecosystem, as well as a solid source of good info on more advanced topics. Finally, the Reactiflux chat channels on Discord are a great place to hang out, ask questions, and learn. The invite link is at https://www.reactiflux.com .

[0] https://www.youtube.com/watch?v=ZCuYPiUIONs

[1] https://github.com/markerikson/react-redux-links/blob/master...

[2] https://github.com/facebook/react/issues/8854

[3] https://github.com/facebook/react/issues/9207

TheAceOfHearts 14 hours ago 3 replies      
React team is doing an amazing job. I remember when it was first announced, I thought Facebook was crazy. "JSX? That sounds like a bad joke!" I don't think I've ever been so wrong. After hearing so much about React, I eventually tried it out and I realized that JSX wasn't a big deal at all, and in fact it was actually pretty awesome.

Their migration strategy is great for larger actively developed applications. Since Facebook is actually using React, they must have a migration strategy in place for breaking changes. Since breaking anything has such a big impact on the parent company, it makes me feel like I can trust em.

Heck, most of the items in this list of changes won't surprise anyone that's been following the project. Now there's less magic (e.g. React.createClass with its autobinding and mixins), and less React-specific code in your app (e.g. react-addons-update has no reason to live as a React addon when it can clearly live as a small standalone lib).

STRML 14 hours ago 6 replies      
This is a big deal to deprecate `createClass` and `propTypes`.

PropTypes' deprecation is not difficult to handle, but the removal of createClass means one of two things for library maintainers:

(1). They'll depend on the `create-class` shim package, or,

(2). They must now depend on an entire babel toolchain to ensure that their classes can run in ES5 environments, which is the de-facto environment that npm modules export for.

I'm concerned about (2). While we are probably due for another major shift in what npm modules export and what our new minimum browser compatibility is, the simple truth is that most authors expect to be able to skip babel transcompilation on their node_modules. So either all React component authors get on the Babel train, or they start shipping ES6 `main` entries. Either way is a little bit painful.

It's progress, no doubt, but there will be some stumbles along the way.

ggregoire 16 hours ago 7 replies      
For those still using propTypes, I'd recommend taking a look at Flow as a replacement.
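To make the trade-off concrete, here is an illustrative plain-JS sketch (no React, and `checkProps` is a made-up helper, not the real propTypes API) of what a runtime prop check boils down to — the kind of check Flow or TypeScript moves to compile time:

```javascript
// Made-up helper: return the names of props whose runtime type
// doesn't match a simple {propName: typeofString} spec.
function checkProps(props, spec) {
  return Object.keys(spec).filter(key => typeof props[key] !== spec[key]);
}

const spec = { title: 'string', count: 'number' };
console.log(checkProps({ title: 'hi', count: 3 }, spec)); // []
console.log(checkProps({ title: 42 }, spec));             // [ 'title', 'count' ]
```

With a static checker, the second call would fail before the code ever runs, instead of logging a warning at runtime.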


Drdrdrq 1 hour ago 1 reply      
Just curious: did Facebook change the license? AFAIK they can revoke 3rd parties' permission to use it. Am I mistaken? If not, isn't this a huge risk for startups?
amk_ 8 hours ago 1 reply      
The breakup of the React package into a bunch of smaller modules really puts packages that treat React as a peer dependency in a pickle. I have a component module using createClass that works fine and exports a transpiled bundle in package.json. I guess now we'll have to switch to create-react-class, or maintain some kind of "backports" release series for people that are still using older React versions but want bugfixes.

Anyone have experience with this sort of thing?

uranian 5 hours ago 1 reply      
What is it with the JavaScript landscape that keeps forcing developers to do things differently, with the penalty of your app no longer working if you don't comply?

I mean, creating a new type of brush for painters is ok, but I don't see the need for forcing them to redo their old paintings with the new type of brush in order to keep them visible..

IMHO CoffeeScript and some other compile-to-JavaScript languages are still much better than the entire Babel ES5/ES6/ES7 thing. But for some reason my free choice here is in jeopardy. The community has apparently chosen Babel and is now happily annihilating things that are not compatible with it.

In my opinion this is not only irresponsible, but very arrogant as well.

Although I do understand and can write higher order components, I still write and use small mixins in projects because they work for me. I also use createClass because I enjoy the autobinding and don't like the possibility of forgetting to call super.

Now I need to explain to my superiors why this warning is shown in the console, making me look stupid for using deprecated stuff. And I need to convince them why I need to spend weeks rewriting large parts of the codebase because the community thinks the way I write is stupid. Or I can of course stick to the current React version and wait until one of the dependencies breaks.

It would be really great if library upgrades very, very rarely break things. Imagine if all the authors of the 60+ npm libs I use in my apps are starting to break things this way, for me there is no intellectual excuse to justify that.

nodesocket 16 hours ago 3 replies      
Big news seems to be removal of `React.createClass()` in favor of:

 class HelloWorld extends React.Component { }
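One practical difference in that move, sketched in plain JS (no React required; the `Counter` class here is hypothetical): createClass autobound its methods, while ES6 class methods must be bound by hand (or defined as arrow-function fields).

```javascript
class Counter {
  constructor() {
    this.count = 0;
    // ES6 classes do not autobind: without this line, the detached
    // call below would throw, since `this` is undefined in strict mode.
    this.increment = this.increment.bind(this);
  }
  increment() {
    this.count += 1;
  }
}

const counter = new Counter();
const inc = counter.increment; // detached, as when passed as an event handler
inc();
console.log(counter.count); // 1
```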

hueller 15 hours ago 0 replies      
This is a good move. Modernization with sensible deprecation and scope re-evaluation with downsizing when more powerful alternatives exist. Too often codebases get bigger when they should really get smaller.
smdz 10 hours ago 0 replies      
I absolutely love how React+TypeScript setup handles PropTypes elegantly. And then you get the amazing intellisense automatically.


  interface State {}

  interface ISomeComponentProps {
    title: string;
    tooltip?: string;
    ....
  }

  export class SomeComponent extends React.Component<ISomeComponentProps, State> {
    ....
  }



PudgePacket 14 hours ago 1 reply      
Why do React and other js libraries emit warnings as console.error when browsers support console.warn?
whitefish 11 hours ago 3 replies      
I'd like to see React support shadow-dom and web components. Not holding my breath however, since Facebook considers web components to be a "competing technology".

Unlike real web components, React components are brittle since React does not have the equivalent of Shadow DOM.

sergiotapia 15 hours ago 1 reply      
Awesome changelog with great migration instructions. Bravo to the React team!

Going to set aside some hours on Saturday to upgrade our React version.

I recently started going all in on functional components where I don't need life-cycle events such as componentDidMount. Does anyone know if React is planning to make optimizations for code structured in this way?

baron816 14 hours ago 3 replies      
I hate that the React team prefers ES6 classes. This is what I do:

  function App(params) {
    const component = new React.Component(params);

    component.lifeCycleMethod = function() {...};
    component.render = function() {...};

    function privateMethod() {...}

    return component;
  }
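The factory style above can be made runnable as a plain-JS sketch (no React; `makeCounter` and `clamp` are made-up names for illustration): closures give you "private" state and helpers without class syntax or `this` binding issues.

```javascript
function makeCounter(start) {
  let count = start;      // private state, invisible to callers
  function clamp(n) {     // private helper, like privateMethod() above
    return Math.max(0, n);
  }
  return {
    increment() { count = clamp(count + 1); return count; },
    decrement() { count = clamp(count - 1); return count; },
    value() { return count; },
  };
}

const counter = makeCounter(1);
counter.decrement();
counter.decrement();
console.log(counter.value()); // 0 (clamped, never goes negative)
```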

Rapzid 16 hours ago 3 replies      
Fiber is what I'm really waiting for. Not much official chatter about it, but looks like a 16 release?

They just removed some addons in master that many third party packages rely on, including material-ui. Hopefully these other popular packages can be ready to go with the changes when the fiber release hits.

xamuel 12 hours ago 2 replies      
Happy to see propTypes getting shelved. Too many people stubbornly use propTypes even in Typescript projects. Hopefully this change will usher in the final stamping out of that.
bsimpson 15 hours ago 0 replies      
Of course, you'd need to use super appropriately, but I wonder if anyone's taken a stab at porting React mixins to ES2015 mixins:


aswanson 3 hours ago 2 replies      
I cannot keep up. I just started learning react/apollo/graphql and I'm already out of date.
ksherlock 14 hours ago 2 replies      
So... create-react-class is an unrelated node module. react-create-class (the correct one, I guess) is completely empty, other than the package.json.
revelation 16 hours ago 3 replies      
I still remember the times when warning were actual likely mistakes in your code, not "we're adding some more churn, update your stuff until we churn more".

If you want people to always ignore warnings, this is how you go about it.

lngnmn 3 hours ago 1 reply      
Why do people always end up with J2EE-like bloatware? There must be some pattern, something social. Perhaps it has something to do with the elitism of being a framework ninja, a local guru who has memorized all the meaningless nuances and can recite the mantras, so one can call oneself an expert.

The next step would be certification, of course. Certified expert in this particular mess of hundred of dependencies and half-a-dozen tools like Babel.

Let's say there is a law that any over-hyped project eventually ends up somewhere in the middle between OO-PHP and J2EE. Otherwise, how else would one be an expert front-end developer?

Google's responsive design looks like the last tiny island of sanity.

An In-Depth Look at Google's Tensor Processing Unit Architecture nextplatform.com
374 points by Katydid  2 days ago   66 comments top 12
struct 2 days ago 6 replies      
Interesting points I took from the paper[1]:

* They actually started deploying them in 2015, they're probably already hard at work on a new version!

* The TPU only operates on 8-bit integers (and 16-bit at half speed), whereas CPU/GPUs are 32-bit floating point. They point out in the discussion section that they did have an 8-bit CPU version of one of the benchmarks, and the TPU was ~3.5x faster.

* Used via TensorFlow.

* They don't really break out hardware-vs-hardware results for each model individually. It seems like the TPU suffers a lot whenever there's a really large number of weights and layers to handle, but without per-model numbers it's hard to see whether the TPU offers an advantage over the GPU for arbitrary networks.

[1] https://drive.google.com/file/d/0Bx4hafXDDq2EMzRNcy1vSUxtcEk...
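The 8-bit point above rests on quantization. Here is a toy sketch of symmetric 8-bit linear quantization, the general trick that lets inference hardware use int8 instead of float32; this is illustrative only, and the TPU paper's actual scheme is more involved than this.

```javascript
// Map floats in [-maxAbs, maxAbs] onto signed 8-bit integers in [-127, 127].
function quantize(xs) {
  const maxAbs = Math.max(...xs.map(Math.abs));
  const scale = maxAbs / 127;
  return { q: xs.map(x => Math.round(x / scale)), scale };
}

// Recover approximate floats from the int8 values and the stored scale.
function dequantize({ q, scale }) {
  return q.map(v => v * scale);
}

const { q, scale } = quantize([0.5, -1.27, 1.0]);
console.log(q); // [ 50, -127, 100 ]
console.log(dequantize({ q, scale })); // close to the original values
```

The round trip is lossy, which is why quantized inference trades a little accuracy for a lot of throughput.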

mooneater 2 days ago 3 replies      
"This first generation of TPUs targeted inference" from [1]

So they are telling us about inference hardware. I'm much more curious about training hardware.

[1] https://cloudplatform.googleblog.com/2017/04/quantifying-the...

slizard 2 days ago 2 replies      
It's a pity they omitted comparing against the Maxwell-gen GPUs like the M40/M4. Those were out already in late 2015 and are also on 28 nm.

Perhaps the reason is simply that they don't have them in their servers, but we'll see if Jeff Dean replies on G+ [1].

[1] https://plus.google.com/+JeffDean/posts/4n3rBF5utFQ?cfem=1

MichaelBurge 2 days ago 2 replies      
It's interesting that they focus on inference. I suppose training needs more computational power, but inference is what the end-user sees so it has harder requirements.

Most of us are probably better off building a few workstations at home with high-end cards. The hardware will be more efficient for the money. But if you're considering hiring someone to manage all your machines, power-efficiency and stability become more important than the performance/upfront $ ratio.

There's also FPGAs, but they tend to be much lower quality than the chips Intel or Nvidia put out so unless you know why you'd want them you don't need them.

zitterbewegung 2 days ago 1 reply      
Looking at the analysis of the article one of the big gains of this is that they have a Busy power usage of 384W which is lower than the other servers while having performance that is competitive with the other methods (although only restricting to inference).
zackmorris 2 days ago 0 replies      
While this is interesting for TensorFlow, I think that it will not result in more than an evolutionary step forward in AI. The reason is that the single greatest performance boost for computing in recent memory was the data locality metaphor used by MapReduce. It lets us get around CPU manufacturers sitting on their hands and the fact that memory just isn't going to get substantially faster.

I'd much rather see a general purpose CPU that uses something like an array of many hundreds or thousands of fixed-point ALUs with local high speed RAM for each core on-chip. Then program it in a parallel/matrix language like Octave or as a hybrid with the actor model from Erlang/Go. Basically give the developer full control over instructions and let the compiler and hardware perform those operations on many pieces of data at once. Like SIMD or VLIW without the pedantry and limitations of those instruction sets. If the developer wants to have a thousand realtime Linuxes running Python, then the hardware will only stand in the way if it can't do that, and we'll be left relying on academics to advance the state of the art. We shouldn't exclude the many millions of developers who are interested in this stuff by forcing them to use notation that doesn't build on their existing contextual experience.

I think an environment where the developer doesn't have to worry about counting cores or optimizing interconnect/state transfer, and can run arbitrary programs, is the only way that we'll move forward. Nothing should stop us from devoting half the chip to gradient descent and the other half to genetic algorithms, or simply experimenting with agents running as adversarial networks or cooperating in ant colony optimization. We should be able to start up and tear down algorithms borrowed from others to solve any problem at hand.

But not being able to have that freedom (in effect being stuck with the DSP approach taken by GPUs) is going to send us down yet another road to specialization and proprietary solutions that result in vendor lock-in. I've said this many times before and I'll continue to say it as long as we aren't seeing real general-purpose computing improve.

saosebastiao 2 days ago 5 replies      
Are people really using models so big and complex that the parameter space couldn't fit into an on-die cache? A fairly simple 8MB cache can give you 1,000,000 doubles for your parameter space, and it would allow you to get rid of an entire DRAM interface. It's a serious question, as I've never done any real deep learning...but coming from a world where I once scoffed at a random forest model with 80 parameters, it just seems absurd.
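The arithmetic in that question checks out; a two-line sanity check (plain numbers, not tied to any particular chip):

```javascript
// How many parameters fit in an 8 MB on-die cache?
const cacheBytes = 8 * 1024 * 1024;  // 8 MB
const doubles = cacheBytes / 8;      // 8 bytes per float64 "double"
console.log(doubles);                // 1048576, roughly 1,000,000 parameters
console.log(cacheBytes);             // 8388608 int8 weights would fit, TPU-style
```

Modern deep networks routinely have tens of millions of weights, which is why the DRAM interface is hard to drop.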
mdale 2 days ago 0 replies      
Interesting stuff; it really points to the complexity of measuring technical progress against Moore's law. It's really more fundamentally about how institutions can leverage information technologies and organize work and computation toward goals that are valued in society.
cr0sh 2 days ago 0 replies      
This appears to be a "scaled up" (as in number of cells in the array) and "scaled down" (as in die size) version of the old systolic array processors (going back quite a ways, to the 1980s and probably further).

As an example, the ALVINN self-driving vehicle used several such arrays for its on-board processing.

I'm not absolutely certain that this is the same, but it has the "smell" of it.

sgt101 2 days ago 0 replies      
Does anyone have a view as to whether deep kernels might ride to the rescue for the rest of us?


andrepd 2 days ago 1 reply      
They're comparing against 5-year old Kepler GPUs. I wonder how it had fared vs the latest Pascal cards, since they're several times more efficient than Kepler.
amelius 2 days ago 2 replies      
Are they using it in feedforward mode only? Or also for learning?
Ask HN: Building a side project that makes money. Where to start?
576 points by ihoys  3 days ago   244 comments top 76
mikekchar 3 days ago 10 replies      
This is going to sound like crazy advice, but having worked on many side projects in my life, the last thing that's going to let you down is your skills. What you really need is time. Let's say it takes you 400 hours to build your project -- in those 400 hours, you will build up enough skills to get you started (not nearly enough to be good at it, but good enough).

So you need to work consistently 1-2 hours a day on your side project. It really doesn't matter what you do. If you manage to get those 1-2 hours in, you will muddle through and accomplish something. If your goal is to make a side project and bring in a non-zero amount of money, this is achievable. Learn whatever you learn on that project and then do it again.

Personally, I would spend exactly $0 on your task because, like I said, the thing that will kill you in the end is likely to be time commitment. If you spend money, you will be out the money and your time. So start with time and see where it takes you.

As others have said, no need to get fancy. Just build the simplest thing that will get you started, using the simplest tools you can find.

nurettin 3 days ago 7 replies      
Not sure if I should share this, as it is a trivial and obvious thing to do. Recently I created a ramen-profitable app on Google Play that currently has a couple of thousand users.

The idea is to look for apps that have low ratings, high downloads and lots of recent comments, then make them better. You can use synonyms and the same niche category to increase visibility on Google Play. This is where the money is.

gsylvie 3 days ago 5 replies      
Here's what I did: at work I needed something. (A git commit graph). But the one I found was #1. buggy, and #2. too expensive. It wasn't my money, but I just couldn't allow my company to pay that much.

So I made my own, and fixed the bug: http://bit-booster.com/best.html

And then I realized I needed a rebase button on the pull-request screen... and so it continues to evolve.

Here's the thing: I've always known I'm a good maintenance programmer. I've always preferred working on existing software instead of making new software from scratch. And writing add-ons for Bitbucket is basically just another form of maintenance programming: reading Bitbucket's code, noticing its flaws and shortcomings, and fixing them.

Also, I love git, and I love going very deep into git (e.g., https://github.com/gsylvie/git-reverse.sh). So this is my dream job.

I've only made $7,000 USD after 1 year on this side project. But $1 of those dollars feels better than $10,000 from my day job.

superasn 3 days ago 4 replies      
I think you should start something very very small and forget about the money part for now. For me, my most successful project ideas came from problems that I faced during my own site launches.

Since you don't have much knowledge of FE development, I would suggest you keep things Simple Stupid and try to do as much as possible with HTML and jQuery. I have created really complex websites using just PHP and jQuery (sites that have made me 6 figures over time), plus you will learn the real nitty-gritty like DOM manipulation, CSS tricks, etc - which you will need to use anyway at least a few times regardless of the shiny JS framework.

I would highly recommend that at this time you don't get sucked into React, Node, Vue, etc. You will only end up wasting months with nothing to show for it (but maybe I'm just too old school).

Whatever time you have left after that, use it to learn online marketing. Learn about list building, SEO, Copywriting, outreach and affiliate marketing. Because that's how you turn your technology into actual money.

haser_au 3 days ago 4 replies      
Here's my suggestion. Walk down the road to shops in your area (small, family run businesses) and ask if they have a business problem they think IT can solve.

You'd be surprised how little some of these businesses know. I have previously:

- Built a travel database in MS Access for a travel agent (long time ago)
- Ordered and set up ADSL connections and email for a water tank manufacturer and a furniture store
- Captured requirements, researched, ordered and installed an office's worth (6 people) of IT kit for a not-for-profit (didn't charge them for this work)
- Designed and implemented a roster management system for an IT helpdesk at a university

There are heaps of opportunities. Just have to know where to look.

danielsamuels 3 days ago 1 reply      
I built Rocket League Replays[1], a website which analyses the replay files generated from matches played in Rocket League. I took my inspiration from GGTracker[2] which is essentially the same thing, but for StarCraft 2. I was looking through the replay files and noticed that there was some human readable content in them, so I wrote a parser[3] and built the site around it. Eventually I started a Patreon which allowed users to support the site in return for more advanced analysis. I get around $200/mo from that which covers the server costs etc, so I'm more than happy with that.

[1]: https://www.rocketleaguereplays.com/replays/

[2]: http://ggtracker.com/landing_tour

[3]: http://danielsamuels.co.uk/words/2015/07/27/rocket-league-re...
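The first step described above, spotting human-readable content inside a binary replay, can be sketched as a scan for runs of printable ASCII (what the Unix `strings` tool does). This is an illustrative guess at that first step, not the actual Rocket League parser; see the linked repo for the real thing.

```javascript
// Collect runs of printable ASCII (0x20-0x7e) that are at least minLen long.
function extractStrings(buf, minLen = 4) {
  const found = [];
  let run = '';
  for (const byte of buf) {
    if (byte >= 0x20 && byte <= 0x7e) {
      run += String.fromCharCode(byte);
    } else {
      if (run.length >= minLen) found.push(run);
      run = '';
    }
  }
  if (run.length >= minLen) found.push(run);
  return found;
}

// A tiny fabricated "binary" sample: null bytes around the word "Goal".
const sample = Buffer.from([0x00, 0x47, 0x6f, 0x61, 0x6c, 0x00, 0x41, 0x42]);
console.log(extractStrings(sample)); // [ 'Goal' ]
```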

simonbarker87 3 days ago 3 replies      
My side projects always come from personal needs, in each case I built it to solve my own problem before turning it in to a full product (summaries below). If you are going in with the sole intention of making money then make sure you know that there is a need for what you want to make and that you can get your project in front of people.

You don't need to use the latest and greatest tech, in fact, I would urge you don't. For front end you can stick to simple JQuery interactions, bootstrap theme and you'll be fine, depending on the market sector you go for they may not even care about the design, so long as it's functional.

Summary of my project and where they came from:

http://www.oneqstn.com, before launching our company's product I put the question "Where would you expect to buy the Radfan?" on the shop page with 5 options. I expanded this on its own dedicated domain and 5 years later it's still ticking along. Very popular in the Middle East for some reason.

http://www.stockcontrollerapp.com, I manage in-house production of my company's hardware product and after moving from an Excel spreadsheet to a Python script I decided to make a stock management app for small factories. Has made my work life much easier and is more appropriate for me than Unleashed.

http://www.taptimerapp.com, I didn't like any of the timer apps that I had tried so made my own mainly for use in the gym. All the others had too small touch targets, hard to see at a distance/without glasses on, or stopped music playing when the timer finished so I made an app that addressed these.

yodon 3 days ago 3 replies      
If your goal is to make money, don't allow yourself to write a single line of code until you have talked to people not related to you (and not close friends with you), heard at least two of them independently describe facing the same business problem, and heard both of them say "yes, that would help!" (or better "yes, I would buy that!") in response to your proposed solution.

Finding a real business problem and a real solution is what matters. The tech is just an implementation detail you work out later.

qin 3 days ago 0 replies      
Baffled why nobody has mentioned https://www.indiehackers.com/ yet.

If you're looking for some inspiration from others who've built revenue-generating side projects and businesses, I'd start here.

joeyspn 3 days ago 2 replies      
Start looking for people that complement your skills! I've just created a "HN Side Project Partner Search" Google spreadsheet for people seeking team mates or help building their side projects:


IMO building a team (2 or 3) is the best way to go...

Hopefully you'll find the idea useful (ideally this should be a website, but I'm testing the waters with a simple spreadsheet...)

akanet 3 days ago 1 reply      
I gave a talk at Dropbox literally about how to start a small business without quitting your day job. A lot of people have told me it was helpful. You can watch it here: https://youtu.be/J8UwcyYT3z0.

A lot of my focus was boiling down what approaches could plausibly work and what pitfalls to avoid.

bikamonki 3 days ago 2 replies      
List down 25 products/services you consume regularly. For each, ask whether a better version could be done. Yes? Do it.

Here's one case: the local/popular site to search for used cars sucks. It is slow, makes it hard to see/compare all options, does a silly full page reload on each added filter, is filled with outdated listings, flooded with ads, and pic slides take forever (all of this on my slow phone over a slow 3G connection, which is how most visitors must be using it). Furthermore, car dealers (who post most listings) complain about service and price. So I built the proverbial MVP and put it in the hands of my marketing partner (you won't sell a line of code if you do not partner with a person/company dedicated to pushing your stuff), who's already working on a deal with the used car dealers association, pitching a novel business plan, hopefully making some passive income for both of us.

kureikain 2 days ago 0 replies      
I think the easiest thing is to build what you need. That way, if it fails to make money, you still have the tool you wanted.

I built https://noty.im that way, a monitoring tool that calls me when my site is down.

Then I realized more things were needed, so I started adding features, and I plan to launch publicly soon.

I will say: don't worry about scale and technical details first. I learned that the hard way. Just get it out. No one will really care if something is broken or doesn't work when you don't have lots of customers.

So to answer your questions:

1. Can you provide some ideas on where to start? Pick a technology stack you're familiar with. Apply to Microsoft BizSpark to take advantage of the $150/month credit. Learn FE; it isn't that hard.

2. What are some simple things I can build by myself? Any idea?

I built `https://kolor.ml` in a Sunday. It's very simple, but I needed it. So you can try to build a simple/small utility that helps people with their daily lives, such as a tool to call people up in the morning.

Or a tool that checks whether a site exposes particular headers such as `nginx` or `php` version etc., and alerts if it finds an old or vulnerable one.

Of course, lots of people have already built those, but the point is just to get started; along the way you will realize what you really want to build.
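The header-audit idea above can be sketched as a pure function over fetched response headers (the `RISKY` list and function name here are assumptions for illustration, not anything from noty.im):

```javascript
// Headers that commonly leak server software/version info.
const RISKY = ['server', 'x-powered-by', 'x-aspnet-version'];

// Given a headers object (e.g. from an HTTP response), return the
// risky header names that are present, lowercased for comparison.
function leakyHeaders(headers) {
  return Object.keys(headers)
    .map(h => h.toLowerCase())
    .filter(name => RISKY.includes(name));
}

console.log(leakyHeaders({ Server: 'nginx/1.4.6', 'Content-Type': 'text/html' }));
// [ 'server' ]
```

A real tool would fetch the site on a schedule, compare the advertised versions against a vulnerability feed, and alert on matches.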

jasonswett 3 days ago 1 reply      
This is what I believe to be the formula:

1. Find a group of people who are interested in a subject.
2. Find out what, related to that subject, they want to buy.
3. Sell them that thing.

This is the approach I took with my book/videos at AngularOnRails.com, a "side project that makes money".

Another important thing is to surround yourself with people who have successfully done the thing you're trying to do.

I don't have much time right now but if you (or anybody) wants to talk about building side projects that make money, feel free to email me at jason@angularonrails.com. I'm not an expert but I know a hell of a lot more than I did 9 years ago when I started.

tmaly 3 days ago 0 replies      
I used my food side project as a way to learn new skills. I am still in the process of working on version 2.

The simplest way to start is to take a framework or system that has most of the basic parts ready for you to use.

Since you already know python, try to learn something like django and use Bootstrap with a CDN for your front end stuff.

I would recommend reading some of the posts on indiehackers.com to get an idea of how those people got started with an idea and how they got their first customers. Some do not even have any tech skills and just used WordPress or found someone to help them with the site. There is also a podcast for this that just got started that is excellent. The founder of IndieHackers is a YC alum named Courtland; he is a cool guy.

I chose to solve a problem that I personally encountered. If you cannot think of something, try picking something that you know requires lots of manual effort for some people. Then use some scripts from the book Running Lean to try to work out exactly what the problem is for those people.

Another great resource is OppsDaily which I love reading first thing in the morning. Cory sends out a problem someone has in a particular industry that needs to be solved. The criteria is that they must be willing to pay for it if someone responds. In many cases they will say how much they are willing to pay.

jjude 3 days ago 2 replies      
Start to teach. Create a course in udemy on what you know (data processing or management). If creating a video course overwhelms you, create a text based course. I'm using softcover to do that. You can check here: https://www.jjude.com/softcover-in-docker/

Creating a course can get you the momentum. You can start there and branch out to other things.

renegadesensei 3 days ago 0 replies      
Oh man I know this feel. I have been programming for startups for years and I have always had lots of ideas but never the mental commitment to finish anything.

I'm proud to say that very recently I did manage to complete a side project that I intend to launch in a week or so. It is a social site based on an idea I got from watching Japanese dramas.

What helped in my case was that my idea was really simple to build. I too have zero frontend / web design ability, so I just paid a guy from Craigslist to fix it up. Being able to bootstrap a finished product with a relatively small amount of time / money helps you get in that "closer" mentality instead of just playing around and never finishing.

I'd also suggest not worrying about making money at first. Just try to make a cool product or service. Money is a stressful and distracting motivator I find. Once you have something of value to offer and get some feedback from potential users, then you think more about pricing and marketing.

So in short, start small, don't be afraid to outsource and trade money for time, and don't worry about making a profit right away. That worked for me at least.

techbubble 3 days ago 0 replies      
Side projects that also do some public good might be a good avenue for you to consider. I built Walkstarter https://walkstarter.org a free walkathon fundraising platform for public schools as a side project. The experience is fantastic. I continue to develop my skills, e-meet new people, and the platform is on track to raise a very satisfying $1 million for schools.
patio11 3 days ago 1 reply      
What are some simple things I can build by myself?

You're already successful at selling enterprise services to at least one company in management and backend data processing. Have you considered selling management and backend data processing advice, perhaps delivered in the form of a PDF or series of videos? This is stupendously valuable to tech companies if you meaningfully improve on what they already have, and it would allow you to sell to people who have expense accounts tied to, to steal a friend's phrase, the economic engine of the planet.

You don't need a commandingly high bar of programming sophistication to sell books. There exist services that can do all the heavy lifting for you. If you prefer knocking together a site to sell your own books, it is essentially an hour to get the minimum thing to charge money and ~2 days to get something which could plausibly be the kernel of an ongoing business.

fpgaminer 3 days ago 0 replies      
I researched startups on IndieHackers.com and wrote an article:


The summary, well, sums it up pretty well I think:

> Listen to your friends, coworkers, and clients. Find something painful they mentioned that you also have first-hand experience with, or that you've needed at your job. Package it up so it's easy to use. Build an MVP, get feedback, iterate. Charge more than you think you should. Listen to your customers. Launch on ProductHunt. Market the hell out of it! Use content marketing; reach out to communities, forums, friends, and businesses with cold calls/emails. If you've built something great, word of mouth will do its magic. You can do this in your spare time, and probably should.

averagewall 3 days ago 4 replies      
I've made a Windows GUI for a powerful command-line open source application that was for Linux. It makes a few thousand dollars a month. It did take many years of part-time programming, along with user feedback, to get to that stage though.

Desktop and Windows might seem like a dying market but that's what people use at work and those are the people who can pay for things.

cheez 3 days ago 2 replies      
Pick someone else's niche making money, tweak it to make it your own, implement. Rinse, repeat until you achieve success.
grow91 3 days ago 1 reply      
I started a side project last year and it's been a fantastic experience. It's an opportunity to "scratch an itch" that the day job can't provide (which for me is doing whatever I want).

I had some frontend dev skills but didn't have the backend chops, so I hired someone on Upwork. I'm pretty busy at work so getting someone else involved is key (If I was by myself I'm not sure I would have stuck with it).

It's been a year and the app is doing about $3k/mo in revenue.

gschier 2 days ago 1 reply      
As someone with many failed side projects, I can tell you that having a goal of making money is usually a bad thing. Like many other people have said, you can learn the skills. The hardest part of building a side project is finding the time and staying motivated for more than a few months.

So, pick something _you_ will use and, if you enjoy building/using it, so will others. Obviously think about ways to monetize it, but money should be more of a side-effect than a motivation.

I'm currently working on https://insomnia.rest, which makes around $800/mo right now. I started it as a side-project a couple years ago with no intention of making money. However, traffic grew organically and I eventually left my job to pursue it full-time.

In summary, find something you love to work on and let it consume you. If you do this, making $100/mo should come in no time. Have fun hacking!

pseingatl 3 days ago 0 replies      
Look at oppsdaily.com. The developer posted here a while back. While there's no archive, there are daily postings concerning software needs. Some of the needs are unrelated to software, but most are. Could give you some ideas. (I have no relationship with the site or maker).
tonyedgecombe 3 days ago 0 replies      
"The fallacy is that I have so much information about day to day job in my head that I have lost all creative juice."

I wonder if more programming is going to bring back those creative juices, perhaps you should think about doing something away from the keyboard instead?

j_s 3 days ago 0 replies      
Where should I start? [...] where to start?

Building an audience and doing market validation before building a side project are the top starting priorities for "a side project that makes money".

Start by building an audience (this can be as simple as interacting with professionals on Twitter and/or their own blogs, or even contributing here on HN!). I won't be able to determine what people want without asking people, and I will save a lot of wasted time by building something that I can guarantee people already want to pay for! For example: I intend to walk around my neighborhood with a survey to gauge interest in localized "technology disaster prevention" (aka initial setup of PC & phone backups with verification and increasingly annoying reminders) as a service. The first sideproject is the hardest because initially the audience is smallest but then can be re-used.

I hesitated to post this because most developers have an "if you build it, they will come" mentality (and a tendency to focus on technology/implementation details that they enjoy) that even I personally have a hard time overcoming myself. However, if the criteria is making money, building an audience is the right first step. Once the bare-minimum MVP functions, marketing makes all the difference on the "that makes money" part (see my list of random books to buy elsewhere in this thread)... and there is no point over-engineering something I can't convince anyone to use!

I realize I'm going out on a limb a bit to say that market validation before each sideproject comes second... I know of one example of someone who has built an audience while publicly initiating sideprojects without thorough market validation (focusing on technology instead -- note that this determination is 100% my own armchair quarterbacking with the benefit of hindsight); this person's projects appear to be faltering because of poor market fit. However, it hasn't stopped many from buying into this person's brand / other projects, and that audience is now following the next project even though initially it appears to be trending toward the same mistake!

PS. As mentioned elsewhere I could shortcut market validation by tracking down commercial products (already being paid for) that are getting a lot of visibility and addressing issues raised in bad reviews; however even when going this route I will still benefit greatly by having an audience to market the replacement.

Edit: switched to first person to preach to myself to get off my butt and start doing something!

wordpressdev 3 days ago 0 replies      
You can start with what you already know and then build on it, or diversify, over time. The easiest way to monetize your knowledge is to create a blog and link it up with social media (Twitter, FB, YouTube, etc.).

For example, you can write about management and backend data processing (what you do at work). This way, you don't have to learn something new to start your side project (except maybe how to manage a blog). The blog can be monetized via Ad networks like Adsense and Amazon affiliate program etc. As you grow, you may take in direct advertisers, sponsored content etc.

joelrunyon 3 days ago 1 reply      
Listen to http://sidehustleschool.com - it's a story every day about someone who did just this.
roycehaynes 3 days ago 1 reply      
Spend a few days identifying problems people pay to solve, particularly easy problems that you can solve yourself. The key is that the solution has to be fairly easy to build, since you're not comfortable full-stack. Once you've chosen a solution that already makes money solving a problem, build the same solution but position it for a niche market, or make a better product than the competitors.

I would at minimum leverage Bootstrap or Semantic UI for your UI. Otherwise, hire someone to do the web interface for you.

rb808 3 days ago 0 replies      
I don't think any advice is any use unless we know what you want.

First thing is you should decide why you're doing it. Is it to make money, or have fun, or enhance your current mgmt skills or to learn to code in a whole new area?

Only once you've decided that, then questions like "I don't have any frontend dev skills. Where should I start?" and "Should I outsource the website development part?" are possible to answer.

mendeza 3 days ago 0 replies      
Get some inspiration from IndieHackers.com. I have been wanting to do something similar, and there is tons of great advice and wisdom there from solo developers building their own businesses.
chad_strategic 2 days ago 0 replies      
Before you do anything, I would recommend that you clearly state a goal. Do you want to learn a new language? Do you want $100 a month in revenue? Do you want to learn a little SEO?

Recently, I started a project for the sole purpose of learning more about Node.js, and then, in a second phase, Angular 2.

So a few months later I have a process that extracts data from the Amazon API and looks for price decreases in products. I learned quite a bit about Node.js and even about MySQL database structures, so it was a good learning exercise.

Although I have accomplished my objective, I want to make money, and this is the problem. Now that I have learned what I needed, I unfortunately have to learn more about SEO and the Twitter and Facebook APIs to get users to visit my 200k web pages and make some money. So the side work winds up becoming a challenge, and sometimes a burden, as you keep figuring out how to reach your goal.

But when you reach your goal of $100 a month, then you will want more... So basically it never ends.
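The price-decrease check described above is simple enough to sketch in a few lines. This is a hypothetical illustration (in Python rather than the commenter's Node.js); the function name and sample product IDs are made up, not the commenter's actual Amazon API code.

```python
# Hypothetical sketch: given a history of observed prices per product,
# flag products whose most recent price is below the previous one.

def find_price_drops(history):
    """history maps product id -> list of prices in observation order.

    Returns {product_id: (previous_price, latest_price)} for every
    product whose latest observation is lower than the one before it.
    """
    drops = {}
    for product_id, prices in history.items():
        # Need at least two observations to detect a decrease.
        if len(prices) >= 2 and prices[-1] < prices[-2]:
            drops[product_id] = (prices[-2], prices[-1])
    return drops

if __name__ == "__main__":
    observed = {
        "B000TEST01": [19.99, 19.99, 14.99],  # price dropped
        "B000TEST02": [8.49, 9.99],           # price rose
    }
    print(find_price_drops(observed))  # {'B000TEST01': (19.99, 14.99)}
```

In a real project the history would come from periodically polling the product API and persisting each observation (e.g. in MySQL, as the commenter did); the comparison logic stays the same.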


daraosn 2 days ago 0 replies      
My advice: don't focus so much on how to build it; focus on how to grow it... REALLY!

I've done so many complex projects that in the end I couldn't sell, and that's frustrating... please hear me: first figure out how to sell it (or at least get good traffic to a crappy WordPress site), then build a very crappy version, and then improve it over time.

I read this recently, and I think it's gold: https://www.blackhatworld.com/seo/making-money-online-it-all...

ryandrake 3 days ago 3 replies      
Surprised this has not been mentioned yet: Make sure your current employer is cool with side projects and moonlighting. Very few companies I've worked for are OK with it, even if done completely with personal equipment/time. You don't want to build the next Facebook in your free time and get fired over it or have your current employer claim IP ownership of it.
cyberphonze 3 days ago 1 reply      
Firstly, as others have said, build something you would use and that interests you. It doesn't have to make money if it adds value to you personally or professionally (technical knowledge and hard life lessons learnt). Secondly, just ship it; personal projects can easily become obsessions, always needing one more thing. I did this, and even though I hate parts of my app's design, it is getting good feedback.

I recently had a quiet period in my freelance work so spent the time learning React Native amongst other things. I applied this to an idea I have had rattling around in my head for a point tracking app for people on Slimming World. I spent 4 weeks developing this and then shipped it to iOS. In under 2 weeks it has grown to almost 10,000 registered users, is number 4 in the UK Lifestyle Free Apps chart (ahead of Slimming World's own app) and has made enough ad revenue to cover the only costs I have had (App Store membership costs). It is never going to bring in big money but the lessons I have learnt are priceless.

susi22 3 days ago 1 reply      
If you want a stable tech stack that'll stay the same for the next few years, check out Clojure + ClojureScript. I'm still doing things the same way I did when I started a few years ago.

Regarding FE dev: I also have a technical background (EE) and I hated all CSS (HTML isn't so bad). Flexbox, though, is a life changer. It's actually enjoyable, and I can get stuff done without spending hours on simple layout issues.

fuckemem 2 days ago 0 replies      
Here is an angle on this:

The goal is to stick to something until completion.

Psychologically this is easier if you are enjoying it.

So choose a project and a tech stack you can really get into.

Scratch your own itch, so that even if no one else uses it or buys it, at least you can.

Look at the non-financial upsides, so that if you make zero or little money you can still feel proud: For example - learning new skills that might help you get a raise, learning marketing so that your next project is more likely to be successful, etc.

A word of warning: once you have spent some time on a side project, the shine will wear off and it will feel like a job, and you have to find ways to keep yourself motivated when you could literally just go and watch TV instead on your time off.

0898 3 days ago 0 replies      
I started holding talks for independent agency owners (www.agencyhackers.co.uk). It brings in about 200 a month from an 'agency roundtable' that I run. That's not much, obviously; it only just covers my ConvertKit subscription and the other SaaS software I need, like Reply.io. But I think this is an audience I will be able to monetise with webinars, conferences, etc.
rezashirazian 2 days ago 0 replies      
As someone who has done countless side projects (check them out here: http://www.reza.codes) I suggest taking out "making money" as a variable and focus on things that interest you or something that stems from a personal need.

The satisfaction you get from these side projects will come from being able to finish them as opposed to try and make money from them. When you try to take on a side project with the goal of making money, you'll end up sinking way too much time in marketing and reaching out to possible customers as opposed to building something (which I find to be more fun and rewarding).

And the time you spend on trying to get people to sign up or even try your product doesn't have the same returns in satisfaction as building it. (my opinion of course)

simonebrunozzi 2 days ago 0 replies      
A little side project of mine is a chrome extension to be able to read (or write) summaries of articles on Hacker News.

It is in alpha stage right now, but I'm curious to hear your thoughts about it: https://chrome.google.com/webstore/detail/mnmn/kepcdifhbfjep...

Screenshot of how it looks: https://github.com/simonebrunozzi/MNMN/blob/master/screensho...

(note: the "+" button is only available for users that are "editors". If you try it out, your user doesn't have that enabled by default).

amelius 3 days ago 4 replies      
If thousands of engineers are looking for something to build that others will pay for (but can't find it), then that tells me there is something fundamentally wrong with the way the economy works. Shouldn't it be the other way around? People who have a certain need express it, and engineers just pick one and work on it.
19eightyfour 3 days ago 0 replies      
I guess you could focus on building a side project that doesn't need front end skills. Your aim is to get a bit of satisfaction and money and prove you can do something like this, right? Building something simple you can start on right now, starting from where you are right now.

Two ideas:

You could build some kind of email integration, or something delivered over, or using, email. The email processing itself would be mostly backend stuff, you could template the emails with Django or even Python triple quote strings.

Or you could build an API of some sort. The only front end really needed for an API company ( such as Stripe or whatever ), is documentation. You can write your docs on one of those doc hosting platforms ( readthedocs might be one? I don't know much about it now ).

For your side project, it's probably important to pick some things you like and start doing them, rather than trying to make certain up front which things are going to make money.

skdotdan 3 days ago 3 replies      
Instead of building an MVP and then trying to sell it, there are people who suggest:

1. Think of an idea.

2. Make a simple landing page with a mockup or something.

3. Try to sell the product. And by selling the product I mean actually selling it (so people actually transferring you money). If you can achieve a given amount of sales in a given amount of time (important: set concrete goals with concrete timing), then you have validated your product.

4. Build a first version of your product.

5. Iterate.

Very simple to say, very difficult to do (I've never tried, but I would probably fail). But I think there would be ways to systematically apply steps 1-3 until you find the right product to work on.

Any thoughts?

encoderer 3 days ago 0 replies      
As a software engineer who has built a side project that more or less pays my Bay Area mortgage here is my advice:

Find an idea that plays to your strengths and build something with a friend/coworker who is a better frontend developer. A good partner is invaluable, and with this you can already see how.

Also, charge on day one imo.

JayeshSidhwani 3 days ago 1 reply      
You can work with freelance companies [like https://indiez.io/]. This will mean that you can pair up with someone who has complementary skills, and not worry about getting the projects yourself.
jenamety 3 days ago 0 replies      
"Start Small, Stay Small: A Developer's Guide to Launching a Startup" by Rob Walling.

It really got me thinking about the best ways (i.e., most time-efficient and most valuable) to vet an idea before lifting a finger to build.

helen842000 2 days ago 0 replies      
Instead of focusing on what you need to build, focus on what you want to fix.

Firstly do you have any code, processes, methods or scripts you have created as part of your job that saves your team time or money? Could you re-package them for sale?

I don't think that tech is a good starting point because your solution might not need to be software.

Razengan 3 days ago 0 replies      
> What are some simple things I can build by myself? Any ideas?

How about an iOS/Android game with optional micro-transactions?

In game development you can indulge and improve as many different skills as you want; graphics, sound, music, mathematics, networking, AI, UX design, character design, writing, storytelling, difficulty balancing, teaching..

An indie game project will give the freedom to be as creative as you want, and you get to enjoy your own product, but of course you don't have to arrive at a finished, marketable product to have fun building it.

Opteron67 3 days ago 0 replies      
A lot of money is moving around, but first you need to take a look at other stuff and activities rather than coding/refactoring.

Try to see other people's activities, and look at how much they spend on basic services you could improve. If you suck at UX/UI, give some money to someone else who can do it for you.

What matters is how fast you can release something; do not spend time on courses or fancy optimisations.

Also, do not focus on a single side project alone; create 3 or 4 projects a year.

Swizec 3 days ago 2 replies      
Step 1: "Pay Me" button

You'll be amazed by how many ideas you never have to waste time building, if you put up a paywall and nobody pays.

Tezos 3 days ago 1 reply      
Yup, instead of starting a side project you should acquire some Tezos blockchain tokens and build an IoT project on top of them when the network launches.

You don't need frontend skills.

oldmancoyote 3 days ago 0 replies      
Before you get deeply into what you are going to sell, consider how you are going to market it. Marketing is a * * *! Successful marketing is harder than programming a product, much harder and more problematic. Just ask the folks trying to sell iOS apps when there are about 2 million apps on the market. Lining up a buyer for a custom product before you begin (as some here have suggested) sounds very attractive to me.
hkmurakami 3 days ago 0 replies      
Find customers first. Then build what they want to buy.
joshfraser 2 days ago 0 replies      
My advice to all startups is that you should spend as much time thinking about how you are going to find & acquire customers as you do on your product. The same goes for cash-flow businesses & side-projects.

One approach is to decide which customers you can find the easiest and then ask them about their problems. Start there.

reacweb 3 days ago 0 replies      
Build something that helps you in your everyday job. This can increase your motivation to improve it and keep working on it regularly. When it starts being useful, try to find another user to get feedback and add features. Before considering putting it in the wild, try to find a couple of hackers to anticipate potential security issues ;-)
amureki 2 days ago 0 replies      
I recently ran into the following problem: I did an ugly POC (proof of concept), but I don't know how to properly advertise it to get responses (beyond HN, Reddit, and a couple of similar sites).
davidjnelson 3 days ago 1 reply      
A simple way to make $100+/month is writing short books and articles, giving away things like software, and putting AdSense ads on the site.
COil 3 days ago 0 replies      
You shouldn't start a side project for money, but for something you like or believe in. Money will come after. (Maybe!)
dorait 3 days ago 1 reply      
Check out https://www.meetup.com/Code-for-San-Francisco-Civic-Hack-Nig...

Find one similar in your region. If it does not exist, start one.

Coming up with an idea for a product that is useful and that people will pay for is more difficult than actually implementing one.

llorensj28 3 days ago 0 replies      
Definitely build something you personally need. Try to see what you can do without building a product. For example, site for a service that could eventually become a product in the future, but allows you to validate early on and figure out a business model prior to touching any code.
shanecleveland 2 days ago 0 replies      
Try to look at what you do at your day job from an outsider's perspective. What are some little things that to you seem trivial and obvious, but to an outsider would seem complex and foreign? There are opportunities there to package your knowledge into a tool or resource.

Example: i

retrac98 3 days ago 0 replies      
I wrote up my experiences with successfully shipping a side project here: https://medium.com/leaf-software/5-tips-for-actually-shippin...
richev 3 days ago 0 replies      
Build something that you yourself find useful (and that you can reasonably assume will be useful to other people too).
noir_lord 3 days ago 0 replies      
If you are weaker on the FE side, then from a project point of view keep your 'stack' really small. jQuery and Vue.js will get you a long way without really needing much front-end knowledge, and you can then gradually add in the other tools as you go (things like SASS/Less etc.).
zdware 2 days ago 0 replies      
Anytime you start focusing on monetization of a side project, it stops being a side project and more of a "startup". Your mindset has to change around it entirely. You now have to consider marketing, your audience, and legality.
pmcpinto 3 days ago 0 replies      
What about a side project that isn't heavily related to tech? What are your passions or hobbies outside work? Maybe one of them can lead you to a niche market. You can also share your professional knowledge. Do you like teaching or writing?
anothercomment 3 days ago 0 replies      
Another option might be looking into shopify's tutorials on drop shipping. https://www.shopify.com/guides/dropshipping
techaddict009 3 days ago 0 replies      
Find something simple that can solve a problem for a few people. Use Bootstrap if you are not good at UI; it's simple yet powerful.

Regarding monetization, you can use AdSense, a donation button, or a monthly charge, depending on the type of your product.

onion2k 3 days ago 0 replies      
If you want to make money from your side project (or startup, or whatever) figure out how you're going to get customers before you do anything else. If you can't do that then you won't make a dime.
pbreit 3 days ago 0 replies      
First, you need to turn your attitude around and be a lot more positive.

With respect to a project I'll let you in on a secret: if your service does something valuable, it doesn't have to look pretty.

borplk 3 days ago 0 replies      
Man my 2 cents is iterate in tiny tiny steps.

Whatever you do get it up there in an embarrassing state and keep making it better.

I wasted lots of time by wanting to do things right and so on.

Firegarden 2 days ago 0 replies      
How about we all band together and make one big side project? Pile on as many of us as we can and just keep evolving it.
pw 2 days ago 0 replies      
It requires a little bit of frontend work, but I've had good luck with content websites based on public data sets.
bartvk 3 days ago 1 reply      
> I have so much information about day to day job in my head that I have lost all creative juice

Although you ask great questions, this bit is what worries me. It doesn't feel good - it seems your job is taking too much energy from you. Have you thought about getting another, easier job?

SmellTheGlove 1 day ago 0 replies      
Great topic. We sound a lot alike. I work in a different sector, but I'm also now squarely in management and have been for a while, and my previous expertise was data engineering and infrastructure. Started with zero front end ability, and also not much Python either since I'm in the size of company and industry that still accomplishes most data work in SAS, and what it can't, Informatica and DataStage. Here's where I'm at, and so as to not bury the lede, I'm not making money yet but I found satisfaction in just spending some time each week on my projects -

1. If you know python, you can probably make a pretty natural jump to Flask. I didn't know Python, but I could program in a handful of other languages, so I figured I'd pick it up as a useful tool anyway. You may like this tutorial:


I'd say I really started learning when I got to the stage on authentication. The reason is that this tutorial implements OpenID, which isn't very common anymore, so I went off and implemented OAuth instead - heavily googling and scavenging, but ultimately having to piece together something that worked myself. I learned a lot that afternoon.

You could do this with any framework and there are tons of them. I chose Python and Flask over a Javascript-based framework because learning Python in parallel would be useful to me in data engineering, even though I don't write code for a living anymore.

2. As others have said, time commitment is the biggest issue. Figure out what you can give this, and scope appropriately.

3. I haven't done this because I'm too much of a completionist to pull the trigger, but get your MVP out there and build off of it. For me, I've decided I'm pretty happy just spending time on the project, even if no one else has seen it.

4. Bootstrap is my friend, it can be yours too. I have never been a strong visual person. I like words on a page. I have no eye for what makes a good visual and what doesn't, which has been my biggest developmental item when I moved into an executive role last year. All that said, Bootstrap is awesome and makes it a lot easier to build good looking websites. I started off here and built out a static website for an idea I'd had, and am now circling back to build the things I want to be dynamic in Flask.

5. There are a lot of choices out there. Unless you're developing bleeding edge, and I may get flamed for this, most of the choices really don't matter that much. I chose Python+Flask+Bootstrap because I liked each individually, it seemed like something I could work with, and NOT because I decided they were objectively better than Node, Angular, Express, React, or anything else that I haven't touched. I also sort of like that there isn't a new Python web framework each day, so diving into Flask seems like a more stable investment of my time. I'm sure there are drawbacks.

6. When it all starts to come together, the real, revenue generating idea might be to address pain points in your day job. My sector is insurance. I know a lot about certain operational functions. Eventually, I could solve some of those and build a business around it, I tell myself. You probably have some specialized domain knowledge as well. Consider that.

Good luck, and have fun. Like I said, I'm happier just for having taken on the challenge. If I ever make a dollar, that'd be good too, but less important than I initially thought.
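The Flask suggestion in point 1 can be made concrete with a minimal sketch. This is a hypothetical hello-world app, not the tutorial's own code; the route and return string are illustrative.

```python
# Minimal Flask app: one route that returns a string.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # Flask sends a plain string back as the HTTP response body.
    return "Hello, side project!"

# To serve locally during development:
#   flask --app <thisfile> run      (or call app.run(debug=True))
```

A handy property for a completionist: you can exercise routes without a browser or a running server via Flask's test client, e.g. `app.test_client().get("/")`, which is also how you would unit-test the OAuth flow mentioned above.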

Over the Air: Exploiting Broadcom's Wi-Fi Stack googleprojectzero.blogspot.com
432 points by ivank  3 days ago   151 comments top 15
mrb 3 days ago 4 replies      
How ironic. Yesterday on HN someone said "Google Security Team, here's your call to stop pontificating on the Project Zero blog and throwing cheap muck at Microsoft. You've got an even bigger and more complicated mess to clean up, you dug the hole yourself, it's going to take you longer, and you should have started on it years ago" [1]

And today we have this very impressive counter-example of Google putting some engineers to work for months doing vuln research for making, in the end, EVERYONE safer: Apple users, Samsung users, and hundreds of other mobile device vendors who use this popular Broadcom Wi-Fi chipset in products shipped to 1B+ users.

But no, somehow, tomorrow it will again be all Google's fault that Android-derived commercial works are insecure and poorly maintained by their respective vendors.

[1] https://news.ycombinator.com/item?id=14023969

scarybeast 3 days ago 3 replies      
This is one of the most serious and instructive pieces of technical security work we're likely to see this year. In case it hasn't sunk in:

- This vulnerability affects tons of smartphones (iPhone, Nexus, Samsung S*).

- The attack proceeds silently over WiFi -- you wouldn't see any indication you've been nailed.

- Mitigations and protections on WiFi embedded chips are weak.

- The second blog post will show how to fully commandeer the main phone processor by _hopping from the WiFi chip to the host_.

Imagine the havoc you could wreak by walking around a large city downtown, spewing out exploits to anyone who comes into WiFi range :-)

jacquesm 3 days ago 3 replies      
That's going to hurt a lot of folks, especially those whose manufacturers are not doing their bit with respect to updates. It's absolutely incredible to me how sloppy manufacturers are when it comes to keeping phones updated; they seem to see phones older than two years as effectively end-of-life.

Even Google is guilty of this, though to a slightly lesser extent.

See: http://www.androidpolice.com/2016/06/21/google-support-site-...

So three years or 18 months past sale.

Personally, I think a phone is only end-of-life when it stops working, and that manufacturers should either offer to buy phones back if they don't want to support them any longer, or be forced to provide security updates.

8_hours_ago 3 days ago 2 replies      
This research is effectively a free audit of Broadcom's firmware by Google. At what point does Broadcom approach Google, have the appropriate NDAs signed, and give them access to the source code? If someone is providing a (very valuable) free service to you, wouldn't you want to make their lives easier?

I assume there are some important reasons why this wouldn't occur, but at first glance, it seems to me that the pros outweigh the cons.

shock 3 days ago 1 reply      
> Broadcom have informed me that newer versions of the SoC utilise the MPU, along with several additional hardware security mechanisms. This is an interesting development and a step in the right direction. They are also considering implementing exploit mitigations in future firmware versions.

...considering implementing exploit mitigations in future firmware versions. I'm somewhat doubtful that they give much shit unless it hurts their bottom line. This sounds like lip service. What else are they gonna say? "We're not considering implementing exploit mitigations"?

osivertsson 3 days ago 3 replies      
Whoa! This is really impressive stuff, and will cause headaches in my day job, where we develop a product using this WiFi SoC.

Can this vulnerability cause content owners and DRM vendors to no longer allow such devices to decode 4K content? I'm thinking, for example, of PlayReady certification that may be withdrawn/downgraded because of this issue, but I'm fuzzy on the details of how this would work.

chillydawg 3 days ago 0 replies      
Project Zero are seriously doing good work here. This attack can passively own a large portion of modern smartphones left unpatched against these vulns.
hmottestad 3 days ago 0 replies      
My wife has a Huawei P7. Seems like it has the Broadcom chip :(

Huawei has provided exactly 1 update to the phone since it was released. And only for those living in New Zealand.

Other than clearing out all the trusted wifis and just keeping the home wifi, is there anything at all that can be done?

glasshead969 3 days ago 0 replies      
iOS 10.3.1 patches this exploit.


kyrra 3 days ago 0 replies      
For reference, here is the bug[0] that affected Apple that was discussed yesterday[1]. One commenter on that HN topic noticed that there was 1 other public bug about Broadcom wifi chips, though it was not the specific one that affected Apple.

This blog post points to 4 Project Zero bugs for different Broadcom issues.

[0] https://bugs.chromium.org/p/project-zero/issues/detail?id=10...

[1] https://news.ycombinator.com/item?id=14024971

caio1982 3 days ago 0 replies      
That is one of the most educational texts I've ever read on network hacking/security; can't wait for the next part(s)!
thomastjeffery 2 days ago 0 replies      
This is a prime example of why manufacturers (like Broadcom) should always have open source drivers.

Also a prime example of why manufacturers should always allow the user to update the software on his/her device.

Broadcom's closed driver stack has gone from being the Linux user's headache to being a serious vulnerability in most phones.

When will vendors get it through their respective thick skulls that there is no legitimate reason or benefit to keeping their drivers proprietary?

blumentopf 3 days ago 1 reply      
There's a bug in Apple's EFI driver for BCM4331 cards present on a lot of older Macs which keeps the card enabled even after handing over control to the OS. A patch went into Linux 4.7 to reset the card in an early quirk, but I suspect other OSes can be taken over via WiFi on the affected machines:


dleibovic 3 days ago 1 reply      
Is there a way to tell if my android phone received an update to protect against this exploit?
toperc 3 days ago 2 replies      
FDA Authorizes Ten 23andme Genetic Health Risk Reports 23andme.com
306 points by checkoutmygenes  1 day ago   180 comments top 17
butisaidsudo 1 day ago 9 replies      
> For several years, 23andMe has worked on demonstrating that its reports are easy to understand and analytically valid...

I guess these are different reports, but I know a genetic counsellor who describes 23andMe's carrier screening tests as "the bane of their existence". Those reports seem not-so-easy to understand based on the patients she sees.

One problem is that they warn that your offspring are at high risk for some condition, when really "high risk" means 0.5% higher risk than the general population. The other is that they may say you are not a carrier for a certain condition, when they only test for one variant of it, where proper tests will test for multiple variants. They can both scare and soothe irresponsibly.
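The absolute-vs-relative framing the parent describes is easy to see with made-up numbers (the rates below are illustrative only, not real 23andMe figures):

```python
# Hypothetical numbers to illustrate how the same risk change reads very
# differently in absolute vs relative terms. Not real 23andMe data.

def describe_risk(baseline, reported):
    """Return (absolute increase, relative increase) for a risk change."""
    absolute_increase = reported - baseline
    relative_increase = (reported - baseline) / baseline
    return absolute_increase, relative_increase

baseline = 0.010   # assume 1.0% lifetime risk in the general population
reported = 0.015   # assume 1.5% risk gets flagged as "high risk"

abs_inc, rel_inc = describe_risk(baseline, reported)
print(f"Absolute increase: {abs_inc:.1%}")   # prints "Absolute increase: 0.5%"
print(f"Relative increase: {rel_inc:.0%}")   # prints "Relative increase: 50%"
```

A report can honestly call the second framing "50% higher risk" while the absolute change is half a percentage point, which is exactly the kind of gap that scares patients.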

grandalf 1 day ago 1 reply      
I'm surprised to see all the fear-mongering in this thread.

We leave genetic material behind everywhere we go. 23andme analyzes only a small subset of one's DNA.

The most important thing to realize about genetics is that very few health conditions (and even traits) are highly correlated with a specific genotype.

Some are, but the reason something like 23andme hasn't revolutionized health is because the correlations for most things are weak. 23andme does a good job of showing just how weak in the results. I'm 52% likely to have the eye color I have even though both parents have that color. I'm the tallest in my generation (in my family) yet my genes are mostly for below average height.

Over time, with a lot more data and a lot more correlation analysis with health and behavioral data, there will be more actionable information for the average customer.

As it stands, 23andme is useful for the following reasons:

- the data is entertaining. It's fun to find out how much neanderthal DNA one has, etc.

- the ancestry results are interesting.

- the health results make it clear just how little impact genetics has in most aspects of health. Yes there are some big exceptions, but those are a minuscule percentage.

By joining 23andme you get a chance to watch the studies unfold and plug in your own data. For a curious, patient person, this offers a great way to make an interesting area of science a bit more salient.

Balgair 1 day ago 5 replies      
Are they still saying that by submitting a sample to them, they then own your genome and can sell it to whomever they want? I'd love to get mine sequenced and check it out a bit, but not if they are going to sell it off to a million shady companies whenever they go bankrupt (maybe 50+ years from now, but still).
phkahler 1 day ago 9 replies      
Is there any way to just have your entire genome sequenced and get all the data in a software-friendly format? At that point there could/should be some open source software for analyzing it and finding common or well understood things like this. That way the software could be updated and people could re-run their analysis to look for newly discovered stuff.

I think this would be an awesome amount of fun. I for one would be interested in looking for certain gene variants that are not mentioned at all over at 23andMe.
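For what it's worth, the raw download 23andMe already offers is close to a "software-friendly format": a plain text file with `#` header lines followed by tab-separated rsid, chromosome, position, and genotype columns. A minimal parsing sketch (the file layout is from memory and the sample data is invented, so treat both as assumptions):

```python
# Minimal sketch of parsing a 23andMe-style raw data export.
# Assumed layout: '#' comment header, then tab-separated
# rsid / chromosome / position / genotype rows. Sample data is made up.
import io

def parse_genome(text):
    """Return {rsid: (chromosome, position, genotype)} from raw-export text."""
    genome = {}
    for line in io.StringIO(text):
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and the comment header
        rsid, chrom, pos, genotype = line.split("\t")
        genome[rsid] = (chrom, int(pos), genotype)
    return genome

sample = (
    "# example export; real files carry a longer header\n"
    "rs4477212\t1\t82154\tAA\n"
    "rs3094315\t1\t752566\tAG\n"
)

genome = parse_genome(sample)
print(genome["rs3094315"])  # prints ('1', 752566, 'AG')
```

From a table like that, open source annotation tools could look up variants of interest and re-run as new associations are published, which is exactly the re-analysis loop the parent describes.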

awalton 1 day ago 0 replies      
...right as the Republicans want to remove some of the protections afforded by GINA. [1]

While I'm sure this helps 23andMe's business case, it's a seriously scary time to consider getting your genome sequenced right now.

[1]: https://www.washingtonpost.com/news/to-your-health/wp/2017/0...

Gatsky 1 day ago 1 reply      
I think people are forgetting to ask the key question - Cui bono? Who benefits?

23andme definitely benefits - all the data they have collected is very valuable, and they intend to sell it to pharmaceutical companies etc.

On the other hand, working in genomics, in my opinion the benefit to any one person having their genome tested in this manner is minimal. The simple reason is that most genetic alterations have low penetrance for phenotypes or involve complex interactions.

Paul-ish 1 day ago 0 replies      
> 23andMe is now the only company authorized by the FDA to provide personal genetic health risk reports without a prescription.

Will it be hard for competitors to get this authorization as well?

CaliforniaKarl 1 day ago 4 replies      
I wonder, does that mean anyone who's already submitted a sample to 23andme will get these reports, or is a new sample required?
soneca 1 day ago 9 replies      
My opinion is that this test is at best useless and at worst dangerous. It provides information that, in almost all cases, no one can correctly interpret and turn into actionable health advice. Not scientists, not doctors, much less consumers.

But it is sold as a cutting-edge scientific resource that will improve your life. It won't. It won't even increase the chance that you might avoid something somehow; that's the fallacy.

For our level of knowledge regarding causality in biology and genetics, I believe this test is as good as buying your astrological map.

csl 1 day ago 1 reply      
When 23andMe took down their health reports, I reimplemented most of them myself: https://github.com/cslarsen/arv/

(I.e., arv is a newer version of the older dna-traits, which includes the actual health reports: https://github.com/cslarsen/dna-traits/)

Just `pip install arv`, `python -m arv --example genome.txt` and you're good to go (it's fast as well, parses in 60-70ms).

deusofnull 1 day ago 2 replies      
Can't wait till they require these for health insurance... Seriously, is there regulation protecting people from "pre-existing conditions" discovered by their genetic analysis?
mrfusion 1 day ago 3 replies      
The FDA thinks it can decide what I can learn about my own body.
Khaine 1 day ago 0 replies      
So if someone was interested in getting a genetic test to find out ancestry and health information, what service is best?
tudorw 1 day ago 0 replies      
I found some useful information in my 23andme report regarding poor or undesirable responses to quite a few pharmaceuticals, my phase 1 metabolism is... novel...
kakarot 1 day ago 1 reply      
DNA readings will soon be the new Horoscopes...
DownSyndrome 1 day ago 1 reply      
23andMe is very offensive in assuming all humans only have 56 genes.
ZeroNet Uncensorable websites using Bitcoin crypto and BitTorrent network zeronet.io
404 points by handpickednames  3 days ago   168 comments top 29
freedaemon 2 days ago 3 replies      
Love the ZeroNet project! Been following them for a year and they've made great progress. One thing that's concerning is the use of Namecoin for registering domains.

Little known fact: a single miner controls close to 65% or more of the mining power on Namecoin. Reported in this USENIX ATC'16 paper: https://www.usenix.org/node/196209. For this reason, some other projects have stopped using Namecoin.

I'm curious what the ZeroNet developers think about this issue, and how their experience with Namecoin has been so far.

shakna 3 days ago 6 replies      
Has the code quality improved since I was told to screw off for bringing up security?

* 2 years out of date gevent-websocket

* Year old Python-RSA, which included some worrying security bugs in that time. [0](Vulnerable to side-channel attacks on decryption and signing.)

* PyElliptic is both out of date, and actually an unmaintained library. But it's okay, it's just the OpenSSL library!

* 2 years out of date Pybitcointools, with just a few bug fixes around confirmation things are actually signed correctly.

* A year out of date pyasn1, which is the type library. Not as big a deal, but covers some constraint verification bugs. [1]

* opensslVerify is actually up to date! That's new! And exciting!

* CoffeeScript is a few versions out of date. 1.10 vs the current 1.12, which includes moving away from methods deprecated in NodeJS, problems with managing paths under Windows and compiler enhancements. Not as big a deal, but something that shouldn't be happening.

Then of course, we have the open issues that should be high on the security scope, but don't get a lot of attention.


* Disable insecure SSL cryptos [3]

* Signing fail if Thumbs.db exist [4]

* ZeroNet fails to notice broken Tor hidden services connection [5]

* ZeroNet returns 500 server error when received truncated referrer [6] (XSS issues)

* port TorManager.py to python-stem [7] i.e. Stop using out of date, unsupported libraries.

I gave up investigating at this point. Doubtless there's more to find.

As long as:

a) The author/s continues to use out-dated, unsupported libraries by directly copying them into the git repository, rather than using any sort of package management.

b) The author/s continue to simply pass security problems on to the end user

... ZeroNet is unfit for use.

As simple as that.

People have tried to help. I tried to help before the project got as expansive as it is.

But then, and now, there is little or no interest in actually fixing the problems.

ZeroNet is an interesting idea, implemented poorly.

[0] https://github.com/sybrenstuvel/python-rsa/issues/19

[1] https://github.com/etingof/pyasn1/issues/20

[3] https://github.com/HelloZeroNet/ZeroNet/issues/830

[4] https://github.com/HelloZeroNet/ZeroNet/issues/796

[5] https://github.com/HelloZeroNet/ZeroNet/issues/794

[6] https://github.com/HelloZeroNet/ZeroNet/issues/777

[7] https://github.com/HelloZeroNet/ZeroNet/issues/758

roansh 2 days ago 2 replies      
We need more projects like these. Whether this particular project solves the problem of a truly distributed Internet* is beside the point. What we need is a movement, a big cognitive investment toward solving the Big Brother problem.

*I am referring to concentrated power of the big players here, country-wide firewalls, and bureaucracy towards how/what we use.

eeZah7Ux 3 days ago 2 replies      
The project looks very promising but relies on running a lot of javascript from untraceable sources in the browser.

Given the long history of vulnerabilities in browsers, trusting JS from a well-known website might be OK, but trusting JS from ZeroNet is unreasonable.

If ZeroNet could run with js code generated only by the local daemon or without js it would be brilliant.

emucontusionswe 3 days ago 2 replies      
I would recommend use of Freenet over ZeroNet. More or less the same concept/functionality, but with 15 more years of experience.

Freenet: https://freenetproject.org/

0xcde4c3db 3 days ago 3 replies      
> Anonymity: Full Tor network support with .onion hidden services instead of ipv4 addresses

How does this track with the Tor Project's advice to avoid using BitTorrent over Tor [1]? I can imagine that a savvy project is developed with awareness of what the problems are and works around them, but I don't see it addressed.

[1] https://blog.torproject.org/blog/bittorrent-over-tor-isnt-go...

avodonosov 3 days ago 1 reply      
As for "uncensorable": if the content is illegal, the torrent peers may be incriminated for distributing illegal content.
dillon 2 days ago 2 replies      
There's also GNUNet: https://gnunet.org/

As others have mentioned, there's also FreeNet: https://freenetproject.org/

I haven't looked deep into any of these projects, but I do think they are neat and hoping at least one of them gains a lot of traction.

ThePadawan 3 days ago 1 reply      
Cannot access this at work, because zeronet.io is flagged for being involved in P2P activity.

I cannot help but feel disappointed and unamused.

Kinnard 3 days ago 1 reply      
In other news, ZeroNet has been banned from giving its TED talk: https://news.ycombinator.com/item?id=14039219
jlebrech 3 days ago 2 replies      
I always wondered why you couldn't just download a torrent of torrents for the month.
lossolo 3 days ago 1 reply      
There is a single point of failure: kill the tracker and you kill the whole network. You can also get all the IPs visiting a certain site from the tracker; it's not so secure if someone is not using Tor.
daliwali 3 days ago 1 reply      
I don't see how this could decentralize web applications though. Wouldn't each client have to be running the server software? Someone has to pay for that, too.
wcummings 3 days ago 1 reply      
I thought it was pretty easy to disrupt / censor torrents, hasn't that been going on for a while?
rawells14 3 days ago 0 replies      
Sounds incredible, we'll probably be seeing much more of this type of thing in the near future.
vasili111 3 days ago 1 reply      
Lack of anonymity in ZeroNet is a big problem.
jwilk 3 days ago 2 replies      
> Page response time is not limited by your connection speed.

Huh? What do they mean?

hollander 3 days ago 2 replies      
Several years ago I had Tor running on a server at home. It was a regular Tor node, not an exit node. Later I was put on a blacklist because of this. What is the risk of using this?
DeepYogurt 3 days ago 1 reply      
Presumably you only download the site you want when you visit it. If that's the case then can you view revisions of the web sites or do you only have the current copy?
mtgx 3 days ago 1 reply      
Speaking of which, what's the progress on IPFS?
jlebrech 3 days ago 0 replies      
a youtube replacement in zeronet would rock
arcaster 1 day ago 0 replies      
This project is cool, but I'm more interested in future releases by the Askasha project.
HugoDaniel 3 days ago 1 reply      
It would be great if a simpler webtorrent version was available just for fun.
vitiral 2 days ago 1 reply      
This seems similar to ipfs. What are the main differences?
thriftwy 3 days ago 1 reply      
This is what I've waited for for quite some time.
digitalzombie 2 days ago 0 replies      
Anybody else read this as Netzero, the free internet dial-up from the 90s?
Jabanga 3 days ago 1 reply      
A little known fact: the Namecoin blockchain's cost-adjusted hashrate [1] is the third highest in the world, after Bitcoin and Ethereum, making it unusually secure given its relative obscurity (e.g. its market capitalisation is only $10 million).

[1] hashrates can't be compared directly due to different hashing algorithms having different costs for producing a hash.

mirap 3 days ago 3 replies      
zeronet.io is hosted on vultr.com. Why don't they use ZeroNet to deliver their own website?
tfeldmann 3 days ago 3 replies      
No comment about ZeroNet itself, but am I alone in the opinion that this website takes grid layout too far? It looks outright cluttered and overloaded.
Marc Andreessen: Take the Ego Out of Ideas stanford.edu
302 points by allenleein  23 hours ago   215 comments top 30
6stringmerc 22 hours ago 6 replies      
>So if technological change were going to cause elimination of jobs, one presumes we would have seen it by now.

...considering this statement was delivered while the US Workforce Participation is at 30+ year lows while productivity and technological change has made significant inroads during that time (ex: Macintosh 512k vs. iPhone 7), I think he's missing a large chunk of the, uh, big picture.

Then, contrast one of his well reasoned and very telling thoughts about the future:

>All of a sudden you can have the idea that an hour-long commute is actually a big perk because instead of driving and having to sit and focus and lurch through traffic, what if your car is a rolling living room? What if you get to spend that hour playing with your kid or reading the news or watching TV or actually working because you don't have to worry about driving?

Because in the United States, we should be working even while we are getting to work, because we don't work enough? SMDH. To me, the Working Class has plenty of reason to be cynical about this vision of the future..."playing with your kids in the car" time or not.

mvpu 21 hours ago 7 replies      
"Take the ego out of ideas" is sound advice for investors, not entrepreneurs. Ego is a loaded word, but if you define it, in this context, as an irrational belief that you are right and the world will catch up, then it's essential for every entrepreneur. "New ideas" get no support. You're the only support. You have to strongly believe that the world will get there, do whatever it takes to convince them to get there, and survive long enough to bank on that moment. Without that ego in your idea, you probably won't survive long enough.
blahman2 1 hour ago 0 replies      
1 hour commute is fine? No. There were all these visions about how with the advent of the Industrial revolution people would have to work half a day because that's how long it would take them to finish their norm. Instead, they were asked to produce twice as much.

Now we have our 'great' thought leader try to convince us about the virtues of hard work and 1 hour commute again.

How about "Put the type of Ego in your ideas that will remove the need for you to have a job in a few years"? Because jobs will be going away, and we don't need an even more hard core rat race in the US.

6d6b73 3 hours ago 0 replies      
"Every year in the U.S., on average, about 21 million jobs are destroyed and about 24.5 million are created," Andreessen says.

FFS.. No. They are not destroyed and created. These jobs are just shifted from one company to another, and most of them are seasonal, or part-time jobs.

d--b 21 hours ago 0 replies      
I would say more broadly "take the ego out of work".

In tech, we meet so many people who are emotionally attached to their work, who would treat their production as 'their baby'. This is a terribly common counterproductive bias. It prevents from:

- taking criticism productively: people "put their soul" in their work, and then someone tells them it's perhaps not the best way. Do hear them.

- assessing one's position objectively: people who are attached to their work often conflate their vision with the reality of the work. They tend to minimize weak points and emphasize strong points.

- delegating your job away: people infatuated with their work have a hard time giving it away, convinced the delegate will necessarily screw it up.

That should be rule number 0 of all jobs: Be invested in the mission, not in the solution

BjoernKW 21 hours ago 1 reply      
> All of a sudden you can have the idea that an hour-long commute is actually a big perk because instead of driving and having to sit and focus and lurch through traffic, what if your car is a rolling living room?

This is ridiculous. That's what our supposedly most innovative thinkers can come up with? Turning your car into a living room so we can have even more commuting (with all the wonderful side effects that come with it ...)?

What about eliminating the need to commute in the first place?

wyc 16 hours ago 1 reply      
Re: tech creates jobs, Tyler Cowen's Average is Over has an interesting passage about automation:

"Keeping an unmanned Predator drone in the air for twenty-four hours requires about 168 workers laboring in the background. A larger drone, such as the Global Hawk surveillance drone, needs about 300 people...an F-16 fighter aircraft requires fewer than 100 people for a single mission."

It's well known that the industrial revolution created countless new jobs that were unimaginable at the time, a sentiment echoed in The Second Machine Age by Brynjolfsson. But how do you pick the winners that will bring the most jobs? Some say disruptive innovation, but it still seems like an open question.

davidf18 16 hours ago 1 reply      
> "Self-driving cars, for example, could potentially put 5 million people involved in transportation jobs out of work....."

On a work day, NYC subway provides 6 million trips. Think of all of the car drivers it is displacing. And then there are the buses! And that is in NYC alone. Just think of all of the drivers mass transit has already displaced throughout the nation!

Then there is intercity transit: think of all of the drivers displaced by planes, trains, and buses!

Self-driving trucks? Trucks have been displaced by trains, barges, container ships, ....

Cars, even electric ones, create air pollution which impacts health as well as greenhouse gas. Electric cars are charged from electric power plants -- most of the US electricity is generated by carbon-based fuels -- coal and gas.

Using Via which transports multiple passengers [part of Manhattan, part of Brooklyn, Chicago, Washington DC] (or Uber pool for example) at least helps to reduce air pollution and greenhouse gas compared with single passenger vehicles that at least helps to reduce air pollution / greenhouse gas.

e2e4 15 hours ago 0 replies      
Commuting to work with self driving car sounds like a faster horse carriage.

I wonder why isn't telecommuting / virtual presence a big part of his predictions.

bkohlmann 14 hours ago 0 replies      
I was fortunate enough to be the one to interview him for this event. He's a remarkable intellect and kept me on my toes the entire time!
nadermx 22 hours ago 0 replies      
I guess with the growing remote work movement this becomes harder and harder to do, since you spend less time with the peers you can mentally "argue with" and have less time around them to get a sense of how they think.
0xCMP 18 hours ago 3 replies      
I do like the idea of almost a rolling office. I've always wanted a sort of vagabond life fueled by tech. There's so much out in the world and so many people. It's a shame that we're often stuck in the same places for such long periods of time.

If I become a remote/work-from-home/smb-owner I'd love to just be in a self-driving car, doing stuff on the go and also changing where I am all the time.

zackmorris 21 hours ago 1 reply      
"Take the Selfishness out of Profit"
lappet 19 hours ago 0 replies      
> "Most of the good ideas are obvious, Andreessen says. They just might not work right away"

That seems like a gross simplification of the way things usually work. Saying good ideas are obvious sounds a little egoistic, which seems ironic, considering the title

ashray5 19 hours ago 0 replies      
All of a sudden you can have the idea that an hour-long commute is actually a big perk because instead of driving and having to sit and focus and lurch through traffic, what if your car is a rolling living room? What if you get to spend that hour playing with your kid or reading the news or watching TV or actually working because you don't have to worry about driving?

I suppose one can find these answers from people who commute by company shuttles, trains or subways.

matt_wulfeck 21 hours ago 12 replies      
I'm getting tired of all of the hot air coming from these tech oligarchs. They're so enriched by a tech boom and a decade of easy money that we worship at their feet. Their vision and goal for the future is simply more money for themselves at the expense of others.

"Guys look! An hour long commute is actually a good thing because you can spend it with your kids!" Why is it so hard to spend time with our kids now?!

I know that sounds harsh, but we seriously need to stop the hero worship in SV culture and begin building a society that benefits everyone, not a society that works itself to the bone just to eat the cake of a larger corporation and enrich the early investors. They will just as quickly dilute your quality of life as they will dilute the shares in your company.

0xCMP 18 hours ago 0 replies      
We can't look/talk to each other at lunch without staring at our phones why do we think we're going to spend quality time with people in a self-driving car? The car isn't a solution to that problem.
wonderous 22 hours ago 0 replies      
Video & Transcript: Marc Andreessen on Change, Constraints, and Curiosity


raspasov 7 hours ago 0 replies      
Put science into ideas.
justinmk 22 hours ago 1 reply      
> Marc Andreessen: Take the Ego Out of Ideas

Shouldn't it be:

> Anonymous: Take the Ego Out of Ideas

smallboy 21 hours ago 0 replies      
Wasn't this the guy who said India should still be colonized? Not taking advice from him.
omegaworks 20 hours ago 0 replies      
Mark Andreessen: Get ready for White Flight 2

So much for the short-lived renaissance of the city. Will millennials still want short commutes when they can pass the time in their cars?

graycat 14 hours ago 0 replies      
So, Andreessen is talking about "ideas" -- hmm ....

His ideas seem to be (A) some large changes in the economy and society from (B) some exploitations of largely existing computer technology to meet some want/need previously unnoticed or infeasible to meet.

But, even for just (A) and (B), there is potentially MUCH more potential in ideas that Andreessen seems to ignore.

An example was Xerox: Copying paper documents was important. The main means was carbon paper. Xerox did quite a lot of engineering research based on some early research, IIRC, at Battelle. The result was one of the biggest business success stories of all time.

Andreessen doesn't discuss research ideas -- how to have them, pursue them, apply them, evaluate them, etc.

wonderous 22 hours ago 0 replies      
RichardHeart 17 hours ago 0 replies      
Whoever wrote the title didn't follow its advice.
debt 22 hours ago 5 replies      
he's a media vc. facebook is basically his crown jewel and that's it. facebook/media is cool i guess, but i don't see how he knows much about anything else, such as robotics or ai.

just look at andreessen horowitz's investments. many are largely media companies (buzzfeed, stack exchange). they've tried doing finance, which is a much bigger market, but clinkle clearly imploded and coinbase is probably next (literally, transfers went down the other day, eek). so fb is still all he's got.

he hasn't invested in any big winners yet beyond fb/media. so why should i listen to this guy's advice (unless of course i'm building a media company)?

underwater 13 hours ago 0 replies      
Is the attribution of the quote in this headline meant to lend it extra weight? Rather ironic.
LordHumungous 18 hours ago 1 reply      
good_vibes 22 hours ago 2 replies      
That picture makes me want to not keep reading but then I remember Netscape.
spectistcles 19 hours ago 0 replies      
How about we take the ego out of Marc Andreessen
Uber said to use sophisticated software to defraud drivers, passengers arstechnica.com
332 points by dralley  1 day ago   220 comments top 42
tyre 1 day ago 17 replies      
I don't see the scandal.

1) Uber's upfront estimate is based on a naive calculation of getting from A -> B. From a software perspective, that makes sense. The consumer hasn't even committed to riding, so let's just toss out a ballpark figure.

2) If the consumer looks at the figure and says, "Yes, that's reasonable for transportation from A -> B", which they indicate by clicking "Request Ride", then they are agreeing to pay that price for the service.

3) The rider can verbally request a different route once in the Uber.

4) The driver is paid based on minutes and miles, via some formula that they've agreed to. The rider is charged based on an up-front calculation, which they can decide if it is worth it or not.

It sounds like the lawsuit is alleging that the rider is being defrauded by being taken on a different route than the one displayed at time of purchase.

I think this is silly because, to my knowledge, everyone taking an Uber is paying for the transportation and not any particular route. I.e. being taken on a specific route isn't what the rider is agreeing to pay for. Also, as noted in (3), the rider is always free to change the route.

Additionally silly because the rider seems to be alleging that they were defrauded by being taken by a more efficient route. There just doesn't seem to be any "harm" in what's happening here. I can understand the case if the user agreed to go from San Francisco down to San Jose, based on a route straight down the 101 highway, then, once they got in, was driven to San Jose through Los Angeles.

lithos 1 day ago 5 replies      
Uber has killed so much goodwill and damaged its reputation so thoroughly that no one will be surprised by almost any accusation directed at Uber.

I know my first thought was "not surprising", and I imagine others will think the same.

Digit-Al 1 day ago 0 replies      
To me it seems the disconnect here is between the fixed fee on one side (the passenger) and the flexible fee on the other side (the driver).

Uber is, sort of, acting as an insurer and underwriting the cost of the journey. The passenger pays a fixed fee based on a projection of how much the route will cost, and the driver gets paid by how much it actually costs in driving time and distance. If there is some sort of unexpected delay and the journey takes longer, then, presumably, the driver will be paid more than the passenger paid, and Uber will lose out.

As with all insurers Uber charges a higher initial charge to act as a buffer and minimise the chances of losing money on the journey.

I can't really see any way of getting round this as long as the passenger pays a fixed price and the driver is paid a flexible fee.
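A toy model of that fixed-fare/flexible-payout split, with entirely made-up rates and a made-up buffer formula (Uber's real pricing is not public):

```python
# Toy model of the fixed-fare / flexible-payout split described above.
# All constants and the fare formula are invented for illustration.

PER_MINUTE = 0.20   # driver payout rate, $/minute (hypothetical)
PER_MILE = 1.00     # driver payout rate, $/mile (hypothetical)
BUFFER = 1.25       # margin multiplier on the projected cost (hypothetical)

def upfront_fare(est_minutes, est_miles):
    """Fixed price quoted to the rider, based on the projected route."""
    return BUFFER * (est_minutes * PER_MINUTE + est_miles * PER_MILE)

def driver_payout(actual_minutes, actual_miles):
    """What the driver earns, based on the route actually driven."""
    return actual_minutes * PER_MINUTE + actual_miles * PER_MILE

fare = upfront_fare(20, 8)        # rider agrees to this before the trip
payout = driver_payout(15, 6)     # a shorter route was actually driven
print(f"rider pays {fare:.2f}, driver gets {payout:.2f}, "
      f"margin {fare - payout:.2f}")
# prints "rider pays 15.00, driver gets 9.00, margin 6.00"
```

In this sketch the buffer absorbs trips that run long, but whenever the actual route comes in shorter than the projection, the difference lands on Uber's side of the ledger rather than the driver's, which is the asymmetry the lawsuit complains about.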

mabbo 1 day ago 1 reply      
Both driver and passenger think they know the full truth of the matter for the financial transaction they're agreeing to, but they don't. There's implicit dishonesty in that, and when you combine dishonesty with money we call it 'fraud' usually.

But let's set aside the question of whether it was legal. Was it moral?

Software like this doesn't fall from the sky: management approved it, software teams wrote it and maintain it, and system tests probably exist to validate that it works. How do those developers feel okay about this? How do they not feel like they're cheating people out of money? When your Mom hears about it and asks if you were part of it, will you spend 20 minutes giving a long-winded answer about how it was actually not a bad thing? That's a bad sign, man.

I'm reminded of the scene from 'Clerks' discussing Contractors[0]


tmh79 1 day ago 3 replies      
I think people misunderstand upfront fares. It's like buying an airplane ticket: the airline charges passengers the appropriate price to fill the plane, and it pays pilots a salary. Pilots who fly more profitable routes don't get paid bonuses because their passengers pay more. Same thing with UPS drivers, who get paid a fixed amount to drive packages around. The concept of "up front fares" is widely practiced in logistics companies, and is probably part of the transition as ride-sharing companies become less like taxis and more like UPS/airlines.
ihsw2 1 day ago 0 replies      
The fare discrepancy can extend beyond longer/shorter route calculation -- there is also the issue of surge price disparity between driver and passenger. For example, the user would see 3x surge pricing while the driver would see 2x surge pricing, where the user is charged for 3x but the driver is paid for 2x. This is pure speculation and I have not witnessed this behavior but it's another way things can go wrong in Uber's favor.

Uber might be able to defend itself by saying that the data provided to the driver and passenger differ because of misconfigured caching and stale data being served to either party, but that's a moot point if Ars Technica has concrete and verifiable claims of methodical and programmatic fraud. Personally, I have witnessed being billed for $0 in-app after taking a round trip (effectively zero distance traveled) while the email notification showed the proper billing value, and there may be more instances of this "confusion."

basseq 1 day ago 1 reply      
As an Uber user, I'm unaware of this "upfront" pricing model. I thought the price charged was based on the actual time/distance (which, incidentally, they email me on the receipt). I know I can estimate the trip cost, but I thought that was just an estimate.

Am I wrong? What is this "upfront" pricing?

And is the reverse true? E.g., can I commit to some fixed price and then have the driver take some crazy route?

chickenbane 1 day ago 1 reply      
A bit off topic, but I was just in New Orleans for a fun trip. I normally use Lyft, but apparently Lyft pickups were not allowed at the airport so I used Uber (my last Uber ride was months ago).

After waiting at least 15 minutes in the pickup spot, my driver cancelled. Annoyed, I requested another Uber ride (which went fine). However, I was shocked to learn that Uber had still charged me a cancellation fee for the first ride and continued to argue it was appropriate when I protested.

I finally resolved it when I continued to press the issue, but I found the whole scenario incredibly customer-hostile. Along with the litany of gross Uber stories, I will continue to prefer Lyft!

vinay_ys 1 day ago 0 replies      
UberGo is the most prevalent option in India. In UberGo you are shown a fixed final price at the time of starting the trip. This is supposedly calculated based on the best route you will take from point A to B, where "best" means cheapest: trading off short/long routes against traffic congestion on those routes that costs time.

But once the rider gets into the car and the driver starts Google Maps, it can show a different route due to changing traffic conditions. Or the driver can refuse to follow Google Maps and use his own judgement on which route is better (for him). In either case, Uber should be transparent and show both rider and driver the difference between what was initially calculated and what the trip actually cost. But Uber does not do this. Instead, they also add another arbitrary/opaque surge multiplier. If at all they have to do any fraud, they are better off doing that "fraud" by showing a different multiplier to rider vs driver. Consumer protection agencies should insist that Uber at least be transparent and predictable in how they determine the surge multiplier, and that their distance/time metering is accurate.
enknamel 1 day ago 2 replies      
This just sounds like Uber quickly charges you for the worst case since if they charge you for the best case and things go wrong, Uber loses. No one can know what route will even be possible given how chaotic traffic and closures can be. Then the driver gets paid by whatever route is actually taken. I don't really see this as an issue at all.
itchyjunk 1 day ago 2 replies      
"27. In the overwhelming majority of transportations, the upfront price is the amount that a User is ultimately charged for the transportation services by the driver.28. When a driver accepts a Users request for transportation, the Users final destination is populated into the drivers application and the driver is providedwith navigation instructions directing him or her to the best route to the Usersdestination"


It seems like the User sees a price X for a ride and accepts it. The driver might see a price X-y if conditions have changed. Doesn't that imply the User agrees to price X and the driver to price X-y? Uber might be able to adjust the price at the end, but can they be sued if both parties agree to it beforehand?


"36. Had Plaintiff and the Class known the truth about the Uber Defendants deception, they would never have engaged in the transportation or would havedemanded that their compensation be based on the higher fare."


I am curious how they reached the conclusion that Uber was intentionally doing this. Did a bunch of drivers coordinate experiments with riders to see if there were price differences? Did they just log out and log back in with different accounts to see the price differences?

I am neutral on Uber, so I feel it's natural to question whether Uber is seen as an easy target to go after since they are already in a legal swamp. IANAL, so I would love to read what people familiar with the law have to say.

davidf18 1 day ago 2 replies      
There should be an app where different services (Uber, Lyft, Gett, Via, and Arro (which is Yellow Cabs)) bid on a ride and the lowest bid gets it. That would help fix this problem.

In NYC I had noticed that Uber was charging as much as Yellow Cab for some of the trips and I was surprised about their algorithm. Now I understand why.

legulere 1 day ago 1 reply      
In the thread about Uber retreating from Denmark, people asked why taximeters are sensible regulation. This is the reason. We need a trustable third party that ensures fair transactions. Uber cannot be this because they have their own interests. Regularly checked taximeters can ensure this at least partly.
lancewiggs 1 day ago 0 replies      
Uber has deliberately fostered a culture that thinks it's acceptable to rip off people and institutions. We should never support companies with behaviour like this.
jeffdavis 1 day ago 1 reply      
If there is a sudden traffic jam and it takes twice as long, then presumably uber must pay the extra money to the driver. So are the drivers asking for some kind of "flat rate or variable, whichever is greater" contract?
throwaway-blue 1 day ago 0 replies      
FWIW, Lyft's upfront pricing works the exact same way.

This is a non-story and just good product management. This feature solves the problem of presenting a surge multiplier to the customer. With a surge multiplier the customer has to guess how much it's going to cost. With this they just see the price and figure out if it is worth it or not. Reducing purchase friction and uncertainty increases demand and is good for drivers.

Plus, both Uber and Lyft are assuming risk with upfront pricing. They are guaranteeing a price. Sometimes it will be higher and sometimes it will be lower. The driver is accepting a different payment arrangement based on distance and time.

Both companies are classic middlemen and taking advantage of consumer surplus.

anigbrowl 1 day ago 0 replies      
If the allegations in this suit are true, fuck Uber forever. They should go out of business, their assets should be stripped from the investors and redistributed to the users, and Kalanick and a bunch of other people should go to jail for fraud. There is no way to overlook the persistent structural problems displayed by this company. Some things could be matters of opinion (like the values of their corporate culture and so on), but there are multiple instances by now of Uber actively choosing to circumvent laws or deceive people on a systematic rather than an occasional or ad-hoc basis. I've rarely seen such a clear chase for revocation of a business license.
mrow84 1 day ago 0 replies      
Here's an article about this issue from a couple of months ago, for anyone looking for additional information: http://therideshareguy.com/how-to-beat-ubers-upfront-pricing...
mtgentry 1 day ago 4 replies      
If they can prove it, this is super shady on Uber's part. Reminds me of Michael Bolton's money making scheme in Office Space.
sharemywin 1 day ago 1 reply      
If their business model is based on breaking local laws, why would anyone expect them to deal ethically with anyone?
Wissmania 1 day ago 2 replies      
I am not sure about the legality here, but I will say that I see this arrangement as good for both the driver and the passenger.

As a passenger I can know exactly what I'm paying ahead of time, and don't have to worry about my driver intentionally increasing the time/distance of a trip to charge me more.

As a driver, you are compensated on a time/distance basis, which means you don't have to worry as much about special requests/traffic/other issues messing with what you earn.

Uber is the one accepting the risk here, which is the chance that the payout to the driver exceeds the flat rate the passenger paid because of an extra-long trip.

mark212 1 day ago 0 replies      
This is all very interesting, but Uber has pretty robust arbitration and class action waiver clauses in their contracts, both with the users and the drivers. Sadly, this will go to arbitration on an individual basis pretty quickly. I haven't seen Uber lose a motion to compel (except once in S.D.N.Y., but it was quickly reversed on appeal).
pnathan 1 day ago 1 reply      
If, at the same time, driver and customer are being shown different prices, as the article alleges, then that is a problem.

If the price is an estimate, and the estimate is revised based upon actual time and distance, then that is within reason.

That said, this problem would be much more tractable if the drivers were employees of Uber. Perhaps that's what should be done? :)

ashish10 1 day ago 0 replies      
Hmm. So in my area, Lyft is consistently 20-30% more expensive than Uber. I wonder how much these Lyft guys are stealing?
comments_db 1 day ago 0 replies      
At this rate, soon Uber will be a verb. Unfortunately, associated with all the wrong things.

a la "...just don't uber it..."

socrates1998 1 day ago 0 replies      
The issue is with the agreement with Uber's drivers: if Uber is changing the terms of the relationship without letting them know and agree to it, that's a major violation of the contract, and Uber could see a massive labor lawsuit.
JCzynski 1 day ago 0 replies      
This seems entirely appropriate and not fraud. For a flat rate, you're charged based on a somewhat longer, non-ideal route, rather than the optimal route which you'll take if everything goes well.
pizzetta 1 day ago 0 replies      
I have to think this is not their mode of operation, but if it is, what the hell, Uber? If they actually do something like this as matter of course, goodbye. That's just unacceptable and very dirty.
savanaly 1 day ago 2 replies      
Are we worried that shady behavior that hurts consumers and riders might become the new equilibrium? I don't see how it could be. Whatever Uber's machinations to artificially alter prices, no matter how sneaky, at the end of the day they'll lose drivers and riders to competitors if their margins drift too far from the economic cost of being the middleman. A driver doesn't need to know in what way he or she is being lied to or manipulated to know that they make less per hour driving for Uber than for [Uber's next best competitor]. Thus, I don't see how there could be an equilibrium where Uber is overcharging and still has a significant portion of the market.
AsyncAwait 1 day ago 0 replies      
I have no sympathy for Uber, but it does start to feel like someone is out to get them, the guys just can't seem to catch a break.
lloydatkinson 1 day ago 0 replies      
jesus christ hacker news has a fucking erection for anything anti-uber, get over yourselves, they provide a taxi service that actually helps people
employee8000 1 day ago 3 replies      
No, this doesn't happen. Some people may think that Uber charges the rider a different rate than the driver receives, but it isn't the case.

A driver I had was sure this happened and asked me that on a fairly expensive trip (I think it was around $80).

I told him this didn't happen and I gave him my personal phone number and the amount of fare I was charged. I told him to check his daily numbers and if he didn't see this charge then to call me immediately. He never did.

Macsenour 1 day ago 0 replies      
I'm no lawyer, but I understood "fraud" to mean misrepresentation. Who is being defrauded? The passenger pays one price; the company pays the driver and takes a cut of that fee. Is the company defrauding the driver by not telling him the full fee the passenger is paying? If they word it as "a % of the fee and other fees," I don't see fraud. Not a lawyer, so feel free to correct me.
elif 1 day ago 0 replies      
So, to both parties, they are under-promising on an unknowable future event.

Isn't that better than over-promising?

I don't see the issue here.

dullgiulio 1 day ago 0 replies      
Updated description for a start-up: "We are like Uber, except for the lawsuits."
Rainymood 1 day ago 0 replies      
We walk a thin line between deception and incentives.
awqrre 1 day ago 0 replies      
They probably will blame this on previous executives...
elastic_church 1 day ago 0 replies      
All it takes is a tiny line in the terms of use stating that fares can differ between the client-side apps.

The user isn't paying the driver, they are paying uber. The driver is paid by uber under a separate agreement.

So it honestly doesn't matter.

partycoder 1 day ago 0 replies      
I heard that Uber is more likely to give you surge pricing if you are running out of battery. Might be a rumor.
cwyers 1 day ago 0 replies      
Ars Technica Uses Sensationalist Headlines And Shallow Understanding Of Subject Matter To Defraud People Into Reading Their Articles
rdiddly 1 day ago 1 reply      
One good way to make sure the fare matches the distance would be to install some sort of device that measures mileage. The driver could start the device when the ride starts and turn it off when it's over. It could even calculate and display the fare for both parties!

Of course that kind of transparency wouldn't be possible unless all the vehicles had the device. So you'd probably need a licensing system for them. Which in turn could be overseen by a commission made up of industry reps and local government officials to ensure fairness and local control.

Wild ideas man, wild ideas.

bdrool 1 day ago 0 replies      
Oh, come on.

I doubt it's all that sophisticated.

Fact Check now available in Google Search and News blog.google
289 points by fouadmatin  22 hours ago   240 comments top 52
jawns 21 hours ago 12 replies      
I'm a former journalist, and one of the mistakes I often see people make is to either give too much or not enough credence to whether the facts in a news story (or op-ed) are true.

Obviously, if you disregard objective facts because they defy your assumptions or hurt your argument, you're deluding yourself.

But an argument that uses objectively true and verifiable facts may nevertheless be invalid (i.e. it's possible that the premises might be true but the conclusion false). Similarly, a news story might be entirely factual but still biased. And in software terms, your unit tests might be fine, but your integration tests still fail.

So here's what I tell people:

Fact checking is like spell check. You know what's great about spell check? It can tell me that I've misspeled two words in this sentance. But it will knot alert me too homophones. And even if my spell checker also checks grammar, I might construct a sentence that is entirely grammatical but lets the bathtub build my dark tonsils rapidly, and it will appear error-free.

Similarly, you can write an article in which all of the factual assertions are true but irrelevant to the point at hand. Or you can write an article in which the facts are true, but they're cherry-picked to support a particular bias. And some assertions are particularly hard to fact-check because even the means of verifying them is disputed.

So while fact checking can be useful, it can also be misused, and we need to keep in mind its limitations.

In the end, what will serve you best is not some fact checking website, but the ability to read critically, think critically, factor in potential bias, and scrutinize the tickled wombat's postage.

endymi0n 21 hours ago 6 replies      
The problems aren't facts. The problems are what completely distorted pictures of reality you can implicitly paint with completely solid and true facts.

If 45 states that "the National Debt in my first month went down by $12 billion vs a $200 billion increase in Obama first mo." that's absolutely and objectively true, except that Obama inherited the financial meltdown of the Bush era while Trump inherited years of hard financial consolidation (and any legislation has a lag of at least a year before it trickles down into any kind of reporting at government scale).

Fact-checking won't change a thing about spin-doctoring. At least not in the positive sense.

pawn 21 hours ago 3 replies      
I think this has huge potential for abuse. Let's say politifact or snopes or both happen to be biased. Let's say they both lean left or both lean right. Now an entire side of the aisle will always be presented by Google as false. I know that's how most people perceive it anyway, but how's it going to look for Google when they're taking a side? Also, I have to wonder whether this will flag things as false until one of those other sites confirms it, or does it default to neutral?
provost 21 hours ago 2 replies      
I want to think about this both optimistically and pessimistically.

It's a great start and I hope it leads to improvement, but this has the same psychological effect as reading a click-bait headline (fake news in itself) unless readers dive deeper. And just as with Wikipedia, the "fact check" sites could be gamed or contain inaccurate information themselves. Users never ask about the primary sources, and instead just read the headline at face value.

My pessimistic expectation is that this inevitably will result in something like:

Chocolate is good for you. - Fact Check: Mostly True

Chocolate is bad for you. - Fact Check: Mostly True

Edit: Words

sergiotapia 20 hours ago 3 replies      
Snopes and Politifact are not fact-checking websites.

>Snopes' main political fact-checker is a writer named Kim Lacapria. Before writing for Snopes, Lacapria wrote for Inquisitr, a blog that, oddly enough, is known for publishing fake quotes and even downright hoaxes as much as anything else.

>While at Inquisitr, the future fact-checker consistently displayed clear partisanship. She described herself as openly left-leaning and a liberal. She trashed the Tea Party as "teahadists." She called Bill Clinton "one of our greatest presidents."


I think fact checking should be non-partisan, don't you?

allemagne 21 hours ago 1 reply      
I think that politifact, snopes, and most fact-checking websites I'm aware of are great and everyone should use them as sources of reason and skepticism in a larger sea of information and misinformation.

But they are not authorities on the truth.

Google is not qualified to decide who is an authoritative decider of truth. But as the de facto gateway to the internet, it really looks like they are now doing exactly that. I am deeply uncomfortable with this.

artursapek 21 hours ago 0 replies      
I see Google having good intentions here, but I fall back to my previous sentiment on trying to assign "true/false" for all political stories and discussions.


tabeth 21 hours ago 1 reply      
Fact checking is irrelevant. What's necessary is education. Just like spellcheck will not allow you to magically compose elegant prose, fact check is not going to prevent people from being misled. Notice how both of these "problems" have the same solution. In fact, fact check can be counterproductive as people now sprinkle their articles with irrelevant facts.

Education is the solution to all social problems.

pcmonk 21 hours ago 1 reply      
What I wish they would do is use their fancy AI to put in a link to the original source. Tracking down original sources is extremely tedious, but it generally gives you the clearest idea of what's actually going on.
throwaway71958 20 hours ago 0 replies      
This is incomplete: they need to also include the political affiliations of owners of "fact check" sites, and perhaps also FEC disclosure for donations above threshold, and sources of financial support. I.e. this site comes from PolitiFact, but its owner is a liberal and he took a bunch of money from Pierre Omidyar who also donated heavily to the Clinton Global Initiative. Puts the fact checks in a more "factual" light, IMO. Fact check on the fact check: http://www.politifact.com/truth-o-meter/article/2016/jan/14/...

Things have gotten hyper-partisan to the extreme in the past year or so, so you sometimes see things that are factually true rated as "mostly false" if they do not align with the narrative of the (typically liberal) owners.

DanBC 21 hours ago 2 replies      
I'd be interested to see how it copes with UK newspapers.

sweetishfish 21 hours ago 2 replies      
Who fact checks the fact checkers?
scottmsul 21 hours ago 2 replies      
A better idea would be to look for disagreements. Given a news article or claim, are there any sources out there which disagree? Then the user could browse both claims and decide for himself.
ksk 20 hours ago 1 reply      
Are we in the twilight zone? An advertising company fact checking political discourse? Would google apply the same fact check to their own company?

"Does Google dodge taxes"

civilian 21 hours ago 0 replies      
So I mean, this is just a metadata tag. Anyone can make one. I'm looking forward to Breitbart & HuffPo abusing this...

I think it would be interesting to collect a list of websites that disagree on a claim review.

gthtjtkt 15 hours ago 0 replies      
Snopes and Politifact are abject failures. Nothing but glorified bloggers who have declared themselves the arbiters of truth.

Even Rachel Maddow has called them out on numerous occasions, and she was rooting for the same candidate as them: http://www.msnbc.com/rachel-maddow-show/watch/politifact-fai...

smsm42 19 hours ago 0 replies      
Reading the article, it looks like news publishers can now claim that their articles were fact-checked, or that a certain article is a fact check of another one, using special markup. They also say the fact checks should adhere to certain guidelines, but I don't see how it would be possible for them to enforce any of those guidelines. It looks like just a self-labelling feature, with all the abuse potential inherent in that.
forgotpwtomain 21 hours ago 1 reply      
This is a bad slippery slope - it suggests that a 'little sponsored banner' (which google chooses) can waive the necessity of being diligent in thought.
josefresco 19 hours ago 0 replies      
What if I told you (cue the Morpheus meme), that people consuming the "fake news" don't care that it's fake? It's called confirmation bias and winning. Education isn't going to solve this issue, you can't forcibly educate people nor can you change their core "values" and their determination to be "right".

The only "education" that I can envision working is quantifying the real-world-impact of their votes on the personal level. Ex: Your health insurance was cancelled? The representative you voted for caused that. This unfortunately is normally executed with a partisan goal, however should be applied as a public service to all Americans.

orangepenguin 21 hours ago 0 replies      
There is obviously a lot of debate on whether or not fact checking is accurate and useful. I think simply presenting a fact check will help people think more critically about headlines they see every day. Like "Mythbusters Science". It's not perfect, but it helps people to think.

Relevant: https://xkcd.com/397/

narrowrail 16 hours ago 0 replies      
Who will fact check the fact checkers?

Well, perhaps these trusted sources should implement a system similar to Quora/StackExchange but for opposing arguments?

Lots of comments call into question the biases of sites like Snopes/Politifact/etc., and allowing some sort of adversarial response would help counter claims about 'leftists wanting to control our minds.'

Maybe it's just a widget at the bottom of a fact check post leading to a StackExchange'd subdomain. A wiki or subreddit could work as well. Anyone looking for a side project?

oldgun 20 hours ago 0 replies      
Political debates aside, does anyone else think this 'ClaimReview' schema put to use by Google is a step toward the application of the Semantic Web? There might be something more here than just 'a new app by Google'.
okreallywtf 19 hours ago 0 replies      
I'm amazed at how much cynicism I'm seeing here about this. People just keep repeating what boils down to the same premise: complete objective truth basically doesn't exist. Truth is messy, tricky, subjective business. This is not new; this is just how the world is. Truth and understanding are best-effort and always have been, so why is a tool that attempts to combat some of the most egregious falsehoods even remotely a bad thing? Nobody should claim that it's bulletproof, but I'm not seeing anyone really do that. The problem is that some of us never deal in absolutes, we see nuance in everything (climate science, economics, political science), but there are others who do deal in absolutes and make a killing doing so. Sitting around having the same debate over and over about facts and truth doesn't do anything to tackle the problem.

My rule of thumb is that generally there is safety in numbers. Don't trust any single source and don't trust something that doesn't have a chain of reasoning behind it. I trust all kinds of scientific statements that I don't have the qualifications or time to vet myself - but we have to do our best and that often means doing a meta-analysis of how a conclusion was reached and how many other people/groups (who themselves have qualifications and links to other entities with similar qualifications) that the statements are linked to.

Fake news isn't 100 levels deep; it's usually one level with no real supporting information. When people (like Trump) categorically denounce someone else's statement, they often provide no real information of their own. Similarly, when refuting a fact check, most people don't dig into it and refute something in its chain of reasoning; they just say "well, that is just not true!" and leave it at that.

We don't need to fundamentally fix the nature of truth, but we do need to be able to combat the worst cases of misinformation, and any tool that helps do that is great. Continuing to have the same philosophical debate about truth is fine from an academic standpoint, but from a practical standpoint it is sometimes not helpful. I feel similarly about climate change: it's great to acknowledge nuance, but what good is that if we're trending toward pogroms and a totalitarian dictatorship (to be hyperbolic, maybe)?

balozi 20 hours ago 0 replies      
One likely outcome is that Google Search and News will now be perceived as partisan by the hoi polloi. Same reason the old media gatekeepers fell by the wayside.
return0 16 hours ago 0 replies      
It's a witch hunt. Science (rather, life sciences) has a similar problem. There are just enough (statistically significant) facts to push many agendas. Peer review weeds out some stuff, but that doesn't stop a lot of wrong conclusions being pushed to the public.

Maybe a better solution is adversarial opinionated journalism, rather than this proposed fact-ism.

ronjouch 20 hours ago 0 replies      
> https://blog.google/products/search/fact-check-now-available...

Didn't know Google has its own top-level domain. Previous HN discussion: https://news.ycombinator.com/item?id=12609551

pcl 18 hours ago 0 replies      
The blog title is "Fact Check now available in Google Search and News around the world". I think that the extra bit at the end is worthy of inclusion, as I expect this to become a point of contention over the years.

I would not be surprised if different governments take issue with Google adding any sort of editorial commentary, even if it's algorithmically determined etc.

mark_l_watson 14 hours ago 0 replies      
I don't like this, at all. People need to rely on their own reasoning skills and critical judgement and not let centralized authorities have a large effect on what people can read. I like systems to be decentralized and this seems to be the opposite.
pklausler 19 hours ago 1 reply      
I really wish that major legitimate institutions of journalism (i.e., the ones that require multiple independent sources, publish corrections and retractions, &c.) would stop pussyfooting around and use nice, simple, accurate words like "lies" when they're reporting on somebody who's blatantly lying. False equivalency and cowardice are going to get us all killed.
westurner 13 hours ago 0 replies      
So, publishers can voluntarily add https://schema.org/ClaimReview markup as RDFa, JSON-LD, or Microdata.
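As a sketch of what that voluntary markup can look like, here is a minimal JSON-LD `ClaimReview` snippet. The site, claim, and rating are invented placeholders; the property names come from the schema.org `ClaimReview` type:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "ClaimReview",
  "datePublished": "2017-04-07",
  "url": "https://example.org/fact-checks/example-claim",
  "claimReviewed": "An example claim being checked",
  "itemReviewed": {
    "@type": "CreativeWork",
    "author": { "@type": "Person", "name": "Example Speaker" }
  },
  "author": { "@type": "Organization", "name": "Example Fact-Check Site" },
  "reviewRating": {
    "@type": "Rating",
    "worstRating": "1",
    "bestRating": "5",
    "ratingValue": "2",
    "alternateName": "Mostly false"
  }
}
</script>
```

Since this is self-asserted metadata embedded in the publisher's own page, nothing in the format itself stops a site from labelling anything it likes as a fact check.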
losteverything 19 hours ago 1 reply      
Billy Jack was rated M.

This is just another new rating system.

As long as they don't prevent me from reading false things, I can live with it.

Keep it my choice.

dragonwriter 21 hours ago 0 replies      
Original title is "Fact Check now in Google Search and News"; the different capitalization vs the current HN headline ("Fact Check Now...") is significant, the new feature "Fact Check" is now available in Google Search and News, rather than a feature "Fact Check Now" being discussed in those services.
coryfklein 18 hours ago 0 replies      
Pretty neat! Unfortunately doesn't help when searching for "obama wiretap trump tower".


takeda 20 hours ago 0 replies      
I know a person who eats those "alternative facts" like candy. When I tried to prove one of them wrong, I pulled out a website to do a fact check and his response was: "you trust Snopes?" so I have doubts this will help much, but I would like to be wrong.
Mithaldu 17 hours ago 0 replies      
As so often when Google says "everywhere", they don't remotely mean everywhere and should instead say "in the USA". My country's edition of Google News has no fact check at all.
DrScump 17 hours ago 0 replies      
It's interesting timing that just today, for the first time in a couple of weeks, my Facebook feed has fake news clickbait ads again.

Unless both Kevin Spacey and Burt Reynolds are, in fact, dead. Again.

ArchReaper 21 hours ago 2 replies      
Anyone have an alt link? 'blog.google' does not resolve for me.
thr0waway1239 19 hours ago 0 replies      
Factual Unbiased Checks for Knowledge Upkeep by Google.
xster 16 hours ago 0 replies      
The fact that this came from CFR/Hillary's State Department's Jigsaw is very troubling.
sova 17 hours ago 0 replies      
Hurrah for Google! Now if only Facebook and SocialNetworkGiants(tm) would follow suit!
retox 14 hours ago 1 reply      
I don't trust Google to tell me the sky is blue.
codydh 21 hours ago 1 reply      
I tried a slew of recent statements that are objectively false but that a certain politician in the United States has tried to say are true. Google returned fact checks for exactly 0 of the queries I tried.
keebEz 21 hours ago 1 reply      
A fact has no truth value. Truth only comes from reason, and reason only exists in each person's head. This is reducing the demand for reason, and thus destroying truth.
ffef 21 hours ago 0 replies      
A great start in the right direction, and kudos for using Schema to help battle "fake news".
debt 21 hours ago 0 replies      
This is just gonna create a pavlovian response akin to "ah okay, this is fact-checked, I'll read it," which'll just compound the problem. It presumes that Google's fact-checking algorithms and methodology are sound.
SJacPhoto 17 hours ago 0 replies      
And Who controls the fact-check facts?
throw2016 18 hours ago 0 replies      
'Fact checking' should be limited to blatantly false news items fabricated and posted for online ad clicks, i.e. 'Obama to move to Canada to help Trudeau run country' or 'Trump applies for UK citizenship to free UK citizens from Brussels despots'. These should be relatively easy to identify and classify.

There is a wide gulf between the fabrications above and news and journalism as we know it, full of opinion, bias, agendas, propaganda, and maybe some facts twisted to suit a narrative.

The latter would take human-level AI to sift through, and even then detecting bias, leanings or manipulation depends on one's background, world view, specialization, knowledge level, understanding of how the media works, and a well-informed big-picture view of the state of the world.

This is impossible to classify for bias, falsehood or manipulation, and will need readers to use their judgment. Trying to 'control' this is like trying to control news, favouring media aligned to your world view and discrediting those whose views you disagree with. It is for all practical purposes propaganda as we understand the term. Calling it fact checking is sophistry.

gokusaaaan 11 hours ago 0 replies      
Who fact-checks the fact checkers?
isaac_is_goat 19 hours ago 0 replies      
Snopes and Politifact? Really? smh
MrZongle2 21 hours ago 1 reply      
So what takes place when the inevitable happens, and an employee decides that an existing "fact check" (conducted by a third party, Google hastens to add) is philosophically inconvenient and thus removes it?

Also, FTA: "Only publishers that are algorithmically determined to be an authoritative source of information will qualify for inclusion."

What's the algorithm? Who wrote it?

huula 20 hours ago 0 replies      
snowpanda 21 hours ago 2 replies      
Snopes and Politifact, they can't be serious. Not that I expected them to pick a neutral source, nor am I surprised that Silicon Valley's Google picked 2 leftist "fact" sources. This is a stupid idea, everyone has a bias. This isn't to help people, this is to influence how people see things.
Do not let your CDN betray you: use subresource integrity (2015) mozilla.org
304 points by handpickednames  3 days ago   83 comments top 18
AdamN 2 days ago 1 reply      
"The security properties of a collision resistant hash function, ensure that a modification results in a very different hash."

I really appreciate the clarity of this post. The author is building up the groundwork without skipping steps that may be obvious to many readers. I of course knew the purpose of a hash before reading the article, but some people don't, and that sentence clearly lets those readers know why the hash matters without making it less readable for knowledgeable readers.

Writing clarity matters.
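
As a quick illustration of that property (an editorial sketch in Python, not from the article): changing a single character of the input produces a digest with no visible relationship to the original.

```python
import hashlib

# Two inputs that differ in exactly one character.
a = hashlib.sha384(b"alert('hello');").hexdigest()
b = hashlib.sha384(b"alert('hellp');").hexdigest()

# The digests differ completely, not just in one position:
# each hex digit of the two digests agrees only by chance.
matches = sum(x == y for x, y in zip(a, b))
print(a == b)                              # False
print(matches, "of", len(a), "hex digits agree")
```

This "avalanche" behavior is exactly why an attacker cannot tweak a CDN-hosted script without the SRI check failing.
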

Klathmon 3 days ago 1 reply      
If you use webpack, just drop in webpack-subresource-integrity [0] for basically "free" SRI tags on all scripts.

It's not really as useful if you are serving your static assets from the same place as the HTML (and you always use HTTPS) but if you load your js/css on another server SRI can still provide some protection.

[0] https://github.com/waysact/webpack-subresource-integrity

yeldarb 2 days ago 8 replies      
It'd be cool if the browser used this to allow cross-origin caching as well.

Say I previously loaded a page that included jQuery from CDNJS, and now I'm in China and another site tries to load jQuery from Google's CDN.

Currently that request would get blocked by the Great Firewall. But since the browser should know that this file matches one it has seen (and cached) before, it should be able to just serve the cached file.

This could also save a network request even if I'm linking to a self-hosted file on my own servers if I include the hash.

ejcx 2 days ago 3 replies      
Not just CDNs; there are benefits to rolling out SRI for lots of your third parties.

Your Stripe JS, scary ad networks' JS, front-end analytics companies. SRI is really neat and helps protect you from these many third parties being pwned.

theandrewbailey 2 days ago 1 reply      
See also Content-Security-Policy require-sri-for


recursive 2 days ago 2 replies      
"An important side note is that for Subresource Integrity to work, the CDN must support Cross-Origin Resource Sharing (CORS)."

This doesn't make sense to me. Why shouldn't I be able to perform integrity checking on resources from non-CORS domains?
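
For what it's worth, the integrity value itself is straightforward to compute: per the SRI spec it is the hash algorithm name, a dash, and the base64 of the raw digest. A Python sketch (the CORS requirement discussed above exists so the browser is permitted to read the cross-origin response bytes it needs to hash):

```python
import base64
import hashlib

def sri_value(content: bytes, alg: str = "sha384") -> str:
    """Return an SRI integrity value: '<alg>-' + base64(raw digest)."""
    digest = hashlib.new(alg, content).digest()
    return alg + "-" + base64.b64encode(digest).decode("ascii")

# The result goes into the script tag, e.g.:
#   <script src="https://cdn.example.com/lib.js"
#           integrity="sha384-..." crossorigin="anonymous"></script>
print(sri_value(b"console.log('hi');"))
```
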

forgotpwtomain 2 days ago 2 replies      
This might have been mentioned somewhere else, but will browsers make an exception to blocking mixed content[0] when a subresource integrity check is present? There really is no reason to pay the TLS overhead for commonly used libraries.

[0] https://developer.mozilla.org/en-US/docs/Web/Security/Mixed_...

depr 2 days ago 1 reply      
And get all your resources requested twice on Chrome: https://bugs.chromium.org/p/chromium/issues/detail?id=677022
nighthawk454 2 days ago 0 replies      
Looks like it has good support in Firefox and Chrome, but none in IE/Safari.


arghwhat 2 days ago 0 replies      
This only helps for JavaScript (and soon CSS) resources.

If your HTML goes through a CDN (say, you use the full Cloudflare package), the CDN can of course just remove or modify these integrity attributes, or add new scripts altogether.

vog 3 days ago 2 replies      
The title should have a "(2015)" suffix.
gszathmari 2 days ago 0 replies      
This tool lets you quickly assess whether third-party assets are protected by SRI: https://sritest.io/

Disclaimer: I am the developer of sritest

awqrre 2 days ago 0 replies      
Or even better, avoid CDNs? It might even be cheaper when you account for the extra work... and faster when you don't have to load data from 10 servers to load just one web page.
tofflos 2 days ago 0 replies      
zitterbewegung 2 days ago 1 reply      
I wonder if it would be a good idea for a warning to be presented to the user when SRI detects a modified JavaScript file?
nwmcsween 2 days ago 1 reply      
It would be infinitely better if I could use a small hash instead of the giant SHA variants; imagine 40 or so resources, each carrying a full SHA-length value.
sedatk 2 days ago 0 replies      
FYI, Edge and Safari have yet to support this feature.
homakov 2 days ago 0 replies      
SRI shouldn't use static hashes; it should list the public keys of different people, and the response must carry N-of-M signatures. This way updates are possible, and you know N people confirmed the source as safe.
Jeff Bezos Is Selling $1B a Year in Amazon Stock to Finance Race to Space nytimes.com
309 points by sndean  2 days ago   223 comments top 18
itchyjunk 2 days ago 5 replies      
I think SpaceX getting some competition is good and healthy for the space market. Even though I like SpaceX a lot, a monopoly isn't the preferred outcome.

I was trying to find the differences between Blue Origin and F9 and found these old articles. [0] [1]



[1] http://www.popularmechanics.com/space/rockets/a18711/blue-or...

oblib 2 days ago 17 replies      
I appreciate the tech very much, but the visions of "Millions of people working in space"? I have no desire to be a part of that but I do wonder what they'll be working on?

And "Living on Mars". No thanks, I'll pass on that too. I can certainly see the thrill of the ride described though and who wouldn't want to be weightless for 4-5 minutes? But the market for that carnival ride is about as big as the number of cars Ferrari sells each year so I don't get that.

One can call this a "steppingstone" tech for now if they choose but it's more likely to be a cliff unless there's something of real value here. Even if we go out on a limb and say all this is really a way for the wealthy to escape the planet they'll find there's no place close enough to go so even that doesn't make sense.

No, none of the above makes sense to me yet so there must be something more fueling this race than what's being said.

And hey, isn't there also a downside to poking holes in the upper atmosphere and/or ozone depletion? How long do we let someone profit off the effects of that?

I honestly don't know the answers to those questions but I do wonder about them.

adventured 2 days ago 1 reply      
$78.4 billion in personal wealth. 94% held in Amazon, which is currently sporting a $433b market cap, up 50% in the last year and trading at a generous 200 times earnings.

I hope he sells more than he needs, faster than he needs, before this latest bubble (slight or extreme is open for debate) gives out. I wouldn't have guessed that Blue Origin could cost him ~$1 billion per year to subsidize. I'm glad he's doing it, very few people on the planet could afford to; of those that could, fewer still would care to.

nyxtom 2 days ago 0 replies      
Blue Origin seems pretty likely to tap into a much lower-cost market for panoramic views of Earth at high altitudes, similar to Virgin Galactic (had they not completely messed things up). Developing something that will get them into LEO is massively more difficult, but with significant funding at $1B a year (about 5% of NASA's total yearly budget), hopefully they can make up their lag. I would love to see more and more ventures enter this market. The more the better!
OrwellianChild 2 days ago 4 replies      
I'm impressed by the burn-rate of $1B for a company that isn't actively launching on a regular basis... Does anyone have any idea how much capital SpaceX sunk before it got its Falcon 1 up and running?
appleflaxen 2 days ago 4 replies      
Space should be a much lower priority than climate change. There's not as much excitement or fun in it, but it's about a million times more pressing.
chki 2 days ago 1 reply      
I'm not really sure about the amount of money needed to have an actual impact on a stock like Amazon but maybe somebody more knowledgeable might have an idea: Does this measurably lower the market valuation of Amazon? If there is constantly someone selling shares in those volumes there should be some effect, even if Bezos is obviously not selling $1B in one day but over the course of the whole year?
myroon5 2 days ago 4 replies      
When he talks about millions of people working in space, what jobs is he talking about? Primarily resource mining?
tryitnow 2 days ago 1 reply      
Is this perhaps a signal that it might be a good time for others to start selling Amazon stock?

I know if I were Bezos I would sell when I thought the stock was overpriced. I wonder if this is an indication that

Bezos and his team were brilliant for selling a bunch of convertible debt right before the 2000 crash. And let's not forget Bezos got his start at the quant fund DE Shaw.

rokhayakebe 2 days ago 1 reply      
I just hope that the planets we get to do not become first come, first served, with these individuals claiming ownership of the vast majority of them.
skdotdan 2 days ago 1 reply      
Imagine a reusable rocket, like the SpaceX one, but bigger and able to refly in, say, one week.

If every flight carried a satellite plus space tourists, going to space would be much cheaper, and it's feasible in the near term. I see both SpaceX and Blue Origin offering "cheap" travel to space in less than a decade.

peter303 1 day ago 2 replies      
I wonder why Bezos is spending so much of his own money (OK, less than 2% a year). Musk has perfected the art of spending Other People's Money: government loans, green subsidies, an IPO. Plus he has a good customer revenue stream in two of his companies.
zeristor 2 days ago 1 reply      
To be honest, what else would one do with untold billions?
Pica_soO 2 days ago 0 replies      
Rich guy Escapism reaching escape velocity
kakarot 1 day ago 0 replies      
I always figured Amazon and SpaceX would just grow to be sister companies. Let SpaceX handle hardware and negotiations and let Amazon handle logistics and sales. Instant 45-minute Anywhere-On-Earth delivery service. I think there would be enough cash for them to share.

I have no doubt that Amazon wants to dig deep into space mining and be the backbone of the early solar economy. Who doesn't? It's gonna take a series of extremely smart investments for whoever does manage to pull that off. The barrier to a sustainable space economy is quite high.

good_vibes 2 days ago 1 reply      
He will be the richest man in the world by 2020 I think.
seaghost 2 days ago 2 replies      
chrismealy 2 days ago 0 replies      
Low earth orbit, meh.
A girl was found living among monkeys in an Indian forest washingtonpost.com
336 points by mrb  15 hours ago   132 comments top 21
DanielleMolloy 2 hours ago 2 replies      
This is the darker (and probably more truthful) variant of the story: https://www.theguardian.com/world/2017/apr/08/indian-girl-fo...

" 'In India, people do not prefer a female child and she is mentally not sound,' DK Singh said. 'So all the more [evidence] she was left there.' "

sandworm101 14 hours ago 8 replies      
The story is too good. The girl, the monkeys defending her, the policeman ... all Disney-level stuff but where are the non-disney facts? A real story always has dark sides. This one is too perfect. I'm not saying that it is all fake, rather that I don't think we are getting the entire story. I wouldn't be surprised if we eventually learn that this girl was only living with the monkeys for a very short while, that her issues are more long-standing. Perhaps the truth is that she was a disabled girl found amongst monkeys and the story has been elaborated from those simple facts.

>>> "She behaves like an ape and screams loudly if doctors try to reach out to her."

Like an ape or like a monkey? She was raised by monkeys but acts like an ape? A lay person perhaps wouldn't know the difference but by now someone with knowledge would be on site. I have been around several disabled children. The screaming and fear of being looked at or touched is not uncommon. No mention of how she reacts to being clothed? I'm no expert on feral children but I would expect that after eight years of being naked one would not be happy about clothing and that would deserve some mention ... unless of course clothing is nothing new to her.

I want to see her feet, specifically her toes. If she really hasn't ever worn shoes then her toes will show it.


pmoriarty 12 hours ago 2 replies      
This reminds me of the story of Kaspar Hauser[1] (which was made into a movie by Werner Herzog[2]) and of the fascinating book Seeing Voices by Oliver Sacks.[3]

In his book, Sacks investigates various cases of children growing up without language, how they cope (or don't cope) with it, how they finally acquire language (if they do), and how differently they see the world in both the pre-linguistic and post-linguistic states. Hauser was one of the most famous cases of this sort, Helen Keller[4] was another.

Reading this book inspired me to learn sign language, which I expected to be radically different from spoken and written language, and more powerful in many ways, as you can physically describe things in ways that have little parallel in spoken and written languages.

[1] - https://en.wikipedia.org/wiki/Kaspar_hauser

[2] - https://en.wikipedia.org/wiki/The_Enigma_of_Kaspar_Hauser

[3] - https://www.amazon.com/Seeing-Voices-Oliver-Sacks/dp/0375704...

[4] - https://en.wikipedia.org/wiki/Helen_keller

kumarm 15 hours ago 4 replies      
Hope her integration into society is handled carefully. So far she has been treated like an animal in a zoo by humans too (check the photos of the groups of people looking at her):



jacquesm 14 hours ago 2 replies      
The monkeys seem to have been doing a better job at parenting than the people here. Note how the text below one of the pictures says she's frightened of people and the picture right above it has a whole bunch of (all male cast) busybodies crowding into a little room with her in it.
malandrew 14 hours ago 0 replies      
Would have been interesting to have Jane Goodall involved. She could have left the child integrated, using the circumstance to bridge the communication divide between us and other primates, because this girl surely knows things we never will.
faitswulff 15 hours ago 3 replies      
I read somewhere that reintegration with human society mostly fails for feral children. Is it really a rescue if she dies at a young age, alone?
narrator 13 hours ago 0 replies      
Now the battle begins to shape her story such that it can be used to reconfirm one of a number of different competing narratives about man's relationship with nature, nature vs nurture, theories about language acquisition, the "critical period" and early childhood development. Did I miss any?
baron816 14 hours ago 1 reply      
It's possible she could end up like this unfortunately: https://en.wikipedia.org/wiki/Genie_(feral_child)
zaroth 1 hour ago 0 replies      
These types of junk-news stories seem to make their rounds on the Internet for several weeks before finally evaporating into the ether.

What's interesting is that in the past they seemed to manage to stay off the HN front page.

Now it seems like I see these stories start circulating on Outbrain or the other click bait networks and I think, well, that'll be on HN in a week or so!

These stories are usually in large part fake news, or reality tweaked or skewed with some angle that makes them almost irresistible to read. I personally have no use for these types of stories on HN, but I certainly understand they are created with a very compelling hook that makes people want to share them.

dmix 14 hours ago 0 replies      
Anyone know what kind of monkeys they were? I can't find any mention of it in this article or the original referenced source.
popol12 12 hours ago 1 reply      
How ethical is it to force her to leave the monkeys to become a "normal" human?
slitaz 10 hours ago 1 reply      
I feel it is a badly-written article.

If the girl managed to survive for so many years, she should have been left with the troop of primates and been observed. This sudden change will probably be worse than any other, less brutal change in the environment.

smdz 10 hours ago 1 reply      
The first expression I had was: What rights do we humans have to take her back from her family(monkeys in this case) and her home (the forest)? Just because she is our kind, should we impose our culture, our values, our ways (and our governments) on her?

But then - this feels more like a creative story. From the videos it looks like she might have been in the forest only for some time and needs rehab, but I am no expert here.

abrkn 1 hour ago 0 replies      
An observation that adds nothing to the story/discussion: DK Singh. Donkey Kong.
throw2016 13 hours ago 0 replies      
The story, if true, is discomforting; the mind ponders, and it does not completely add up.

We know the 'facts' but we also don't. This is exactly the kind of story that needs fact checking, but to get that you need people on the ground, who are experienced and confirmation will take time which the attention span of the news cycle will not allow.

The worst is turning it into some kind of circus. Hope that now with the global attention the Indian authorities will immediately retrieve her from the current facilities with people clearly not trained for this, and get her the kind of specialized care and sensitivity she needs.

mythrwy 8 hours ago 0 replies      
It's finally happened.

Washington Post has completed the transition into a full blown supermarket tabloid.

johnb777 13 hours ago 1 reply      
bingomad123 11 hours ago 0 replies      
It is common in India for family members to put their autistic or badly born kids into a cage and display them in a circus. I remember a family showing three of their kids in a circus as "animals" just because the babies were autistic and had tail-like features.
aaron695 8 hours ago 1 reply      
Is no one else here disturbed that an intellectually handicapped girl who was abandoned by the system has been turned into a dancing monkey for HN's amusement?

Surely the discussion here should be more about the horrific system that exists in parts of India, in which handicapped people are turned into stories.

Do I really need to spell it out? It's an abandoned handicapped girl found near monkeys.

The doctor says when she was brought in she was near starving (video)? Were the monkeys looking after her or not?

This is a common fairy tale, seriously people, what is wrong with you that you can't see the real story here. It's about poverty, people not dealing with mental illness and broken systems???

The fact doctors even allowed her to be filmed for your amusement shows they are not very well trained.

kazinator 8 hours ago 1 reply      
> Numerous stories of feral children ...

"Feral children?" How amusing; is that an actual phrase?

It evokes a domesticated species of rug-rat, bred in the wild.

Espresso Googles peering edge architecture blog.google
325 points by vgt  3 days ago   92 comments top 18
itchyjunk 3 days ago 1 reply      
"We defined and employed SDN principles to build Jupiter, a datacenter interconnect capable of supporting more than 100,000 servers and 1 Pb/s of total bandwidth to host our services."

This kind of scale boggles my mind, though I find I can no longer keep up with all the terminology popping up every day. Posts like these are my only connection to learning about the massive scaling that makes modern networks work.

"We leverage our large-scale computing infrastructure and signals from the application itself to learn how individual flows are performing, as determined by the end user's perception of quality."

Is this implying they are using machine learning to improve their own version of a content delivery network?

mmaunder 3 days ago 1 reply      
"Google has one of the largest peering surfaces in the world, exchanging data with Internet Service Providers (ISPs) at 70 metros and generating more than 25 percent of all Internet traffic. "


Apocryphon 3 days ago 3 replies      
The official Android testing framework from Google is also named Espresso. Are we running into a classic hard computer science problem?
smaili 3 days ago 1 reply      
The essence of what Espresso is begins towards the end of the post:

Espresso delivers two key pieces of innovation. First, it allows us to dynamically choose from where to serve individual users based on measurements of how end-to-end network connections are performing in real time.

Second, we separate the logic and control of traffic management from the confines of individual router boxes.

dkhenry 3 days ago 3 replies      
I think with platforms like this it is now safe to say that the systems and services Google is deploying are no longer in the same category as classical networked systems. This is as foreign a concept to traditional networking and the seven-layer OSI model as non-von Neumann computing is to von Neumann computing.
hueving 3 days ago 4 replies      
These presentations from Google are pretty irritating at these conferences. If you're familiar with the SDN field (as most ONS attendees would be), this presentation is essentially nothing but bragging about the scale at which they operate.

There is no useful information in here to advance the state of the art, no new ideas, no publicly available implementations (closed or open source). It's just a very high-level architectural view of a large network given by people who are incentivized to present it in the most favorable light. And due to the lack of any concrete details, it's free from critical analysis.

>Espresso delivers two key pieces of innovation. First, it allows us to dynamically choose from where to serve individual users based on measurements of how end-to-end network connections are performing in real time. Second, we separate the logic and control of traffic management from the confines of individual router boxes.

The first has been done before at many levels of the network:

* BGP anycast

* DNS responses based on querier

* Done in load balancer

* IGP protocols to handle traffic internally while taking into account link congestion

I assume their framework gives them much nicer primitives to work with than the above, which would be an advancement in the field if we could actually see an API or something.

The second is very far from "innovation". This is the essence of SDN and this has been the hottest thing since sliced bread in the networking world since 2008 at a minimum [1] and even earlier if you look at things like the Aruba wireless controller.

1. http://archive.openflow.org/documents/openflow-wp-latest.pdf
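
As a rough illustration of the second bullet above (steering based on who is asking), a toy resolver-to-site mapping might look like the sketch below. The prefixes and site names are invented; real systems such as the one described pick sites from live performance measurements rather than a static table.

```python
import ipaddress

# Hypothetical static mapping from client address prefixes to
# serving sites (three-letter codes invented for illustration).
SITE_BY_PREFIX = {
    ipaddress.ip_network("203.0.113.0/24"): "syd",
    ipaddress.ip_network("198.51.100.0/24"): "fra",
}
DEFAULT_SITE = "iad"

def pick_site(client_ip: str) -> str:
    """Choose a serving site for a client address by prefix match."""
    addr = ipaddress.ip_address(client_ip)
    for net, site in SITE_BY_PREFIX.items():
        if addr in net:
            return site
    return DEFAULT_SITE

print(pick_site("203.0.113.7"))  # syd
print(pick_site("192.0.2.1"))    # iad
```

The gap between this toy and a production system is exactly the point of contention in the thread: replacing the static table with continuously measured end-to-end performance is where the engineering difficulty lives.
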

filereaper 3 days ago 0 replies      
Yup, this is the kind of thing you get when you put in $30B into infrastructure.


NTDF9 3 days ago 1 reply      
Can someone with more expertise summarize how this differs from commercial SDN solutions like: Cisco ACI, Juniper Contrail etc.?
apanda 3 days ago 0 replies      
This vision seems very similar to the 2011 talk by Scott Shenker: https://www.youtube.com/watch?v=YHeyuD89n1Y
danm07 3 days ago 0 replies      
Distracting aside: it's amusing how many names programming projects draw from the coffee industry. I wonder how long it will take to fill a Starbucks menu.
soVeryTired 3 days ago 2 replies      
What does "peering edge" mean? A google search only brings up this article.
piyushpr134 3 days ago 1 reply      
The biggest takeaway from this is that they can have multiple machines for the same IP address. That is just awesome, and it also explains how they have probably managed to scale up services without needing load balancers.
KaoruAoiShiho 3 days ago 0 replies      
This is pretty impressive.
Sephr 3 days ago 3 replies      
I really hope they aren't patenting any of this. I'm working on p2p tech that includes (among many other features) similar real-time performance measurements and smart file distribution based on load and proximity.
s73ver 3 days ago 3 replies      
Two Google products named Espresso? That won't be confusing at all.
neduma 2 days ago 0 replies      
sebnap 3 days ago 0 replies      
Wow. Isn't this a trojan horse? People start to use it because of its convenience, and then it will spread and spread and spread. I mean, what happens when Google runs more or less everything?
Bud 3 days ago 0 replies      
Damn, for a moment there I was hoping that Google made some sort of really cool espresso machine. Perhaps with Alexa built-in.
The Power of Prolog metalevel.at
338 points by noch  2 days ago   153 comments top 24
fizixer 2 days ago 14 replies      
Roughly half the 'power of Prolog' comes from the 'power of logic programming', and Prolog is far from the only logic programming language, e.g.:

- You can do logic programming using minikanren in scheme. (you can also extend the minikanren system if you find a feature missing).

- Minikanren was implemented in clojure and called core.logic.

- It was also ported to python by Matthew Rocklin I think, called logpy.

- There is also Datalog, with pyDatalog as its Python equivalent.

- Also pyke. And so on.

Plus logic programming has very important (arguably necessary) extensions in the form of constraint-logic-programming (CLP), inductive-logic-programming (ILP), and so on.

It's a huge area.

EDIT: ILP at an advanced level starts making connections with PGMs (probabilistic graphical models) and hence machine learning, but its a long way to go for me (in terms of learning) before I start to make sense of all these puzzle pieces.

EDIT 2: You can have a taste of logic programming without leaving your favorite programming language. Just try to solve the zebra puzzle [1] (without help if you can, esp. through any of Norvig posts or videos; they're addictive).

[1] https://en.wikipedia.org/wiki/Zebra_Puzzle

EDIT 3: An "expert system" (a term from the 1980s) is largely a logic programming system paired up with a huge database of facts (and probably some way of growing the database by providing an entry method to non-programmer domain experts).

EDIT 4: In other words, logic programming (along with other things like graph search etc) is at the core of GOFAI (good old fashioned AI), the pre-machine-learning form of AI, chess engine being a preeminent example of it.
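
To get a feel for what a logic language searches for declaratively, here is a deliberately scaled-down zebra-style puzzle brute-forced in Python. The three constraints are invented for this miniature (the real puzzle has five houses and many more attributes); a logic language would state the same constraints and let its search find the assignment.

```python
from itertools import permutations

# A tiny zebra-style puzzle: three houses, three nationalities,
# three drinks. Constraints (invented for this miniature):
#   1. The Norwegian lives in the first house.
#   2. Milk is drunk in the middle house.
#   3. The Englishman drinks tea.
nationalities = ("Norwegian", "Spaniard", "Englishman")
drinks = ("coffee", "milk", "tea")

solutions = []
for nat in permutations(nationalities):      # nat[i] lives in house i
    for drink in permutations(drinks):       # drink[i] is drunk in house i
        if (nat[0] == "Norwegian"
                and drink[1] == "milk"
                and drink[nat.index("Englishman")] == "tea"):
            solutions.append((nat, drink))

for nat, drink in solutions:
    print(nat, drink)  # exactly one assignment survives
```
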

jacquesm 2 days ago 2 replies      
One of the nicest examples of the power of prolog is that Windows NT contained a prolog implementation to take care of network configuration, replacing the previous codebase:


samwalrus 2 days ago 0 replies      
I run a youtube channel called 'playing with prolog' https://www.youtube.com/channel/UCfWpIHmy5MEx2p9c_GJrE_g

We demo little things that you can do in prolog.

Happy to take suggestions for videos :)

hexmiles 2 days ago 6 replies      
I have always had trouble understanding Prolog; lots of guides online seem to just tackle the syntax or assume you already know a lot about logic programming. For example, the first example in the link (https://www.metalevel.at/prolog/facets):

  list_length([], 0).

  list_length([_|Ls], N) :-
          N #> 0,
          N #= N0 + 1,
          list_length(Ls, N0).

I don't understand it. I read the segment describing it multiple times but I still don't get it, and it's not the syntax I don't understand, it's how it should work. I tried reading the next few chapters but I feel I'm missing something! Is there a "Prolog for dummies" out there?
haddr 2 days ago 2 replies      
There is a class of problems which you can solve using Prolog with pure pleasure.

There is one thing, however: Prolog can magically hide the complexity of many things, which is a double-edged sword. On many occasions you hide away the computational complexity and then wonder why execution is so slow. This rarely happens in imperative languages (where you are more aware of all the loops and recursion). I guess this is why many people hate Prolog...

andai 2 days ago 0 replies      
For an introduction to Prolog:

Learn Prolog Now - Free Online Version http://www.learnprolognow.org/lpnpage.php?pageid=online

I like this one because it's beginner-friendly (and uses characters from Pulp Fiction as examples).

-- edit: actually, The Power of Prolog is great, too! If anyone knows more good resources, I'd be happy to hear about them!

timonoko 2 days ago 3 replies      
Never understood why natural numbers (or some set of N) is not in the search space:

 fib(X,X) :- X < 2.
 fib(X,Y1+Y2) :- fib(X-1,Y1), fib(X-2,Y2).

I tried to get an answer to this question on reddit, but all I got was personal insults.
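For the record (a sketch, not an authoritative answer): standard Prolog treats X-1 in an argument position as the term -(X,1), not as a number, so clauses written that way never match a concrete value. Evaluation has to be requested explicitly with is/2:

```prolog
% fib(N, F): F is the Nth Fibonacci number.
% Arithmetic must go through is/2; fib(X-1, Y1) would pass the
% unevaluated term X-1, which is why the version above cannot work.
fib(X, X) :- X < 2.
fib(X, Y) :-
    X >= 2,
    X1 is X - 1,
    X2 is X - 2,
    fib(X1, Y1),
    fib(X2, Y2),
    Y is Y1 + Y2.

% ?- fib(10, F).
% F = 55.
```

With clpfd (X1 #= X - 1, etc.) the arithmetic becomes a real constraint over the integers, which is the closest common Prolog systems come to putting the numbers "in the search space".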

skdotdan 2 days ago 4 replies      
Something I would like to understand/know/study is how logic programming languages are implemented and what their runtime looks like.
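One self-contained way in: Prolog's core execution loop (unify the goal with a clause head, then solve the body) is small enough to write in Prolog itself. A rough sketch of such a "vanilla" meta-interpreter (assuming an ISO Prolog; the example predicates are declared dynamic so that clause/2 may inspect them):

```prolog
:- dynamic parent/2, ancestor/2.

% solve/1 mirrors what the runtime does: true succeeds, conjunctions
% are split, and any other goal is matched against a clause head,
% after which its body is solved in turn.
solve(true).
solve((A, B)) :- solve(A), solve(B).
solve(Goal)   :- clause(Goal, Body), solve(Body).

% A tiny program for it to run:
parent(tom, bob).
parent(bob, ann).
ancestor(X, Y) :- parent(X, Y).
ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).

% ?- solve(ancestor(tom, ann)).
% true.
```

Real systems compile this loop down to the WAM (Warren Abstract Machine) rather than interpret it, but the control flow is the same; Aït-Kaci's "Warren's Abstract Machine: A Tutorial Reconstruction" is the standard reference.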
huherto 2 days ago 0 replies      
I implemented the Wumpus World from Peter Norvig's AIMA using different techniques. I found that Bayesian logic was much more powerful than logic programming. Perhaps that explains why Prolog hasn't flourished.

Logic programming is limited to values of true or false, 0 or 1. Bayesian logic can deal with uncertainty values like 0.2 or 0.3; it almost seems like a superset. It is also more intuitive, IMHO.

I keep the wumpus implementation here if anyone is interested: https://github.com/huherto/aima3/tree/master/src/wumpus

moderation 2 days ago 0 replies      
See the Open Policy Agent [1] project that has a language and REPL called Rego inspired by Datalog.

1. http://www.openpolicyagent.org/documentation/how-do-i-write-...

SagelyGuru 2 days ago 0 replies      
I used to teach Prolog in the GOFAI days of the early 80's. It certainly was fun and probably the quickest way to start solving interesting search-based problems without having to write pages and pages of code. It was very good for motivation, and also for encouraging "top-down" design.
camelNotation 2 days ago 0 replies      
One of my professors in college was an original creator of the Prolog language. He made us learn Prolog so that he could teach us something we could have just as easily done in C or Java. I strongly disliked him. For that reason, I am filled with negative vibes when I think about Prolog.
lacampbell 2 days ago 3 replies      
Can anyone give examples of the kinds of problems prolog is ideally suited to? I took a course on it at university. It looked interesting but I didn't really "get it". It might be worth another look now I have a bit more experience under my belt. I've got a lingering feeling it would solve a certain kind of problem very easily.
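The usual answers are search, constraints, parsing, and rule systems; the hallmark is anything naturally stated as a relation rather than a function. A toy sketch of what that buys you (standard list concatenation, written out rather than taken from the thread):

```prolog
% append_/3 relates two lists to their concatenation. One definition,
% many uses; no separate "concat", "split", or "strip-suffix" functions.
append_([], Ys, Ys).
append_([X|Xs], Ys, [X|Zs]) :- append_(Xs, Ys, Zs).

% ?- append_([1,2], [3], Zs).    % concatenation: Zs = [1,2,3]
% ?- append_(Xs, [3], [1,2,3]).  % strip a suffix: Xs = [1,2]
% ?- append_(Xs, Ys, [1,2]).     % enumerate every split of [1,2]
```

The same two clauses answer all three queries, which is the kind of problem where Prolog feels effortless.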
pjonesdotca 2 days ago 0 replies      
A great intro to Prolog can be found here if you're not the type that learns solely from reading. https://www.youtube.com/watch?v=SykxWpFwMGs
mark_l_watson 2 days ago 1 reply      
I bookmarked this. I have been revisiting my own programming history. I used Prolog a lot in the 1980s, partially because I was influenced by the ill-fated Japanese 5th generation project. A few weeks ago I pulled my old Prolog code experiments from an old repo. Prolog is a great language for some applications, and SWI-Prolog has some great libraries and sample programs.

I also have used Common Lisp a lot since the 1980s and I am in the process of working through a few of the classic text books, and I have it on my schedule to update my own CL book with a second edition.

norswap 2 days ago 0 replies      
If you want to exercise your Prolog-foo: https://github.com/norswap/prolog-dry
bschwindHN 2 days ago 0 replies      
I once wanted to solve a problem that is perfect for Prolog, but I wanted it in Clojure. Turns out there's a great library for that! I don't think it has the full power of Prolog (I know of Prolog and what it does, but I've never used it), but for integer constraint programming it was a joy to work with.


zodiac 2 days ago 2 replies      
I wrote some Prolog for a PL class recently and had to debug some cases of nontermination caused by the depth-first-search unification algorithm. I was wondering why Prolog (or some other logic programming language) couldn't use breadth-first search instead, to avoid those cases, but couldn't find answers online. Perhaps someone here who knows Prolog better has an answer?
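Not an authoritative answer, but the usual one: plain BFS is complete yet its frontier can grow exponentially in memory, so standard Prolog keeps DFS and leaves completeness to the programmer. A common workaround is iterative deepening, sketched here as a depth-bounded meta-interpreter (assumes SWI-Prolog for between(1, inf, D), and predicates visible to clause/2):

```prolog
% solve/2 fails, instead of looping, once the depth budget D is used up.
solve(_, true).
solve(D, (A, B)) :- solve(D, A), solve(D, B).
solve(D, Goal)   :-
    D > 0,
    D1 is D - 1,
    clause(Goal, Body),
    solve(D1, Body).

% Iterative deepening: retry with depth 1, 2, 3, ... This explores the
% search tree level by level (BFS-like completeness) while only ever
% holding one DFS branch in memory.
solve_id(Goal) :- between(1, inf, D), solve(D, Goal).
```

The cost is re-exploring shallow levels on every restart, which in practice is a small constant factor because the deepest level dominates the work.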
nwatson 2 days ago 2 replies      
The Japanese government spent US $400 million in the '80's (a lot in those days) to try to jump ahead of "western" computer technology via its "5th Generation Project". https://news.ycombinator.com/item?id=14047780

The basis for it all ... Prolog.

pjc50 2 days ago 0 replies      

(Disclaimer: this is a joke referencing prolog's default return when it can't find a solution.)

signa11 2 days ago 3 replies      
Erlang syntax is Prolog-inspired; not sure if that's a good thing though :)
arikrak 2 days ago 0 replies      
Question - is Prolog used in the real world? I.e., are there businesses that use Prolog in production?
hota_mazi 2 days ago 1 reply      
Prolog's mathematical foundation is sound but the devil is in the details, and very soon, you encounter two of Prolog's most glaring flaws that lead to spaghetti code worse than what even BASIC ever produced:

- It's dynamically typed

- The cut operator
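On the cut specifically, the spaghetti risk is real but containable. A small sketch of the discipline that keeps it safe (commit only after the guard, and unify outputs only after the cut):

```prolog
% Without cut: after finding M = X for X >= Y, Prolog still retries
% the second clause on backtracking, doing redundant work.
max_of(X, Y, X) :- X >= Y.
max_of(X, Y, Y) :- X < Y.

% With cut: once the guard X >= Y holds, commit. Unifying the output
% M *after* the cut matters: putting X directly in the head would let
% a query like max_cut(5, 1, 1) fail head unification, slip past the
% cut into the second clause, and wrongly succeed.
max_cut(X, Y, M) :- X >= Y, !, M = X.
max_cut(_, Y, Y).
```

Cuts that only discard provably redundant work ("green" cuts) are harmless; the spaghetti comes from "red" cuts whose removal changes the program's meaning.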

taylorking 2 days ago 2 replies      
       cached 8 April 2017 15:11:01 GMT