hacker news with inline top comments (13 Dec 2016)
PostGraphQL: PostgreSQL meets GraphQL compose.com
152 points by otoolep  4 hours ago   59 comments top 11
jeswin 3 hours ago 10 replies      
I tried applying GraphQL to standard webapp/services projects, and could not get really far. Maybe I'm missing something, help is appreciated.

GraphQL lets consumers (Web UIs, Mobile Apps etc) define the shape of data they want to receive from a backend service. Theoretically, this is awesome since at the time of writing the backend service you don't always know which fields and relationships a consumer might require. The problem arises when you add security. There are plenty of things that a client is not supposed to see, often based on the roles of the requesting user. That means that you can't pass a GraphQL query directly to an underlying DB provider without verifying what is being requested. You'd end up writing about the same amount of code that you'd have written with a standard REST-style (or other) interface.
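The point that the authorization code doesn't disappear can be sketched in plain JavaScript. Everything here is hypothetical: the field names, roles, and `authorizeSelection` helper are illustrative, not part of any GraphQL library.

```javascript
// Hypothetical field-level authorization: each field is re-checked against
// the requesting user's role before data is returned, so a client-written
// GraphQL query can never reach the database unverified.
const FIELD_ROLES = {
  email: ['admin', 'self'],          // only admins, or the owner, see emails
  name: ['admin', 'self', 'guest'],  // names are broadly visible
};

function canSee(field, user, ownerId) {
  const roles = FIELD_ROLES[field] || [];
  if (roles.includes(user.role)) return true;
  // 'self' is a pseudo-role: the requesting user owns the record.
  return roles.includes('self') && user.id === ownerId;
}

// Strip forbidden fields from the selection a client asked for.
function authorizeSelection(requestedFields, user, ownerId) {
  return requestedFields.filter((f) => canSee(f, user, ownerId));
}
```

For example, `authorizeSelection(['name', 'email'], { id: 2, role: 'guest' }, 1)` keeps only `'name'`, which is exactly the kind of per-field filtering a REST endpoint would also have had to implement.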

I also considered using it for server-to-DB data fetches, where my backend web service would query an underlying GraphQL-aware driver for data. I did not find it particularly more expressive than using SQL or an ORM.

One good thing about GraphQL is that it sort of formalises the schema and the relationship between entities. You could perform validations on incoming query parameters and mutations. It also helps clients understand the underlying schema and would serve as excellent documentation. This might be useful, but a REST API also offers some level of intuitiveness, and ORMs (especially in strongly-typed languages) offer some level of parameter and query validation.

These are probably early days, but it'd be nice to see some real-world use cases for GraphQL which lie somewhere between simple todo-list apps and the unique needs of Facebook.

jjn2009 2 hours ago 0 replies      
Is there any way to implement access control logic outside of row security policies in Postgres? Or have any sort of logic in between requests based on what type of request is coming in?

The automagical nature of this software seems great, but any relatively complex application would require not only CRUD manipulation but also side effects to go along with it.

With Express I suppose you could have a middleware fire off beforehand to parse the incoming query, figure out what it is, and take any extra action as necessary, such as denying a request or triggering some side effect of the query. This would, however, be an open-by-default policy for those queries that you have a Postgres schema for but lack the Express JavaScript to parse the incoming request.
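As a rough illustration of that middleware idea, here is a hedged sketch in plain JavaScript. The allowlist, the crude identifier regex, and the `graphqlGatekeeper` name are all invented for illustration; a real implementation would parse the query with a proper GraphQL parser rather than a regex.

```javascript
// Sketch of an Express-style (req, res, next) middleware that inspects the
// incoming GraphQL query text before it reaches the GraphQL handler, and
// denies by default anything touching a field not on the allowlist.
const ALLOWED_FIELDS = new Set(['posts', 'title', 'author', 'name']);

function graphqlGatekeeper(req, res, next) {
  const query = (req.body && req.body.query) || '';
  // Crude tokenizer: pull bare identifiers out of the query string.
  const fields = query.match(/[A-Za-z_][A-Za-z0-9_]*/g) || [];
  const denied = fields.filter((f) => f !== 'query' && !ALLOWED_FIELDS.has(f));
  if (denied.length > 0) {
    res.statusCode = 403;
    return res.end(`Denied fields: ${denied.join(', ')}`);
  }
  next(); // closed by default: only allowlisted fields pass through
}
```

Note this inverts the open-by-default problem described above: an unrecognized field is rejected rather than silently forwarded.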

tommikaikkonen 4 hours ago 1 reply      
Related: https://edgedb.com/

I can't remember where I stumbled upon that, but it implements an object database over PostgreSQL and offers a GraphQL superset for querying. Don't know if it's still in active development, but it looks interesting.

mileycyrusXOXO 4 hours ago 1 reply      
Been lurking HN for years; created an account just to say this. I got into GraphQL shortly after the reference implementation was released, through to a bit after Relay came out. I was so excited for the technology, but unfortunately there was not much out there in terms of middleware, tutorials, support or adoption. I built a few toy apps with it, but work applications were designed as usual and life went on. It is exciting to see GraphQL picking up speed, and I cannot express how happy I am that this exists.
jakewins 3 hours ago 2 replies      
This is super cool - but as others have said, you can't expose this to a public URL, it needs a security model. The security model in PG is, unfortunately, severely outdated.

Now, if you combined this with a good RBAC security model, particularly if you baked that model into the GraphQL -> SQL conversion layer so that it emits SQL queries that operate only on an allowed subset... that'd be very cool.

The security mechanisms in Firebase might serve as an inspiration :)

rkv 1 hour ago 0 replies      
This is awesome!

Lots of confusion here on what GraphQL is actually trying to accomplish. In no way were the authors of GraphQL attempting to replace or enhance SQL with the GraphQL language; they are two completely different things. GraphQL sets out to solve annoyances that people run into while building and maintaining large RESTful services.

boubiyeah 59 minutes ago 1 reply      
For many of us here, GraphQL is a premature optimization that is quite intrusive in the stack. Think hard whether you really need it.
WhitneyLand 4 hours ago 2 replies      
GraphQL seems like a nice idea.

Are there any negatives that might not be obvious from just reading the docs?

zeronone 2 hours ago 0 replies      
I am wondering if there are any implementations that translate Swagger API definitions (or similarly RAML, JSON Schema) to GraphQL. For example, it could be integrated with an API gateway in order to translate GraphQL queries to ordinary REST API calls on the fly.
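A minimal sketch of how such a translation might look, assuming a hypothetical `swaggerPaths` object (keyed by path templates) and an injected `fetchJson` function; this is not any existing library's API.

```javascript
// Derive one GraphQL-style resolver per GET path in a Swagger-like spec,
// so resolving a field means calling the corresponding REST endpoint.
function resolversFromSwagger(swaggerPaths, fetchJson) {
  const resolvers = {};
  for (const [path, spec] of Object.entries(swaggerPaths)) {
    if (!spec.get) continue; // only read operations map cleanly to queries
    // "/users/{id}" -> field name "users", parameterized by id
    const field = path.split('/')[1];
    resolvers[field] = (args) =>
      fetchJson(path.replace(/\{(\w+)\}/g, (_, key) => args[key]));
  }
  return resolvers;
}
```

A gateway built this way would walk an incoming GraphQL selection set and invoke these resolvers, fanning out to ordinary REST calls behind the scenes.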
rojobuffalo 1 hour ago 1 reply      
I'm working on a non-trivial app with this in the stack (React, Relay, PostGraphQL, PostgreSQL). I'm a big fan. The author, @calebmer, is a great guy to work with also.
chime 2 hours ago 0 replies      
This would be a fantastic fit for open/public/research databases.
Ubiquiti all the things: how I finally fixed my dodgy wifi troyhunt.com
156 points by jimmcslim  4 hours ago   99 comments top 32
finnn 3 hours ago 3 replies      
Since we're all saying nice things about Ubiquiti, I feel obliged to point out that they are actively violating the GPL: http://libertybsd.net/ubiquiti/
myrandomcomment 6 minutes ago 0 replies      
So I have posted this before as there has been a ton of Ubiquiti threads.

3-story house:
 - 1 AP AC Pro per floor
 - 1 AP AC Pro in the detached office
 - 1 Switch 16 POE in the house
 - 1 Switch 16 POE in the office (2 x Cat6a between switches in a LAG)
 - 1 Security Gateway 3P
 - 1 Cloud Key

I upgraded all the firmware on the complete system as I was typing this message using their app for the iPad.

Run a FreeNAS Mini in the office 2 x 1g in a LAG.

Run Insteon home automation for lights, plugs, HVAC, cameras, leak & door sensors.

3 TVs with Intel Compute Stick with Kodi plugged into the TV HDMI. Added USB to 1g on the stick and wired to switch. Logitech Harmony remotes (same in every room) for control.

Lots of laptops, phones, pads all wifi. TVs are Wifi for their apps (used Ethernet for Compute Sticks).

Every product here is rock solid and just works (okay Kodi is buggy as is the Windows 10 it runs on).

I love the Ubiquiti gear. We use the APs in the office at work as well. 2 older APs (2 year old models - plan to upgrade soon) with 35+ devices on it at any time (most are developer laptops with lots of traffic to the DC). We use TrueNAS to boot all the servers, etc (FreeNAS commercial version).

So for wifi Ubiquiti is pretty dang good and love FreeNAS (and the IXsystems hardware) if you need a NAS.

For less technical I have recommended Eero to a few people and they all say it is quite good so far.

thaumaturgy 2 hours ago 3 replies      
I'm happy to see Ubiquiti getting some exposure here and on Reddit and other places recently (and also a bit worried that this means they're about to start releasing garbage and/or get bought by Cisco, since that seems to be what happens).

We've been installing and using their gear in deployments all over the place for over 5 years now and it always just works. I think we've had to replace two faulty units in that time, out of, I dunno, several dozens.

Their software is getting better and better too, and their security camera system is IMO way ahead of most of the competition in the same price range (with the exception of a network disconnection issue that has had us and Ubiquiti tech support tearing our hair out for weeks now).

I hate that their management software requires Java though, it can be fiddly and annoying to install and I think one of our techs finally set up a VM specifically for their management software.

uean 1 hour ago 2 replies      
I've taken over deployments of UniFi gear and do not have nice things to say about them. We typically deal in the Aerohive and Meraki world, and I find the Ubiquiti zero-handoff terrible (I see real-world handoff times of up to 8 seconds in some cases), the auto channel selection ineffective, and the band-steering implementation completely broken. Then there's a basic lack of features that, to be fair, I wouldn't expect at this price point (L7 firewall, etc.), but given how aggressive some of the marketing has been about Ubiquiti gear competing with enterprise-grade stuff, you start to miss the enterprise features. It's too bad my experience with them is not good, as I want to like the stuff, but with anything over 3 access points in a typical office environment with competing RF and a need for seamless handoff, this stuff just doesn't cut it.

I'm a network admin for a small managed service provider.

Happy to provide more detail if needed:)

drewmol 2 hours ago 0 replies      
Long time HN listener, first time caller. I've used Ubiquiti products in multiple deployments over the last few years, in both business and residential environments. I have nothing but praise for their hardware quality as well as software features, stability and ease of use. This includes EdgeRouters, PoE switches and UniFi APs (AC-LR and Pro models). Very satisfied customer, and with lots of experience and headaches involving various manufacturers' configuration interfaces and hardware quirks, my only regret is not trying Ubiquiti products sooner!
shmerl 2 hours ago 3 replies      
> I got a variety of responses including that I should install the open source dd-wrt firmware... No, no, a hundred times no...

Silly. If the author doesn't want to follow good advice, too bad for him.

I'm using WRT1900ACS with DD-WRT. It works like a charm.

> if I buy a product then I expect it to work as advertised and not need to implement hacks to keep it alive.

It's not a "hack". It's installing a better quality OS on the device. Again, if the author doesn't want to do that, there is no reason to complain.

nikcub 3 hours ago 0 replies      
The best side effect of buying proper WiFi gear is breaking up what is for most people usually an all-in-one device into devices - modem, router, access point - that are dedicated to each distinct task.

The high-end gear many get for running open firmware can get expensive. It isn't hard to spend less money on dedicated devices and get better performance.

What I spent on an EdgeRouter, AP AC Pro and a good managed switch is less than what the top recommendation from this thread cost: $250 for the Linksys EA8500 vs $130 + $50 for the UniFi setup, plus $50 for a TP-Link switch.

I now want to get a bit fancier with the router, so I'm swapping it out for an eBay-sourced Cisco or similar (I want dual WAN and failover, along with routing some traffic over VPNs). It still costs about the same but is much better (and a setup that scales up to 100+ users).

tbrock 3 hours ago 2 replies      
Ubiquiti is the best. Their stuff rivals and even beats lots of the more expensive stuff at a fraction of the price. It's all very high quality but so cheap that you think they are joking when you see the price.

Edge router: amazing

Edge switch: amazing

AC access points: amazing

I've never tried any of the unifi stuff though.

colanderman 1 hour ago 3 replies      
For the rest of us with budgets of less than a month's rent, I'd recommend Mikrotik. Just as reliable as a Ubiquiti (as in, never needs a reboot), yet still a single box you can set in the corner and forget about if you wish. Or set up a mesh of their $20 units to blanket your three-story house if you're so lucky.

(Nothing against Ubiquiti, which I'm sure is great, but I've been a very happy Mikrotik user for years. Recently updated my main AP to their gigabit (wired and wireless) hAP AC and loving it. I use a second Mikrotik as a fully-bridged repeater, and have an IoT wired+wireless virtual network firewalled off from the rest.)

mahyarm 22 minutes ago 0 replies      
I don't understand why he needed the expensive PoE switches, though. He could have used Cat 6, consumer non-PoE switches & standard $15 PoE injectors close to the switches themselves. He wouldn't get the fancy interface for the switches, but he would have had similar wifi performance.

Or does Ubiquiti require you to use their switches for anything to work properly?

freen 2 hours ago 2 replies      
I'll repeat what I have said on previous threads: if you have a low-RF-noise environment, Ubiquiti is ideal. Anything else, and you need to buy a real system with three radios per node and a smart controller that adjusts channel/power on the fly.
pbarnes_1 2 hours ago 1 reply      
You can get basically this for multiples of $99 if you buy a bunch of Google/Asus OnHubs on eBay and backhaul them over Ethernet with the latest firmware.

It has no knobs whatsoever, but it's as good as Ubiquiti for far less $.

mitchty 2 hours ago 1 reply      
So I just got the AC Pro access point last week. It's fine; about my only complaint is my laptop will constantly drop from ac to n, and I'm not more than 15 feet from the AP.

My phone stays on ac all the time. Not sure how to fix it. That and the stupid Java application needed to configure it were not my favorite onboarding experiences. Seems an OK system but not super great for getting set up. It's definitely enterprise, though.

mmastrac 3 hours ago 1 reply      
I have UniFi AC APs in my house. Expensive, but 100% worth it. They've struck a really great balance between configurability and power on the one hand and ease of use on the other.

I compare it to my Mikrotik switch, which, while able to do pretty much anything I could want, has such a steep learning curve that I ended up just using it as a slightly fancy home firewall/switch.

I'm considering pulling the trigger on the Ubiquiti switches and another three AC units for my house to cover the last few dead zones. It's been one of my favorite purchases. I really want to play around with VLANs for guest networks.

peckrob 3 hours ago 0 replies      
I run UAP-AC-Pros in my house (along with pfSense for a router) and have nothing but good things to say about them. Eliminated the wifi problems I was having and they just work. Rebooted the three I have the other day to install newer firmware after 122 days of uptime. It's nice to have something I don't have to think about much.

And they're just a bit more expensive than a good wifi/router combo. For the features it feels like I'm getting the biggest bargain.

ChuckMcM 3 hours ago 0 replies      
I was in a similar conversation with another friend about networking gear. We both have similar philosophies which are split networks, one guest and one private. The guest network gets things that want to phone home, the private network gets things which are supposed to be on the network, both networks have their outbound accesses logged. Firewall in the router, deep packet inspection with source/destination IPs to identify rogue (or hacked) devices, QoS limits on things that should never get the whole network to themselves.

It is way more complex than one would think it should be, except that we've seen time and time again how crappy network configurations screw up everything. It is also helpful to have historical data when complaining to the ISP. It is also amazing to see the guest network which has given out 60 leases, sure some of those are the phones of people who came over but a lot of them are things that want to be "online".

sandGorgon 2 hours ago 3 replies      
What is a good way to connect routers or access points OVER WIFI? I don't have wired backhaul in my office and it would be fairly cumbersome to build one.

Are there APs with two radios each, one for backhaul and one for service?

csmajorfive 1 hour ago 2 replies      
For those of you well versed in Ubiquiti, what's the recommended approach to connecting two switches that don't have wired backhaul between them? Right now I am using two consumer ASUS routers and one is in "wireless bridge" mode. I don't think the Ubiquiti access points support that model. So what's recommended? The distance between is fairly small but with many walls in between.
alfredxing 1 hour ago 0 replies      
I have a Ubiquiti setup for WiFi at home (though not as complex as in the blog post - just an EdgeRouter X and UniFi AC Lite). It has never gone down once, and performance is the same as on day 1:

 ubnt@ubnt:~$ uptime
  06:16:26 up 140 days, 13:15,  1 user,  load average: 1.08, 1.03, 1.05

rb2k_ 2 hours ago 1 reply      
The only thing I'm slightly sad about with my Ubiquiti setup is that the EdgeRouter PoE doesn't integrate with the controller. It's basically the same hardware as the Security Gateway, but there's no way to manage it :(
im3w1l 2 hours ago 0 replies      
Reads like an ad.
ioquatix 1 hour ago 0 replies      
For something just as awesome but a fraction of the price take a look at MikroTik/RouterBoard Hex GR3 and wAP AC.
Thaxll 3 hours ago 0 replies      
Well, UniFi had huge issues in the past, and by huge I mean there is a 1000+ page thread on the forum about a super buggy firmware that made the APs useless for most devices. As of today I still have issues on my iPhone 6 / Nexus 10 with the pattern "full wifi bars, slow internet".
nodesocket 3 hours ago 2 replies      
Great write up, love all the details. Surprised that nobody is mentioning Cisco Meraki gear in the recent HN networking posts.

My ideal (budgeted) setup is:

 - (1x) MX65 -- 12 GbE with 2 PoE+. PoE+ powers the access points.
 - (2x) MR33 -- 802.11ac Wave 2, powered via MX65.

xupybd 1 hour ago 0 replies      
Ubiquiti is getting some golden press on HN lately.
remir 2 hours ago 0 replies      
Would Open Mesh be a good alternative to his setup?
mrbill 3 hours ago 0 replies      
Love my Edgerouters and UAP access points. Currently running an ERX and a UAP-AC-LR.
calebm 3 hours ago 0 replies      
How does the Enterprise Ubiquiti gear differ from their Amplifi system?
andrewfromx 3 hours ago 0 replies      
Silly question: could https://eero.com/technology work for this guy, much cheaper?
esaym 2 hours ago 1 reply      
I've thought about Ubiquiti in the past, but they are not "stand alone" units right? Meaning I am forced into allowing the units to phone home so I can manage them through Ubiquiti's cloud service? (something I don't want or need for a small home set up)

Last year I bought my wife and me our first set of "smart" phones. Yes, I'm serious; I've been in IT all my life but never felt I needed anything other than a flip phone. But I noticed Samsung selling a Galaxy Core Prime for $90, and I bought my wife the LG Stylo for $180 since I wanted her to have a better camera.

For my home network, my modem runs into a linux box with Shorewall where it is natted/firewalled and split into two subnets.

I've been a fan of the netgear prosafe access points for the last 10 years, as I could always find older models on ebay for cheap.

Until recently I was using a WN203 (2x2 802.11n). For the most part it was just my laptop and a Roku box connected; never had problems. But enter these new smart phones...

Within a few weeks (of buying the phones) I noticed random periods of terrible wifi lag. Looking at the AP's management webpage, I saw that during these periods my wife's phone would be connected at just 1M. I'd tell her to restart her phone and the problem would go away for a day or two. But it kept happening. I wasn't sure what the problem was, but I used it as an excuse to get another access point; I wanted one with 5GHz anyway. I sniped a Netgear WNDAP660 (3x3 802.11n) off eBay, new in box, for $95. They are normally $350 new. Figured that would solve my problems.

To my horror after a few days of having the new WNDAP660 set up, I started getting the same terrible lag and my wife's phone would be connected at 1M again. This time though the WNDAP660, through the web interface had an option to save wifi traffic packets. During the next time of lag, I saved a few minutes of packet captures and opened them in wireshark.

I was surprised to see that even though my wife's phone was connected at 1M, it was not the issue. My phone (the Core Prime) was spamming pwr_mgt request packets, hundreds per second. It was basically using up all the bandwidth. In disgust, I moved everything to the 5GHz band (gave it a different id), and left only my phone on the 2.4GHz. So all was well...

But that was just a couple of months ago. I've since outgrown my Core Prime (which doesn't take much) and bought a Galaxy S6. I turned off the 2.4GHz band on the AP, and now everything (including my new S6) is connected to 5GHz.

And then you guessed it... I was sitting at the kitchen table and noticed lag while trying to browse the net on my laptop. I looked up and noticed the Roku box playing on TV was also stuck loading. I reached over and picked up my new S6 and put it into airplane mode. Instantly all was well on the airwaves. I haven't actually done a packet dump yet, so I don't know if the S6 is spamming pwr mgt requests or not.

But this is really annoying, and I don't know what is at fault. It seems smart phones don't play nice. But I've also caught my Roku box spewing RTS requests, even after rebooting it. I thought it had been hacked or something and was trying to DoS me, but after restarting one of the cell phones all went back to normal. It's as if certain devices don't play well with each other. I mean, in my original lag case, the Core Prime was spamming packets, yet restarting my wife's phone would solve the problem just as well as restarting my phone. Makes no sense...

So I guess if you get random lag on wifi, try turning off a cell phone or two until you find the culprit. And once you find it, then.... Well actually I don't know what you do then. Any tips? lol

swayvil 3 hours ago 0 replies      
I use Ubiquiti gear (security cams). Solid stuff.
Cieplak 2 hours ago 4 replies      
Does anyone know how to tell if a wifi device uses beam-forming versus omnidirectional microwave emissions? I imagine that beam-forming devices expose people to less microwave radiation (2.4 and 5 GHz).

Somewhat alarmist but an interesting perspective:


Listen to world radio by navigating interactive globe radio.garden
111 points by gamma_raj  5 hours ago   24 comments top 13
Sideloader 1 minute ago 0 replies      
Cool! I even like the fake static and squelch... it reminds me of tuning the 80s era shortwave radio I found when I was a kid.
malikNF 1 hour ago 0 replies      
Amazing example of how good UI can make something so interesting and useful.

I have seen plenty of websites that list radio stations from all over the globe, but this interface makes it so much more interesting and fun.

Really well done to the devs.

studiopuckey 2 hours ago 4 replies      
Developer here. We weren't exactly expecting this to become as popular as it did. Just survived being #1 on Reddit, sheesh.

It seems our non-profit Bing Maps key was revoked, so we switched to ArcGIS imagery for now. Too bad; the Bing imagery was really great.

dpitkin 4 hours ago 1 reply      
Two additional global radio places to explore:

a global time machine with http://radiooooo.com/ and streaming with http://tunein.com/radio/regions/

prawn 3 hours ago 0 replies      
Love the concept and simple design. Very cool. Resist those wanting land boundaries added; it will lose some mystery then.

Reminds me of that not-uncommon movie intro implying that aliens are listening to Earth, where the camera zooms in on the planet as random stations and static play.

niij 2 hours ago 1 reply      
I love this! I love the static when tuning, as well as how quickly it starts playing from each station. The only suggestion I have is to add political boundaries so it's easier to see where you are.
jejones3141 3 hours ago 1 reply      
As it appears to me (running Chrome on Linux) it's really hard to go looking for a particular location, since it's just dots on a solid blue browser tab. Is that by design? (Also, it shows stations in Madrid, Spain at a spot that I think is a bit west of Ames, Iowa.)
fernly 1 hour ago 0 replies      
Brilliant concept and execution! I figured it must be every station that has a live stream, but no: my nearest station, KZSU Stanford, has multiple streams and isn't there. And the list of stations in the one green dot in SF is much too short. So... what's the source?
noobermin 4 hours ago 1 reply      
Simply awesome. Ironically, I haven't had a radio in years and I don't own a car, but I found a local radio station I've never listened to before.

The designer's website is a trip too[0].

[0] http://puckey.studio/

Animats 2 hours ago 0 replies      
How did they pick the stations? There are only two in Japan, one of which is playing "I've been Working on the Railroad" in Japanese.
sean_patel 4 hours ago 0 replies      
Wow! This site is pretty amazing. I like how it picked up my location and tuned into a local San Francisco radio station. Then I scrolled the globe and rotated to my Dad's hometown of Bombay (they call it "Mumbai" now) and it zeroed in on a Ghazal station. This one => http://radio.garden/live/mumbai/planetradiocity/

Question to the creator / OP: are the (voice) ads that play injected into the stream? I ask because even though I selected an Indian radio channel, it played a long AT&T "get a GoPhone this holiday season" ad for like 2 minutes, the voice had an American accent, and the address it said to go to was att.com/gophone (which I would think is only for US customers). What gives?

eksurfus 2 hours ago 0 replies      
Love this! Works great and simple design.
bmpafa 4 hours ago 0 replies      
Perfect--now I can bone-up on Russian in time for the inauguration.
Opendoor, a startup worth emulating stratechery.com
127 points by mooreds  4 hours ago   64 comments top 10
jdross 4 hours ago 9 replies      
Co-founder here, I'll do my best to answer questions. The number one question we get is likely "what happens in a downturn/recession?", but that's more of a blog post than a HN answer. Anything more detailed/technical though, shoot!
tommynicholas 4 hours ago 4 replies      
One thing not mentioned: Opendoor absolutely does not have to carry the balance sheet risk of owning the homes forever. They can chop up those assets and sell them to anyone (hedge funds, banks, individual investors, etc) at any point. At scale, they'd be able to do this and still maintain most of the upside.

At some point, Opendoor will be able to let businesses that manage financial risk for a living (i.e. banks) spread the downside risk out over the entire financial system, while Opendoor takes advantage of its customer-acquisition and data advantages to capture the bulk of the upside. It's not a perpetual-risk model if they don't want it to be.

harmmonica 1 hour ago 1 reply      
Have you given thought to opening up your pricing model/algorithm to the public for every home in a market (a la Zestimate/Redfin estimate) as a means of eventually driving seller inventory even if someone is not ready to sell today?

If someone could look up what their place is worth and, unlike with Zillow and Redfin, actually be told explicitly why it's worth that, as opposed to being given a "black box" value, you'd open the top of the funnel by attracting folks who aren't yet ready to sell (today's owner is tomorrow's seller). And since those values change frequently, the not-ready-to-sell owner could track changes to the value and the underlying data driving it, which would keep you at the top of their minds when they're ready to sell down the road.

If you think the pricing model/algorithm is too proprietary to share beyond the "I'm ready to sell today" market, I'll stop by for a coffee to try and convince you otherwise.

Good luck. I'm stoked to see someone trying to fundamentally change the residential market.

Edit: oh, btw, not advocating sharing the actual algorithm, but the results of it (e.g. your house is worth 5 psf less than the comp from 3 blocks over bc your place is directly under the airport's flight path).

free2rhyme214 4 hours ago 4 replies      
The issue a lot of people have with Opendoor is their business model. Only some people in Silicon Valley can raise $320mm to wholesale homes using algorithms. My skepticism is that Opendoor isn't disrupting anything, because they're not Airbnb or Uber. They're not going after unused inventory and taking huge amounts of business from incumbents; they're simply amassing massive amounts of debt like no wholesaler could, and when the economy crashes (and it will, as nothing goes up forever), this is going to backfire on them. And then what?

When the economy goes down, Airbnb will do just fine. More people will want to do short-term hospitality to earn cash, and more people will want to drive for Uber. Opendoor will have to liquidate their debt, which will prove difficult. Also, if I remember correctly, Keith Rabois said he was talking with Peter Thiel and Peter mentioned how rarely real estate has changed. Later Keith helped create (or more precisely, fund) Opendoor. What I wonder is: why hasn't Peter invested in Keith's new company? It's not because he's too busy with Trump's transition team.

OpenDoor isn't a startup you should emulate. I wholly disagree with this article.

WhitneyLand 2 hours ago 1 reply      
The article talked about Opendoor being able to increase liquidity. However a lot of real estate illiquidity is due to people who refuse to sell at market value for irrational reasons.

For example, "I paid X so I won't sell below that", "I added a pool plus 100k in renovation so I should get that much of a premium", "I'll just wait a few months and see if prices get better".

A big function of realtors is to console and convince people to sell at a realistic price.

This behavior makes it harder for OD to improve liquidity, and in fact liquidity could take a hit with no realtor therapy for sellers.

I predict a major issue for Opendoor will be seller sticker shock. Many will balk at paying 8-12% in fees even if it's in their best interest.

Willingness to pay more for convenience or speed is not a constant. See mental accounting from the University of Chicago, or ask any realtor about seller psychology.

hkmurakami 4 hours ago 1 reply      
It's amusing that the article labels this as "flipping" instead of being framed as a market maker function in an illiquid asset class.
mbesto 4 hours ago 2 replies      
> Or, in the next downturn, the entire company might go bust.

> To that end I hope Opendoor succeeds simply so it can be a role model for tech: taking on big risks for big rewards that create real value by solving real problems is the best possible way our industry can create benefits that extend beyond investors and shareholders;

The only reason (OK, one of the big reasons, not the only one) Opendoor exists is that the massive amount of capital deployed has the potential to make investors and shareholders a ton of money. It's an all-or-nothing proposition. Furthermore, I don't think the author has rightly assessed (or assessed at all) the negative externalities of Opendoor's model for the economy. Part of the reason the "too big to fail" problem with banks was exposed in the housing crisis was that too much of the real estate market was tied up in too few organizations (Fannie/Freddie, Wells, etc.), which means that when it crumbles, our economy cannot sustain the burden. If Opendoor's risk-mitigation model is basically to "own more of the market", it means that if/when the market does take a downturn and Opendoor goes bust, it's not just VCs who lose, but a whole lot of homeowners, or even worse, homeowners and taxpayers.

That's hardly a model I'd admire or try to emulate, unless of course, I want to make a buttload of cash (or fail very hard trying).

> Opendoor is creating value as opposed to taxing a few bucks off the top of an existing market or simply trying to be cheap.

Opendoor's arbitrage is no different than an ad network's; they're both exploiting inefficiencies. I'm really having a hard time understanding why it's so much more "benevolent" than is suggested here.

nostromo 4 hours ago 2 replies      
I wish a startup would decrease the transaction costs of buying real estate rather than increasing them.
klochner 4 hours ago 2 replies      
Any concerns about adverse selection?

I'd want to use OpenDoor if their pricing model came in above what I could reasonably get in the market, and would be more likely to wait it out if they came in low.

Mao_Zedang 3 hours ago 1 reply      
Can you IPO so I can invest?
YC's Winter Reading List ycombinator.com
386 points by yurisagalov  12 hours ago   175 comments top 39
Uhhrrr 9 hours ago 5 replies      
This is my favorite passage from Titan:

With a talent for seeing things anew, Rockefeller could study an operation, break it down into component parts, and devise ways to improve it. In many ways, he anticipated the efficiency studies of engineer Frederick Winslow Taylor. Regarding each plant as infinitely perfectible, he created an atmosphere of ceaseless improvement. Paradoxically, the mammoth scale of operations encouraged close attention to minute detail, for a penny saved in one place might then be multiplied a thousandfold throughout the empire. In the early 1870s, Rockefeller inspected a Standard plant in New York City that filled and sealed five-gallon tin cans of kerosene for export. After watching a machine solder caps to the cans, he asked the resident expert: How many drops of solder do you use on each can? Forty, the man replied. Have you ever tried thirty-eight? Rockefeller asked. No? Would you mind having some sealed with thirty-eight and let me know? When thirty-eight drops were applied, a small percentage of cans leaked, but none at thirty-nine. Hence, thirty-nine drops of solder became the new standard instituted at all Standard Oil refineries. That one drop of solder, said Rockefeller, still smiling in retirement, saved $2,500 the first year; but the export business kept on increasing after that and doubled, quadrupled, became immensely greater than it was then; and the saving has gone steadily along, one drop on each can, and has amounted since to many hundreds of thousands of dollars.

Rockefeller performed many similar feats, fractionally reducing the length of staves or the width of iron hoops without weakening a barrel's strength[...]

ericdykstra 3 hours ago 1 reply      
Comments on a couple of the selections:

- Grit - I can see why it's caught on, it's pretty well-written and informative, but it's not one of the stronger books in the genre and I don't think it will stand the test of time. I recommend The Willpower Instinct - by Kelly McGonigal and Peak: Secrets from the New Science of Expertise - by Anders Ericsson in its place.

- The Rent Is Too Damn High - Matt Yglesias is an intellectually dishonest pundit and I recommend staying away from anything he publishes. He's a leading representative of what Nassim Taleb calls the Intellectual Yet Idiot[1]. He deleted 3000 tweets praising Obamacare that look bad in retrospect[2][3]. He's also mentioned directly in the Podesta emails as a pundit to be "cultivated."[4] This article from 2011 [5] points out numerous examples of his sloppy reporting and intellectual dishonesty where he doesn't own his mistakes, deletes critical comments, etc.

1. https://medium.com/@nntaleb/the-intellectual-yet-idiot-13211...

2. https://twitter.com/BuffaloBlueBear/status/79120869059868262...

3. https://twitter.com/JimmyPrinceton/status/791127776388583424 (check the whole thread)

4. https://wikileaks.org/podesta-emails/emailid/31954

5. http://www.chequerboard.org/2011/02/matt-yglesias-the-one-ma...

ThomPete 10 hours ago 2 replies      
Creativity, Inc. is one of the best books I have ever read on creativity, and I have read enough books on creativity to know how much most of them really suck.

It has the added bonus of providing an alternative biography of Steve Jobs which in itself is interesting.

It's much more than a story about Pixar. It's a great insight into some of the very problems you deal with as you build and try to maintain a culture.

I can't recommend it enough.

If you want a peek into the book's content, Ed Catmull did a great talk at Stanford.


siavosh 10 hours ago 4 replies      
My personal picks as I continue to try to understand global economic trends since the 1980's through the prism of the 2008-crash:

The Global Minotaur: America, Europe and the Future of the Global Economy by Yanis Varoufakis [1]

The Price of Inequality: How Today's Divided Society Endangers Our Future By Stiglitz, Joseph E. [2]

The Rise and Fall of American Growth: The U.S. Standard of Living since the Civil War by Robert J. Gordon [3]

1. https://www.amazon.com/Global-Minotaur-America-Economic-Cont...

2. https://www.amazon.com/Price-Inequality-Divided-Society-Enda...

3. https://www.amazon.com/Rise-Fall-American-Growth-Princeton/d...

kashyapc 8 hours ago 0 replies      
Pleasantly surprised to see Lee Child mentioned there.

The Enemy (mentioned in the list) is a prequel, though. But it is special in the sense that it is narrated in first person. Special because (no, not a spoiler) Reacher, the protagonist, doesn't say much, but his internal thinking is described in a very attractive way throughout (so readers naturally long to hear it in first person). The most common sentence you read in the books is: "Reacher said nothing". Heck, it's so common that there's even a book with that phrase as the title; its author shadows Lee Child to investigate what it takes to make such a popular character -- http://www.penguinrandomhouse.com/books/529959/reacher-said-...

FWIW, other books from the author I enjoyed and recommend: Echo Burning, Die Trying, Tripwire, Persuader. (/me fondly recalls reading 17+ books, even saving money as a student 10 years ago to pre-order, until a few years ago; will resist making a comment on the Reacher movies, but makes a sincere plea to read the books first and ignore, as best as you can, the movies).


Related author: Robert Crais (characters: Elvis Cole and Joe Pike).

habosa 5 hours ago 0 replies      
I imagine many people here saw Don Quixote on the list and kept scrolling. Sounds like YC is trying to make you go back to a book you probably gave up on in high school.

The edition pictured, translated by Edith Grossman, is extremely approachable. It uses mostly modern language which makes the original humor of the book really stand out. It's incredible that a book written over 500 years ago can still be funny and engaging. I'd recommend it to everyone, after reading this translation it moved from 'boring book I couldn't finish' to 'one of my favorite novels ever'.

soheil 1 hour ago 0 replies      
I spot checked several of these books on Pirate Bay and they were all well seeded, and many were uploaded today; touché, Hacker News! Not that I would be interested in downloading them or anything.
tomp 9 hours ago 2 replies      
I cannot recommend Manna enough (I assume it's similar to the online version [1]). It portrays a version of the future that I believe is achievable by continuing open-source software development and extending it to (more and more powerful) AI.

[1] http://www.marshallbrain.com/manna1.htm

peller 11 hours ago 0 replies      
Both The Idea Factory and Titan were excellent. Haven't read any of the others yet. I'd recommend reading The First Tycoon[0] before Titan, as chronologically it sets the stage very well for the world Rockefeller rose to power in.

[0] http://www.goodreads.com/book/show/4839382-the-first-tycoon

cocktailpeanuts 8 hours ago 2 replies      
This is interesting, I just went to one of the book's Amazon page and this is what I see: http://i.imgur.com/O6x8eNQ.jpg

It's almost like a horizontal scroll version of this blog post. I guess it's the small sample size.

siavosh 3 hours ago 0 replies      
Regarding Hillbilly Elegy: I read it after another post-election book list recommended it, and found it to be a very mixed bag and generally a disappointment. The valuable part was the depiction of his home life and the disintegration of the social fabric of his community over a few generations driven by unseen but acutely felt economic trends. Very touching and powerful accounts.

The latter half of the book, however, was by far the weakest. There he attempts to recommend fixes for the issue from a libertarian perspective. His beliefs are not surprising given his current employment in a Thiel hedge fund. This part of the book had little insight and seemed to be an ideologically driven argument by, ironically, a newly minted financial elite that his own community distrusts.

wainstead 11 hours ago 0 replies      
Cannot recommend "City of Gold" enough. Great writing and a page-turner. Full disclosure: I knew Jim Krane in the 80s when he played guitar in hardcore punk bands like Starvation Army.
protomyth 11 hours ago 4 replies      
My disagreement with the placement of "Strangers in Their Own Land" is basically that it really is well loved by Mother Jones[1], the NYT[2], etc., but not at all liked by the people it claims to report on[3]. I guess if you hold liberal beliefs and want some reinforcement, then it's a good book, but I would think you probably want to read books by people who actually are the people being talked about.

1) http://www.motherjones.com/politics/2016/08/trump-white-blue...

2) http://www.nytimes.com/2016/09/25/books/review/strangers-in-...

3) https://www.washingtonpost.com/news/book-party/wp/2016/09/01...

yarper 9 hours ago 1 reply      
For those interested, C. S. Forester's Hornblower book(s) became a pretty good TV series[0] starring Battlestar Galactica's own Jamie Bamber!

[0] https://en.wikipedia.org/wiki/Hornblower_(TV_series)

jaseemabid 2 hours ago 0 replies      
Someone added all this into a goodreads list, so that you can mark them 'read later'.


chadcmulligan 9 hours ago 0 replies      
One I'd recommend is The Illusion of Life, the history of animation at Disney. It covers some of the business, the people, and the techniques, and of course the movies, and it's one of the most beautiful books I own. https://www.amazon.com/Illusion-Life-Disney-Animation/dp/078...
dmourati 2 hours ago 0 replies      
I picked up Red Notice from the "library" at a resort in Chile last September for the flight home. What an awesome story. I learned a ton reading it and the book reads like fiction even though it is factual. Highly recommended.
wdages 5 hours ago 0 replies      
Glad to see Shoe Dog on this list, that was one of the most memorable books I listened to this year (the audiobook narration was awesome). It's always interesting hearing the origin story and struggles of a company that was the underdog in the industry for so long, and ended up on top. I thought Phil Knight's account of his journey with Nike was really honest and thoughtful, I had no idea how long it took them to get momentum, or how many times they were on the brink of bankruptcy. Can't recommend this enough, I'm looking forward to reading it a second time next year.
CalChris 11 hours ago 2 replies      
The Grossman translation of DQ is a fine read. I read it in a seminar which is the academic equivalent of a book club. That was awesome. If you get into it, it's a great discussion book.

I'd recommend Marryat's Mister Midshipman Easy over Forester's Mister Midshipman Hornblower. Marryat speaks from authority when he speaks of the sea and of naval warfare. Neither Forester nor Patrick O'Brian sailed.

lintiness 10 hours ago 0 replies      
thanks for including some fiction! so many "data" people miss out on so much because they mistakenly assume something that "didn't happen" can't help or enrich their understanding of what is.
acl2149 10 hours ago 1 reply      
I've read shoe dog. Entertaining fast read but I don't think you should prioritize it unless you're really into sneakers or Nike.
clumsysmurf 9 hours ago 0 replies      
A few new books the YC crowd might like:

"Whiplash: How to Survive Our Faster Future"https://www.amazon.com/gp/product/1455544590

"What the Luck?: The Surprising Role of Chance in Our Everyday Lives"https://www.amazon.com/gp/product/1468313754

"Shrinking the Earth: The Rise and Decline of American Abundance"https://www.amazon.com/gp/product/019984495X

mch82 9 hours ago 0 replies      
Also consider Robert Harris' historical fiction trilogy "Imperium," "Lustrum," and "Dictator" about the life of Cicero and the fall of the Roman Republic under Julius Caesar (assuming it's okay to recommend additions).

The ideas explored in these books are fascinating and could not be more timely. The historical notes are interesting. The reading is fun! "Dictator" is the third book in the trilogy & its Wikipedia page links to the rest: https://en.m.wikipedia.org/wiki/Dictator_(Harris_novel) The narration on the Audible editions is fantastic.

IndianAstronaut 9 hours ago 0 replies      
Not a book, but definitely check out The Economist's Christmas edition. A lot of in depth articles on a wide array of subjects ranging from historical artifacts to road journeys to contemporary life.
miraj 3 hours ago 0 replies      
I think this book is a particularly interesting read, especially considering the U.S. election opera of 2016:

"Infomocracy" -by Malka Order.

+++ some other favorites:

When Breath Becomes Air. -by Paul Kalanithi.

Arkwright. -by Allen Steele.

The God's Eye View. -by Barry Eisler.

Chaos Monkeys: Obscene Fortune and Random Failure in Silicon Valley. -by Antonio Garcia Martinez.

Ego Is the Enemy. -by Ryan Holiday.

evtothedev 11 hours ago 0 replies      
I just finished The Attention Merchants by Tim Wu and I cannot recommend it enough.

It traces the history of advertising and attention capture from billboards through Facebook.

karmicthreat 11 hours ago 1 reply      
Highly recommend Manna. It's short but still worth it.
adamnemecek 9 hours ago 0 replies      
There should be a list like this but more technical
rl3 10 hours ago 0 replies      
>Creativity, Inc.

Ed Catmull, the co-founder of Pixar with Steve Jobs and John Lasseter, on how they built a culture of openness, honesty, self-reflection, and risk-taking that protects new ideas and creativity instead of squashing them. Aaron Epstein

I wonder if it comes with any helpful pointers on how to execute long-term, systematic wage-fixing[0] schemes.

The top one-star review[1] on Amazon sums it up nicely.

[0] http://www.cartoonbrew.com/artist-rights/ed-catmull-on-wage-...

[1] https://www.amazon.com/gp/aw/review/0812993012/R1CW8GBYEH3UQ...

datavirtue 9 hours ago 0 replies      
I ordered the first book on the list about a week ago for my son and me to read over winter. Great minds think alike.
blakes 11 hours ago 0 replies      
I actually pre-ordered Grit based on the excerpt I had read with Pete Carroll. It sounds fascinating but I have yet to read it.
bbcbasic 6 hours ago 0 replies      
It's 36 degrees C here today, so I guess I'll have to wait a bit to read these.
ArlenBales 10 hours ago 10 replies      
One genre that is always missing from HNers' and YC's recommendations is Fantasy.

It feels like most people here read books to acquire knowledge and philosophy to apply to real life.

Most fantasy books are read for entertainment and imagination. There's no hidden message to parse and put toward your next start-up project. That doesn't mean Fantasy books are a waste of time though if they're engrossing and entertaining. That's why I read them.

Some fantasy recommendations:

The Name of the Wind, by Patrick Rothfuss

The Lies of Locke Lamora, by Scott Lynch

The First Law series, by Joe Abercrombie (especially the standalone books #4, #5 and #6)

The Way of Kings, by Brandon Sanderson

minimaxir 11 hours ago 1 reply      
rcavezza 10 hours ago 1 reply      
Rescinded comment

Link to original post: https://news.ycombinator.com/item?id=13117521

tomcam 11 hours ago 3 replies      
overcast 10 hours ago 2 replies      
I guess no book list can ever escape the mediocrity of Neal Stephenson. At least it wasn't Snow Crash this time.
cylinder 9 hours ago 2 replies      
This list took a credibility plunge when Thomas Friedman was spotted on it.
wowsig 1 hour ago 0 replies      
Excellent list! I've added all the books so people can save them to their reading list here on ShelfJoy: http://shelfjoy.com/shelfjoy/wrap-up-your-2016-with-ycs-wint...

The summer reading list is also available here: http://shelfjoy.com/shelfjoy/ycombinators-summer-list-of-201...

Thanks for the wonderful recommendations to everyone at YC!

How Discord handles over a million requests per minute with Elixir's GenStage discord.engineering
293 points by Sikul  11 hours ago   112 comments top 19
jtchang 9 hours ago 3 replies      
The most important part of this article is the concept of back pressure and being able to detect it. It's common in a ton of other engineering disciplines but especially important when designing fault tolerant or load balancing systems at scale.

Basically it is just some type of feedback so that you don't overload subsystems. One of the most common failure modes I see in load balanced systems is when one box goes down the others try to compensate for the additional load. But there is nothing that tells the system overall "hey there is less capacity now because we lost a box". So you overwhelm all the other boxes and then you get this crazy cascade of failures.
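
The feedback signal described here can be modeled, in the simplest case, as a bounded buffer; the sketch below (illustrative only, not Discord's implementation) shows a full queue acting as the "slow down" signal to the producer:

```python
import queue

# Bounded buffer: a full queue is the feedback ("back pressure") that
# tells the producer to slow down or shed load, instead of letting work
# pile up until the consumer falls over.
buf = queue.Queue(maxsize=3)

def offer(item):
    """Try to enqueue; surface back pressure instead of blocking forever."""
    try:
        buf.put_nowait(item)
        return True
    except queue.Full:
        return False  # caller can retry later, drop, or route elsewhere

accepted = [offer(i) for i in range(5)]
# accepted == [True, True, True, False, False]
```

The key point is that `offer` returns the rejection to the caller, so the overload is visible upstream rather than silently cascading.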

jondot 15 minutes ago 0 replies      
Hate to be a party pooper, but I'd like to give people here a more generic mental tool to solve this problem.

Ignoring Elixir and Erlang - when you discover you have a backpressure problem, that is, any kind of throttling (connections or req/sec), you need to immediately tell yourself "I need a queue", and more importantly, "I need a queue that has prefetch capabilities". Don't try to build this. Use something that's already solid.

I solved this problem 3 years ago, with 5M msg/minute pushed _reliably_ without loss of messages, and each of these messages was checked against a couple of rules for assertion per user (to not bombard users with messages, when is the best time to push to a user, etc.), so this adds complexity. Later, approved messages were bundled into groups of a 1000 and passed on to GCM HTTP (today, Firebase/FCM).

I've used Java and Storm and RabbitMQ to build a scalable, dynamic, streaming cluster of workers.

You can also do this with Kafka but it'll be less transactional.

After tackling this problem a couple times, I'm completely convinced Discord's solution is suboptimal. Sorry guys, I love what you do, and this article is a good nudge for Elixir.

The second time I solved this, I used XMPP. I knew there were risks, because essentially I was moving from a stateless protocol to a stateful one. Eventually, it wasn't worth the effort and I kept using the old system.
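
To make the "queue with prefetch" idea concrete, here is a toy in-memory model (a sketch of broker-side prefetch in the spirit of RabbitMQ's basic.qos, not its actual implementation): delivery stops once a consumer has too many unacknowledged messages.

```python
from collections import deque

class PrefetchQueue:
    """Toy model of broker-side prefetch: at most `prefetch`
    unacknowledged messages are outstanding at a time."""

    def __init__(self, prefetch):
        self.prefetch = prefetch
        self.pending = deque()
        self.unacked = 0

    def publish(self, msg):
        self.pending.append(msg)

    def deliver(self):
        """Hand out the next message only if the consumer has capacity."""
        if self.unacked < self.prefetch and self.pending:
            self.unacked += 1
            return self.pending.popleft()
        return None  # consumer saturated: delivery is paused

    def ack(self):
        self.unacked -= 1

q = PrefetchQueue(prefetch=2)
for i in range(5):
    q.publish(i)

got = [q.deliver(), q.deliver(), q.deliver()]  # third delivery is held back
q.ack()                                        # one slot frees up
got.append(q.deliver())
# got == [0, 1, None, 2]
```

A real broker does the withholding for you; the point of the sketch is only that the ack, not the consumer's optimism, is what opens the next delivery slot.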

poorman 10 hours ago 0 replies      
That's awesome and it just goes to show how simple something can be that would otherwise involve a certain degree of concurrent (and distributed) programming.

GenStage has a lot of uses at scale. Even more so is going to be GenStage Flow (https://hexdocs.pm/gen_stage/Experimental.Flow.html). It will be a game changer for a lot of developers.

hotdogs 10 hours ago 1 reply      
"Obviously a few notifications were dropped. If a few notifications werent dropped, the system may never have recovered, or the Push Collector might have fallen over."

How many is a few? It looks like the buffer reaches about 50k, does a few mean literally in the single digits or 100s?

coverband 9 hours ago 10 replies      
Quick serious question: How does this company plan to make money? They're surely well funded[1], but what's their end game?

[1] "We've raised over $30,000,000 from top VCs in the valley like Greylock, Benchmark, and Tencent. In other words, well be around for a while."

pwf 10 hours ago 4 replies      
50k seems like a low bar to start losing messages at. If this was done with Celery and a decently sized RabbitMQ box, I would expect it to get into the millions before problems started happening.
bpicolo 10 hours ago 1 reply      
I love Discord, and love Elixir too, so this is a pretty great post.

Unfortunate that the final bottleneck was an upstream provider, though it's good that they documented rate limits. I feel like my last attempt to find documented rate limits for GCM/APNS was fruitless, perhaps Firebase messaging has improved that?

AgentK20 9 hours ago 2 replies      
Anyone know of equivalent libraries to GenStage for other languages? (Java, NodeJS, etc.)

I'd definitely be able to put to use things like flow limiters and queuing and such, but none of my company's projects use Elixir :(

erikbern 9 hours ago 3 replies      
"requests per minute" is such a useless unit of measurement. Please always quote request rates per second (i.e. Hz).

Makes me think of the Abraham Simpson quote: "My car gets 40 rods to the hogshead and that's the way I likes it!"

mevile 8 hours ago 1 reply      
I spend a lot of time in the PCMR Discord, which is pretty lively. The technology seems to be solid, while the UI has issues (notifications from half a day ago are really hard to find for example on mobile devices). Otherwise I'm on Discord every day and love using the service. I miss some slack features, but the VOIP is very good.
user5994461 8 hours ago 0 replies      
I'd like to say that the official performance unit is the "request per second". And its cousin, the requests per second in peak.

The average per minute only gets to be used because many systems have so little load that the number per second is negligible.

manigandham 3 hours ago 0 replies      
Akka(.NET) or any actor system is a perfect fit for this and brings the same functionality to other languages and frameworks.
dimino 9 hours ago 3 replies      
What is up with Discord? I feel like it's quietly (maybe not so quietly) one of the bigger startups to come out in the last two years.

It seems to have totally taken over a space that wasn't even clearly defined before they got there.

sandGorgon 4 hours ago 0 replies      
How does one achieve this in Celery 4? I remember there was a Celery "batch" contrib module that allowed this kind of batching behavior, but I don't see that in 4.
IOT_Apprentice 3 hours ago 0 replies      
Why not use Kafka for back pressure?
rv11 5 hours ago 0 replies      
Just wondering, what is the difference if I use two kinds of [producer, consumer] message queues (say RabbitMQ) instead of this? Does GenStage being an Erlang system make a difference?
snambi 8 hours ago 2 replies      
A million requests per minute - is this a big deal?
sbov 8 hours ago 1 reply      
Is the number of Push Collectors to Pushers constant or can it vary based upon notification load?
imaginenore 5 hours ago 0 replies      
> "Firebase requires that each XMPP connection has no more than 100 pending requests at a time. If you have 100 requests in flight, you must wait for Firebase to acknowledge a request before sending another."

So... get 100 firebase accounts and blast them in parallel.
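
The 100-pending-requests rule quoted above maps naturally onto a counting semaphore: acquire a slot per request sent, release it when the ack arrives. A sketch (illustrative, with a tiny limit for demonstration; not Firebase's or Discord's actual code):

```python
import threading

class AckWindow:
    """Caps the number of in-flight (sent but unacknowledged) requests."""

    def __init__(self, limit):
        self._slots = threading.Semaphore(limit)

    def try_send(self):
        """True if a slot was free and a request may go out now."""
        return self._slots.acquire(blocking=False)

    def on_ack(self):
        self._slots.release()

win = AckWindow(limit=2)  # the real limit per the article is 100
sent = [win.try_send() for _ in range(3)]  # third request is refused
win.on_ack()                               # one ack arrives
sent.append(win.try_send())                # a slot is free again
# sent == [True, True, False, True]
```

A blocking `acquire()` would instead stall the sender until an ack frees a slot, which is exactly the back pressure the article describes propagating upstream.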

Pharma Execs Arrested Over Fentanyl Racketeering justice.gov
14 points by blawson  2 hours ago   3 comments top 3
jsjohnst 6 minutes ago 0 replies      
I'm glad they put an end to this, but man, I can't help but think there's a serious problem with our government based on how many agencies were involved to do it.
rblatz 16 minutes ago 0 replies      
Interestingly, Insys donated $500k against the marijuana legalization measure (Prop 205) on the ballot in Arizona.


joering2 1 minute ago 0 replies      
[...] said United States Attorney Carmen M. Ortiz

This is the same individual who prosecuted Mr. Aaron Swartz. Gosh, I thought she was long gone!

Instant search for video of NBA plays 3ball.io
63 points by Matetricks  5 hours ago   31 comments top 12
rfrank 4 hours ago 1 reply      
I like the idea, but I'm having some trouble with it. Search results are only from this season, right? A search for Steve Nash showed a Steph Curry highlight as result one for me. More slang-oriented searches like "dunked on" or "dunks on" don't really work either. Dunk/Dunks shows up in results, but nothing beyond that. I was looking for one of my all-time favorite highlights, Baron Davis dunking on Kirilenko [1]. For me this type of thing would be pretty awesome with some sort of stats element. A "visual box score" vs. a "highlights search engine", i.e. show the box score for a game, and clicking on any particular field shows video of the plays that produced those stats.

1. https://www.youtube.com/watch?v=tYpwjB0IzoU

rlau26 5 hours ago 1 reply      
Where is all the footage from? Is this allowed by the NBA? The quality of the footage is quite good.

This seems to be just searching the clip title. For example, if you search 'curry 3' ('curry three' returns nothing), it'll return things like "Curry 2' Finger Roll Layup (6 PTS) (Iguodala 3 AST)" or "Curry REBOUND (Off:0 Def:3)". If it could match the search query with play-by-play data, now THAT'd be cool.

prawn 1 hour ago 0 replies      
Would be great if it could learn which were most likely to be highlight plays based on the number of times they were clicked. And then give those priority in the results list. Also, maybe prioritise a player's involvement in the play.

e.g., if I search "westbrook dunk", I probably don't want the normal dunks first, or the time he passed to someone else who then had the dunk. Show me the great Westbrook dunks first, then the normal ones, then the assists to other dunkers.
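
A ranking like the one suggested here could be a small scoring function over clip metadata; below is a sketch with an assumed schema (the "actor", "participants", and "clicks" fields and the bonus weights are hypothetical, not 3ball's actual data):

```python
def score(clip, player, action):
    """Higher score = shown earlier. Clicks let popular highlights rise;
    bonuses prefer the searched player performing the searched action."""
    s = clip["clicks"]                    # learn from what users watch
    if clip["action"] == action:
        s += 50                           # right kind of play
        if clip["actor"] == player:
            s += 100                      # the player did the dunk himself
        elif player in clip["participants"]:
            s += 20                       # e.g. he only assisted the dunk
    return s

clips = [
    {"actor": "westbrook", "action": "dunk",
     "participants": ["westbrook"], "clicks": 5},
    {"actor": "oladipo", "action": "dunk",
     "participants": ["oladipo", "westbrook"], "clicks": 40},
]
ranked = sorted(clips, key=lambda c: score(c, "westbrook", "dunk"),
                reverse=True)
# Westbrook's own dunk (5+50+100) outranks his assist on one (40+50+20)
```

The weights would need tuning against real click data, but the structure captures both priorities: who performed the play, and what viewers actually watch.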

laxatives 1 hour ago 0 replies      
I tried like 8 queries and none of them yielded a single relevant result. Most of them didn't yield a video at all. Granted, they were difficult/obscure, but didn't make a great impression.

If this needs some special syntax or keywords, you should let the user know.

LAMike 4 hours ago 1 reply      
Searched for "Kobe pass"

0 results lol

nodesocket 1 hour ago 0 replies      
I can't say for sure this is what 3ball is using, but here is an XML file from NBA.com listing videos with associated titles, descriptions, thumbnails, and video URLs.


However it appears outdated since the dates only go up to 8/4/2016.

nodesocket 5 hours ago 1 reply      
I'm a big basketball fan (mostly college). Where are you sourcing the videos? Also, are you manually tagging each video with player names, teams, and keywords or is there some magic happening on the backend?
esturk 2 hours ago 1 reply      
Typed in "5x5", found nothing. However rare of a performance, Draymond pulled it off last season so it should be relatively recent.
derimagia 4 hours ago 1 reply      
For all the people asking where it's from, quick google brought up http://www.nba.com/sitemap_videos_0001.xml and changing it to 0002 seemed to work. It's just an index of something from nba.com and nothing is hosted on the site.
lintroller 5 hours ago 1 reply      
This is a fascinating tool but I can't imagine this will exist much longer after the NBA's lawyers find it.
djKianoosh 5 hours ago 0 replies      
cool stuff...

every letter search becomes a new route.. yikes that is awful for going back

also, I searched for ginobili assist but it found all plays with ginobili and any assist.

good start and the video quality is pretty strong on mobile

failedstoic 4 hours ago 2 replies      
How can I find who made this? Would love to work with him/her on a project.
Sane Thinking About Mental Health Problems srconstantin.wordpress.com
14 points by devinhelton  1 hour ago   2 comments top 2
nether 17 minutes ago 0 replies      
> They are teaching kids not to be kind to sad friends, but to report them to the authorities instead.

I've heard that this is a particularly American attitude, that the mentally ill should be isolated with a clinician before doing anything else in life. Can any non-Americans say what their cultural attitude is? I've also heard that Indian culture is pretty much the opposite in this respect.

orionblastar 51 minutes ago 0 replies      
I grew up with major depression and a major mood disorder. My doctor thought I was autistic, but when she gave me an IQ test I was high functioning, and in 1975 high-functioning autism hadn't been discovered yet (that came later, in the 1990s). So I was diagnosed (misdiagnosed) as depressed. I was picked on by bullies (physical, mental, emotional abuse, etc.), just because I was different in some way, called an oddball because of X or Y, something the other kids didn't have.

I managed to graduate high school and college and was working a good job until I developed schizoaffective disorder in 2001. It is like bipolar with schizophrenic cycles. Less than 0.5% of the population suffers from it and it is very rare so not everyone knows about it.

In 2003 I ended up on disability as I could not find a job and could not hide that I was mentally ill. The ADA says one cannot be discriminated against because of a mental disability or mental illness, but I was called overqualified, or given any other reason to reject me. As I grew older I also faced age discrimination.

I wish I could say it gets better, but it is like fighting with demons in your head to keep the negative thoughts away. The thoughts that say you are worthless, that you will fail, that nobody likes you anymore, that you are past your prime, etc. The medicine helps treat a chemical imbalance, but not all mental illnesses are due to chemical imbalances. Getting a good doctor is hard as well.

I'm 48 and have been suicidal about 14 times in my life. I'm still alive to talk about it, so I survived somehow.

Generation-X is called the suicide generation because of so many of my generation killing themselves or getting suicidal. It is like life is so hard to live, you are on expert mode and struggling just to wake up in the morning and get ready for work, and by the time you get to work you used up most if not all of your mental energy to get up and get ready.


Mental hospitals, as shown in the above link, are not always for helping mentally ill people; some are run like prisons, and they keep you there until your insurance runs out to maximize their profits.

A mental illness is an invisible disability. When companies think of disabled people, they think of someone in a wheelchair, a deaf person, a blind person, someone missing body parts, etc. They never consider a mentally ill person or accommodating them. If the stress is too much and it is making you sick, other employees will do things to you to see how angry they can make you. Because you can't smile due to flat affect paralyzing your face muscles, some employees think you might be up to something because you have a "poker face" and didn't wave back to them in the morning, when really you are too busy fighting demons in your head to see people wave at you.

There is no cure, no magic, it does not go away, you just try to learn skills to cope with it and find ways to screen out negative thoughts and maybe try a different medicine and see if it works better.

I am not 100% recovered, but I am making progress and trying to get back into programming. I am writing this to let people know that there is no magic bullet to kill mental illnesses, and that we are not all violent like those public shooters they call mentally ill; those are sociopaths, and most of us are not. We are just dysfunctional in some way.

To Build a Better Ballot ncase.me
207 points by bpierre  11 hours ago   110 comments top 29
comex 10 hours ago 1 reply      
I suspect Gary Johnson getting chosen by a Borda count isn't as weird/surprising as the number of question marks implies. Caveat: I didn't see a raw distribution of preference orders on the Vox survey; if it's linked somewhere, I missed it. But I think Johnson is effectively acting as a compromise candidate: a lot of Clinton voters really hate Trump and a lot of Trump voters really hate Clinton, enough that they'd both prefer Johnson over the other candidate, even if they don't actually know much about him, or even mildly dislike him. Borda count tends to choose compromise candidates, so that's what you end up with. Now, Johnson's policies are fairly fringe and weird, and so are the other third party candidate's (Stein), but that's because serious candidates don't run on third party tickets that have no chance of winning. If the election was actually conducted using a Borda count, you'd probably see a number of relatively boring centrists running, and one of them would win instead of Johnson - which is a perfectly reasonable result. Well, except that that would also fundamentally change the coverage of the race, so there's no reason to expect the election would go anywhere near the same way overall (it would probably be less polarized), but the point is that to the limited extent the survey results reflect this hypothetical world, they don't cast it in a bad light.
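comex's compromise-candidate point can be sketched with a toy Borda tally. The ballot counts below are invented purely for illustration; the rule itself is standard: a candidate in position i on an n-candidate ballot scores n-1-i points.

```python
from collections import Counter

def borda(ballots):
    """Borda count: position i on an n-candidate ballot scores n-1-i points."""
    scores = Counter()
    for ballot in ballots:
        n = len(ballot)
        for i, cand in enumerate(ballot):
            scores[cand] += n - 1 - i
    return scores

# Two polarized camps that both rank the compromise candidate second,
# plus a small bloc of first-choice compromise voters.
ballots = (
    [["Clinton", "Johnson", "Trump"]] * 48 +
    [["Trump", "Johnson", "Clinton"]] * 47 +
    [["Johnson", "Clinton", "Trump"]] * 5
)
scores = borda(ballots)
print(scores.most_common())  # Johnson tops the tally with only 5% first-choice support
```

With these numbers Johnson totals 105 points against Clinton's 101 and Trump's 94: near-universal second-place support is enough under Borda, which is exactly the compromise effect described above.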

Edit: Also, I like Clinton, but her being the Condorcet winner doesn't mean much in terms of that world either. When almost all of the ballots give the top rank to one of two candidates, whichever of the two gets more votes is the Condorcet winner. But that's only the case because of first-past-the-post, both because there aren't any good third-party candidates (neither centrist nor extremist), and because the structure of the race strongly encourages voters to sign up for one bandwagon or the other.
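The claim that, when almost all top ranks go to one of two candidates, the more-voted of the two is the Condorcet winner can be checked mechanically with a pairwise tally (ballots below are hypothetical):

```python
def condorcet_winner(ballots, candidates):
    """Return the candidate who beats every rival head-to-head, or None."""
    def beats(a, b):
        a_first = sum(1 for r in ballots if r.index(a) < r.index(b))
        return a_first > len(ballots) - a_first
    for c in candidates:
        if all(beats(c, rival) for rival in candidates if rival != c):
            return c
    return None

# Almost all top ranks go to one of two candidates; the more-voted of the
# two then wins every pairwise contest.
ballots = ([["Clinton", "Johnson", "Trump"]] * 51 +
           [["Trump", "Johnson", "Clinton"]] * 49)
print(condorcet_winner(ballots, ["Clinton", "Johnson", "Trump"]))  # Clinton
```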

sandGorgon 24 minutes ago 2 replies      
I'm actually very interested in the mathematical implication of score voting. Because in Indian elections - the world's largest elections - IRV is fairly impractical. We have constituencies with more than 10-15 candidates and the "recalculations" will kill the system. We also have very frequent recounts.

I'm still thinking of how to explain score voting to an illiterate voter. It requires a level of sophistication orders of magnitude beyond a simple checkbox next to a candidate.

lifeformed 3 hours ago 0 replies      
This format of interactive article is really excellent. It's a good example of taking advantage of the medium of the web.
cthor 10 hours ago 3 replies      
When talking about the effect of voting systems on elections, something I think is overlooked is the game-theoretic implications on the candidates.

Take the second diagram in the article and move the candidates around. What's the optimal strategy? To move as close to your opponent as possible, while staying closer to the centre.

Now take the diagram with three candidates, and move them around. What's the optimal strategy? To move as far away from both opponents as possible.

This works even if you consider an n-dimensional space for every conceivable issue. A two-party system encourages both parties to move towards the centre. A three- (or more) party system encourages them to spread out.

tomohawk 7 hours ago 1 reply      
Turkey ballot. Always add a 'none of the above' choice. This is the turkey.

This also makes the ballot less ambiguous.

In the election, if a candidate gets 50% + 1 vote, they win. If not, anyone getting less than the turkey is tossed and cannot run in the runoff. No candidates win? Enroll a new slate.

Yen 9 hours ago 1 reply      
So, the biggest complaint about Instant-runoff voting seems to be that it can lead to a counter-intuitive scenario, where a candidate becomes more popular, but loses the election.

From what I've seen of constructed scenarios that have this situation, they have 'left', 'compromise', and 'right' candidates, with the majority of voters tending to prefer left>compromise>right, or right>compromise>left.

If 'compromise' is the weakest candidate, either left or right ends up winning. But if left or right becomes popular enough, making left/right the weakest candidate, then compromise wins out over both extremes.

Frankly, this actually doesn't seem like that much of a problem to me. If we end up with everyone's second-favorite choice, everyone is at least second-most happy.

For example, in the USA - without commenting on Gary Johnson's politics or experience, I think the USA would have been happier had he been chosen. Half the population is pulling their hair out over Trump. Had Clinton won, the other half would likely feel the same way. Most of the population would be less excited to see Johnson in office than their preferred candidate, but relieved that at least the opposing candidate didn't get in.
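The left/compromise/right scenario above can be run through a small IRV tally (all vote counts invented for illustration): when 'compromise' has the fewest first-place votes it is eliminated first, even though it would beat either rival head-to-head.

```python
from collections import Counter

def irv(ballots):
    """Instant runoff: drop the weakest first-choice candidate each round
    and transfer those ballots to their next surviving preference."""
    ballots = [list(b) for b in ballots]
    while True:
        firsts = Counter(b[0] for b in ballots)
        leader, votes = firsts.most_common(1)[0]
        if votes * 2 > len(ballots):      # someone has a majority
            return leader
        loser = min(firsts, key=firsts.get)
        ballots = [[c for c in b if c != loser] for b in ballots]

# Compromise would win 60-40 against Left and 65-35 against Right
# head-to-head, yet IRV eliminates it first.
ballots = ([["Left", "Compromise", "Right"]] * 40 +
           [["Right", "Compromise", "Left"]] * 35 +
           [["Compromise", "Left", "Right"]] * 13 +
           [["Compromise", "Right", "Left"]] * 12)
print(irv(ballots))  # an extreme candidate, not the compromise, wins
```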

erentz 8 hours ago 3 replies      
Why do these voting comparisons always only discuss electing a single winner? A big part of the change needed is to introduce more proportional representation. Aside from moving strictly to an MMP-style system, one way is to enlarge the districts so that each district elects (e.g.) five representatives using STV. Don't get me wrong, IRV is 10x better than what we have today, but it only makes sense for the president or senate, where there can be only one winner each election.
KingMob 9 hours ago 3 replies      
Kudos to the interactive diagrams, but this seems misinformed about spoiler history.

The most famous "spoiler" was Perot, not Nader. Bush v. Gore was an extremely tight election, but there were third parties that drew more votes from Bush than Gore and were within the margin of error, possibly counterbalancing Nader's effect. Whereas with Perot, he received 19% of the popular vote, most of which was clearly drawn from George Bush, ensuring a Clinton win.

niftich 11 hours ago 1 reply      
In addition to the great overview of voting systems and the policy commentary, the interactive parts -- which they call 'Explorable Explanations [1]' -- are fantastic! The code for this one is on Github [2].

[1] http://explorableexplanations.com/ [2] https://github.com/ncase/ballot

winstonewert 10 hours ago 1 reply      
I think the discussion is severely lacking a proper consideration of how strategic voting affects the systems it likes best. For example, score voting encourages people to exaggerate their preferences, and approval voting makes it non-trivial to figure out whom you should approve.
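The exaggeration incentive in score voting can be shown with a tiny simulation. All numbers below are invented for illustration: one camp scores honestly, the other switches to a min-max strategy, and the winner flips.

```python
def score_winner(ballots):
    """Each ballot maps candidate -> score on a 0-5 scale; highest total wins."""
    totals = {}
    for b in ballots:
        for cand, s in b.items():
            totals[cand] = totals.get(cand, 0) + s
    return max(totals, key=totals.get)

# Honest ballots: 55 voters mildly favor A, 45 voters favor B but still
# rate A fairly highly. A wins 455 to 280.
honest = [{"A": 5, "B": 1}] * 55 + [{"A": 4, "B": 5}] * 45
# The B camp exaggerates (min-max strategy) and flips the result, 280 to 275.
strategic = [{"A": 5, "B": 1}] * 55 + [{"A": 0, "B": 5}] * 45
print(score_winner(honest), score_winner(strategic))  # A, then B
```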
a_imho 20 minutes ago 0 replies      
Sortition, closely followed by direct democracy.

1) is also proven for selecting juries?

stretchwithme 4 hours ago 1 reply      
What we need is a system that lets everybody have representation, both in the legislative and executive branches. Proportional representation is much better at this than winner-take-all elections.

Why do we need to place all this power in the hands of a single person anyway? Switzerland has an executive branch with 7 members from 5 different parties and a presidency that rotates annually.

PR also makes it much harder for lobbyists to influence lawmakers. Candidates don't need to convince everybody in order to represent those who identify with their party. And a candidate that represents one of many parties has to do a good job representing that party in order to keep that job.

We could move the House of Representatives to PR and keep the Senate as is.

quadrangle 8 hours ago 1 reply      
Needs to add the new best-compromise best-of-both-worlds proposal: SCORE RUNOFF per http://www.equal.vote/

That's most of the benefits of score voting (which is acknowledged as superior in the study here) with a runoff stage to address strategic voting.

rrradical 1 hour ago 0 replies      
These sorts of explanations are very neat and educational, I think, but to me they aren't that effective in actually bringing us closer to using a new voting system. All the discussion here is an example of what tends to happen, which is people arguing over what the very best voting system is. But it's so easy to construct arguments for or against any particular system.

I think a better tactic to actually use a new system is to share a vision to the general population of what voting under a new system would actually be like. Once the public at large is in favor of the general idea of moving to a new system, actually picking the best system should be more of an implementation detail.

In other words, walk the voter through a simulated ballot casting and show them what the results of the election might be under such a system.

I gave an effort to do this here: http://asivitz.com/voting/ I'm not sure how well I succeeded, though.

bluecaribou 9 hours ago 1 reply      
Recounting my comment from a previous article[1], comparing all these voting systems is not really the right way to think about it. First, you need to separate the mechanism used to express voter preference (i.e. the ballot design) from the method used to choose the winner. Those are separate things that, in theory, can be mixed and matched to produce various voting systems.

In terms of ballot designs, they are basically all just restricted subsets of the "score" voting ballot. That is, any voter preference that can be expressed in an "approval", "ranked choice", "ranked choice with ties", or traditional "single choice" ballot, can also be expressed with a "score" ballot.

That means every voting system is a "score ballot" system with some restrictions applied to the ballot. So, for example, you could have an election where you allow each voter to choose whichever ballot style they are most comfortable with, and then just interpret every ballot as a score ballot.

There are multiple ways to choose a winner from a set of score ballots. But debating between them is counterproductive to getting better voting systems adopted. Just start off with one that's easy to understand (i.e. "sum of ratings", or "only the first choice counts"), and worry about improving it later.

The important thing is to give the voter the option to use a more expressive ballot. Whichever one they feel most comfortable with. You could even make it so that initially, all ballots are converted to traditional "single choice" ballots for tallying, but let voters know how the vote would have turned out under other evaluation methods (like "sum of ratings" and Schulze). I think voters would quickly realize the value of counting all of their expressed preferences.
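The "every ballot style is a restricted score ballot" idea above can be sketched in code. The ballot-type names and the 0-5 scale here are my own assumptions, chosen only to make the conversion concrete:

```python
def to_score(ballot_type, data, candidates, max_score=5):
    """Interpret any ballot style as a (possibly restricted) score ballot."""
    if ballot_type == "single":       # data: the one chosen candidate
        return {c: max_score if c == data else 0 for c in candidates}
    if ballot_type == "approval":     # data: set of approved candidates
        return {c: max_score if c in data else 0 for c in candidates}
    if ballot_type == "ranked":       # data: list of candidates, best first
        n = len(data)
        scores = {c: 0 for c in candidates}
        for i, c in enumerate(data):
            # Spread ranks evenly over the score range; unranked stays 0.
            scores[c] = round(max_score * (n - 1 - i) / max(n - 1, 1))
        return scores
    if ballot_type == "score":        # data: dict of candidate -> score
        return {c: data.get(c, 0) for c in candidates}
    raise ValueError(ballot_type)

# Every restricted ballot style becomes a score ballot over the same slate,
# so any score-based evaluation method can tally a mixed election.
slate = ["A", "B", "C"]
print(to_score("single", "A", slate))
print(to_score("approval", {"A", "B"}, slate))
print(to_score("ranked", ["B", "A", "C"], slate))
```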


But that is a very cool site. Probably the kind of site the web was intended for, don't you think?

[1] https://news.ycombinator.com/item?id=12950566#score_12952384

ryandvm 10 hours ago 4 replies      
I love the topic of voting systems, but... we can't even get rid of the fucking penny over here. The odds of the U.S. changing its voting methodology are so infinitesimal, I'm having trouble conceptualizing it.
dane-pgp 3 hours ago 0 replies      
There is actually "One Weird Trick" to fix democracy, and it's the voting system not mentioned in the article: Direct and Party Representative (DPR) voting.

At least for parliamentary / congressional elections it is both more proportional than FPTP and simpler to count than all the alternatives mentioned. You simply provide the voters with two ballot papers: one to select a local representative (counted and decided in the same way as a traditional FPTP election), and a second ballot paper where the voter can choose which party they want to have more power at the parliamentary / congressional level.

The trick is that these second votes are totalled across the whole nation and used to calculate the ratio of support for each party nationally, then those ratios are used to normalise the voting power of the representatives in the legislature. So if 10% of MPs elected are from the Triangle Party, with 20% of the national vote, then each MP gets effectively a double vote on bills, relative to a nominal MP with a proportionally correct amount of national support.
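The normalisation step above can be sketched in a few lines, using the comment's own Triangle Party numbers (the second party's figures are invented to make the shares sum to one):

```python
def dpr_weights(seat_share, vote_share):
    """Per-MP voting weight = party's national vote share / its seat share,
    so the legislature's total voting power matches the national vote."""
    return {p: vote_share[p] / seat_share[p] for p in seat_share}

# The comment's example: 10% of MPs elected on 20% of the national vote.
seats = {"Triangle": 0.10, "Square": 0.90}
votes = {"Triangle": 0.20, "Square": 0.80}
print(dpr_weights(seats, votes))  # each Triangle MP carries a double vote
```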

bikamonki 4 hours ago 1 reply      
The voting system isn't broken; democracy is.


To aggravate matters: many elections lately are won by tiny margins, and not all citizens cast a vote. Hence, the majority per se is not really setting the course of nations.

On a side note, I wonder why close results happen. I do not think it is an accident (some that I can remember now that are almost split in half: the popular vote in the 2016 US elections, the yes/no vote for the peace treaty with FARC in Colombia, the 2016 presidential elections in Peru). Maybe there is no pattern either, but it seems odd that when facing an important decision, voters split in half. (My own conspiracy theory is that, given the lack of grey-area options - a ranking effect as proposed by the OP - voters MUST pick a side, and they are manipulated to veer in one direction by smart and sophisticated communications.)

whytheam 7 hours ago 1 reply      
This article leaves out proportional representation and single transferable vote. PR is widely used in many democracies and should be considered for the U.S. Senate. Specifically, mixed-member proportional, which allows each district (in our case, state) to elect one member to the legislature, and then the rest of the seats would be filled to create the closest proportion to the popular support each party has. Single transferable vote, which is actually used in some democracies, would fit the House of Representatives quite well.

Duverger's law tells us that we will not see electoral diversity in the U.S. until we change the way we vote.

donatj 8 hours ago 2 replies      
The cutesy graphs make the incorrect presumption that people's political views are evenly distributed around the political spectrum.
tantalor 11 hours ago 0 replies      
ypeterholmes 8 hours ago 0 replies      
Good piece but seems remiss to have this discussion without also acknowledging the ways in which our electoral system + a complete lack of transparency are also undermining confidence in the system. It's 2017 yet somehow people are waiting 10 hours in line to vote? And then there's no way to verify that your vote counted?
belovedeagle 8 hours ago 3 replies      
I find it truly inspiring how many people suddenly care about democracy, even though "this isn't about the 2016 election". /s

Some might say that any support for a better system is good, despite the motivations, but I disagree. All this support will vanish as soon as our current system chooses the "right" (well, the left) candidate.

asdf1234321 9 hours ago 1 reply      
Another approach: an ordered subset specifying fall-through candidates you would want your vote to go to if the current one has no path to victory. https://drive.google.com/file/d/0B1E-bZdxnzc_eTJvLWtsSGE3Y3c...
obilgic 11 hours ago 2 replies      
Why use a 2D map instead of a 1-dimensional one? The latter seems more practical.
EGreg 6 hours ago 1 reply      
Actually one method is not really debunked: Approval Voting

Can anyone say anything negative about it that FPTP doesn't already have?


serge2k 8 hours ago 1 reply      
> Justin Trudeau, Canada's Cutie-In-Chief, ...


> ...will be moving his nation towards a better voting system in 2017.

well, you know, maybe. If the people want it in their fancy questionnaire. They might not, though, since clearly FPTP is the most effective system now that it has resulted in a Liberal majority.

Finbarr 11 hours ago 0 replies      
Small point of feedback: the header of this site is extremely distracting.
virtuexru 11 hours ago 1 reply      
Why I still won't review for or publish with Elsevier talyarkoni.org
399 points by geospeck  11 hours ago   106 comments top 11
nacc 9 hours ago 6 replies      
I always wonder why some disciplines are more open than others [0]. As someone in biology: the state of publication is very sad. We have to pay thousands to _submit_ a manuscript, and then:

- get rejected right away, or

- the manuscript gets distributed to fellow scientists (who review for free). The reviews get collected and the manuscript rejected, or

- we get a chance to address the reviewers' concerns, resubmit, and get rejected, or

- The editor does some proofreading and publishes the paper behind a paywall. I lose all the rights, and I may need to ask the journal for permission to use part of it in my thesis; otherwise I risk plagiarizing my own writing.

Sometimes when I read preprints in computer science/physics/bioinformatics etc., I feel that in those disciplines researchers are a big happy family, while we biologists are locked in a prisoner's dilemma because we can't communicate. Then we fight each other while the publishing companies sell tickets for others to watch.

[0]: http://www.idea.org/blog/2011/04/04/fees-and-other-business-...

Oatseller 10 hours ago 0 replies      
Elsevier was also recently awarded an "Online peer review and method" patent, which earned it the August 2016 "Stupid Patent of the Month" from the EFF [0].

[0] https://www.eff.org/deeplinks/2016/08/stupid-patent-month-el...

dkarapetyan 10 hours ago 6 replies      
The scientific publishing industry makes no sense to me. It wouldn't take much for a few universities to get together to set up the required infrastructure for sustaining the entire enterprise online with on-demand printing as a last resort. Why they don't do this is the part that makes no sense to me.

What is the value that Elsevier is adding to have the de-facto monopoly on the entire enterprise of scientific publishing in so many scientific disciplines?

jack1243star 9 hours ago 1 reply      
My alma mater will drop its subscription to Elsevier in 2017, stating that Elsevier has been increasing fees by 4% each year.
IshKebab 11 hours ago 4 replies      
Without reading the article, I presume his reason is that they are as close to pure evil as a scientific publishing company can get.
grigjd3 10 hours ago 1 reply      
While I very much abhor the practices of Elsevier, one has to be really careful how one acts to push back. Even without getting into the effect that not interacting with Elsevier has on careers in the biomedical sciences, it's easy to fall into traps with other publishing firms. For instance, PlosOne is a pay-to-publish "alternative". However, the motivations here are backwards. PlosOne makes money for each accepted publication, which limits their motivation to do serious peer review. I'm not saying PlosOne articles are necessarily bad, but the very nature of their publishing model suggests something could be wrong - and it's pretty hard to get recognition for publications through them. I personally prefer when the professional organizations, like APS, handle their own mainstream publishing. The Phys Rev journals have a strong motivation to promote the best research, because it speaks well of the industry overall; and because they are beholden to their membership, they are less likely to engage in the kind of unethical practices used by Elsevier.
lordnacho 10 hours ago 4 replies      
What exactly is preventing someone from doing a big switch? Pure coordination problem?

Are there a lot of academics who would be against moving over?

eva1984 8 hours ago 1 reply      
What value does Elsevier provide compared to something like arxiv.org? Just curious.
MaxfordAndSons 2 hours ago 0 replies      
There's a real gem tucked in here, not specific to Elsevier, but about corporate social misbehavior and its apologists in general:

> For what it's worth, I think the fiduciary responsibility argument (which seemingly gets trotted out almost any time anyone calls out a publicly traded corporation for acting badly) is utterly laughable. As far as I can tell, the claim it relies on is both unverifiable and unenforceable. In practice, there is rarely any way for anyone to tell whether a particular policy will hurt or help a company's bottom line, and virtually any action one takes can be justified post-hoc by saying that it was the decision-maker's informed judgment that it was in the company's best interest.

overcast 10 hours ago 2 replies      
For the rest of us, what is the TLDR of what Elsevier is?
datavirtue 9 hours ago 0 replies      
I ordered the first one on the list last week for my son and me to read. Great minds think alike.
Uber employees used the platform to stalk celebrities and their exes businessinsider.com
200 points by kevcampb  5 hours ago   96 comments top 18
johansch 4 hours ago 8 replies      
This seems like a good place to tell Uber users that the only way of removing your credit card details from your Uber account is to either:

a) plead with Uber's customer service to do so


b) add another payment method (like another credit card)

This, of course, is horribly bad practice. I can only imagine that they arrived at this very peculiar arrangement after extensive A/B testing - Uber has hired plenty of FB folks, and those people tend to be really into that kind of thing. I haven't seen this kind of outright customer hostility from a large Internet company... well, ever, before.

So, no, I'm not surprised that this company is doing other unethical things - it sort of seems interwoven into their DNA.

KuhlMensch 43 minutes ago 0 replies      
> Uber would not give more details on its technical controls. In practice, the security sources said, Uber's policy basically relies on the honor system. Employees must agree not to abuse their access. But the company doesn't actually prevent employees from getting and misusing the private information in the first place, the security sources said.

If true, that is fantastically ludicrous.

It seems I wasn't paying attention in 2014, as this "God view" news passed me by. I will be keeping a closer eye on this as it plays out.

Uber obviously seems to be in a strong position, but going only by this article, Uber might fare poorly in a multi-region privacy-legislation legal battle (war?).

JumpCrisscross 4 hours ago 1 reply      
Don't forget that Uber now requires you to allow them to access your location, even when you aren't using their app [1].

Side note: consider the value to foreign (or domestic) intelligence agencies of this weakly-guarded pot of gold.

[1] http://www.theverge.com/2016/11/30/13763714/uber-location-da...

sidchilling 2 hours ago 2 replies      
I seriously don't understand why the updated Uber app asks to access my location all the time -- as opposed to only when I'm using the app. Not only is it not required but it's a huge drain on the phone's battery, potentially decreasing the battery's life.

Now I'm from a third-world country and can't afford to buy a $1000 phone every year, so I have to be careful with the life of my phone.

The workaround, I found, is to disallow location access for the Uber app and re-enable it only when I use the app. This, however, is a pain, and the Uber app behaves weirdly if I do so (the previous trip does not end until hours after it actually ended).

Very poor UX from Uber, potentially dangerous, definitely unethical. This is definitely a trend -- startups start out caring about their customers, but once they grow big, they become callous and even malicious when it comes to users (I don't ask them to give every customer personal support, but not misusing customers is the least I can expect).

taneq 4 hours ago 4 replies      
A few days ago (on the discussion of the Uber app tracking users' movements after the end of their ride) an Uber employee commented on their data handling: [0]

> Individual users' data is very closely guarded internally. It's immensely difficult to look at user data without specific access. Overwhelmingly, this data is queried in aggregate and fed into machine learning systems. The risk of abuse is exceptionally low.

Obviously this doesn't add up. What gives?

[0] https://news.ycombinator.com/item?id=13085775

sargun 1 hour ago 1 reply      
Let me ask a question of everyone complaining: why not use Lyft? I switched.

Yeah, their prices are a little higher than Uber's, and their wait times are a bit longer, but these are functions of scale.

ProfessorLayton 31 minutes ago 0 replies      
I've begun to regularly ditch apps that I perceive as going downhill, and it's been working out pretty well. Uber, YouTube, Facebook/Instagram, and Twitter all have decent mobile sites.

I can even silo whatever service I want into its own browser to limit tracking, and all location/permissions/etc are all sandboxed by the browser.

A huge bonus is battery life + ad blocking.

firloop 4 hours ago 0 replies      
This article merely re-reports this source: https://www.revealnews.org/article/uber-said-it-protects-you...

Mods should probably change the OP to link there.

Animats 2 hours ago 0 replies      
Operationally, "God Mode" doesn't need to show who the passenger is. It's reasonable to have info about where all the cars are and their status and destination available to everyone involved with dispatching, but passenger identity? Sloppy.

Do they still have "Ride of Glory" detection?

ben_jones 38 minutes ago 0 replies      
I wonder what it's like working for Uber and hearing this story. I imagine people form into two camps: one that doubles down on its loyalty to the company (which could be properly placed, for all we know), and another that becomes a little more suspicious walking into work the next time.
danso 2 hours ago 0 replies      
Creating an audit system and locking down "God" mode seems like something that would save Uber a lot of major headaches down the road. How often do Uber employees need to legitimately track someone's information other than in response to a customer request? I'm guessing about as often as the average Google employee needs access to a specific user's search history, which is to say, fairly rarely.

Without locking down such access, you get incidents like these (and this was even when Google purportedly had strong auditing): http://www.pcmag.com/article2/0,2817,2369188,00.asp

> Google this week confirmed that it fired an engineer who accessed the Gmail and Google Voice accounts of several minors and taunted those children with the information he uncovered.

The public sector has its fair share of these too: http://articles.orlandosentinel.com/2013-01-22/news/os-law-e...

Here's a URL to the plaintiff's declaration: https://www.documentcloud.org/documents/3227535-Spangenberg-...

Lots of tidbits there... including how all payroll information is apparently contained in an 'unsecure Google spreadsheet'
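The audit-plus-lockdown idea danso describes can be sketched minimally (entirely hypothetical: the function names and log fields are invented, and a real system would also enforce role checks and write to tamper-evident storage rather than an in-memory list):

```python
import functools
import json
import time

AUDIT_LOG = []

def audited(fn):
    """Refuse anonymous access: every call must name an actor and a reason,
    and every call is appended to an audit trail before the data is touched."""
    @functools.wraps(fn)
    def wrapper(*args, actor, reason, **kwargs):
        AUDIT_LOG.append({
            "ts": time.time(),
            "actor": actor,
            "reason": reason,
            "call": fn.__name__,
            "args": json.dumps(args),
        })
        return fn(*args, **kwargs)
    return wrapper

@audited
def get_trip_history(user_id):
    # Hypothetical lookup; a real system would hit a datastore and
    # enforce role checks before returning anything.
    return {"user_id": user_id, "trips": []}

# A support agent must state who they are and why they are looking.
get_trip_history("u123", actor="support-42", reason="ticket #9876")
print(AUDIT_LOG[0]["actor"], AUDIT_LOG[0]["call"])
```

Calling `get_trip_history("u123")` without `actor` and `reason` raises a `TypeError`, which is the point: access without an audit record becomes impossible by construction.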

blairanderson 1 hour ago 0 replies      
Vote with your fingers and delete the app
otterley 46 minutes ago 0 replies      
Please don't change the original title. The article is quoting an accusation. No claims have been proved yet.
iblaine 1 hour ago 0 replies      
In my experience, every private company does this and every public company does not, due to SOX compliance.
lamontcg 2 hours ago 0 replies      
Uber seems to be a company founded by sociopaths, with a hiring process that stacks the company with more sociopaths.
logicallee 4 hours ago 2 replies      
I think it's time for the government to give you as many names as you want to give out to companies, and there's no reason for anyone who isn't suing you to know which of your aliases go together. Also, aliases should be shareable, to further conflate things. (Nothing should stop my friends and me from sharing an alias and persona - companies should be legally forced to bend over backwards to enable this, for everyone. For example, Google should be legally forced to allow you to create a new Gmail inbox with a new name in a single click and not have it tied in any way to the old name.) Also, credit card companies should be forced to give you as many cards, in whatever names, as you ask for. Nobody who isn't suing you should have a right to know your true name. They shouldn't even have it on record. If they wanna know something about you, they should ask you.

It works for writers, celebrities, etc. - why not the rest of us?

EDIT to clarify: this is a serious comment, you can read it literally.

droopyEyelids 3 hours ago 10 replies      
Can we make the rule that it's not OK to post negative stories about YCombinator companies on this site?

There are a million places to talk negatively about everything. Here, we're trying to build things. We know no one is perfect. Let's make this place a bastion of positivity instead of negativity.

beedogs 3 hours ago 0 replies      
Did I need another reason to loathe Uber? No, but reasons keep showing up.
Radeon Instinct Optimized Machine and Deep Learning radeon.com
197 points by hatsunearu  14 hours ago   62 comments top 14
slizard 12 hours ago 4 replies      
What's particularly interesting here is that the Fiji card they propose is a very different beast than any of the NVIDIA offerings.

The MI8 card's HBM has a great power and performance advantage (512 GB/s peak bandwidth) even if it's on 28 nm. NVIDIA has nothing that has even remotely comparable bandwidth in this price/perf/TDP regime. None of the NVIDIA GP10[24] Teslas have GDDR5X -- not too surprising given that it was rushed to market, riddled with issues, and barely faster than GDDR5. Hence, the P4 has only 192 GB/s peak BW; while the P40 does have 346 GB/s peak, it has a far higher TDP and a different form factor, and is not intended for cramming into custom servers.

[I don't work in the field, but] To the best of my knowledge inference is often memory bound (AFAIK GEMV-intensive, so low flops/byte), so the Fiji card should be pretty good at inference. In such use-cases GP102 can't compete in bandwidth. So the MI8, with 1.5x the flop rate, 2.5x the bandwidth, and likely ~2x higher TDP (possibly configurable like the P4), offers an interesting architectural balance which might very well be quite appealing for certain memory-bound use-cases -- unless of course those cases also need large memory.

Update: I should have looked closer at the benchmarks in the announcement; in particular, the MIOpen benchmarks [1] showing the MI8 clearly beating even the TitanX-Pascal, which has higher BW than the P40, indicate that this card will be pretty good for latency-sensitive inference as long as stuff fits in 4 GB.

[1] http://images.anandtech.com/doci/10905/AMD%20Radeon%20Instin...
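The memory-bound argument above can be made concrete with a quick roofline-style check. The card figures are rough public numbers I'm assuming (~8.2 TFLOPS fp32 / 512 GB/s for the Fiji-based MI8, ~5.5 TFLOPS / 192 GB/s for the P4), and the GEMV byte count ignores the vector traffic, so treat this as a back-of-the-envelope sketch, not a benchmark:

```python
def attainable_tflops(peak_tflops, bandwidth_gbs, flops_per_byte):
    """Roofline model: achieved rate = min(compute peak, bandwidth * intensity)."""
    return min(peak_tflops, bandwidth_gbs * flops_per_byte / 1000.0)

# fp32 GEMV on an n x n matrix: ~2*n*n flops over ~4*n*n bytes of matrix
# reads, i.e. about 0.5 flops/byte -- far below either card's balance point,
# so the workload sits on the bandwidth roof.
gemv_intensity = 0.5

mi8 = attainable_tflops(8.2, 512, gemv_intensity)  # bandwidth-limited
p4 = attainable_tflops(5.5, 192, gemv_intensity)   # bandwidth-limited
print(round(mi8 / p4, 2))  # the gap tracks bandwidth, not peak flops
```

Both cards land on the bandwidth roof (0.256 vs 0.096 attainable TFLOPS), so for this kind of kernel the ~2.7x advantage comes entirely from HBM bandwidth, which is the comment's point.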

zitterbewegung 13 hours ago 7 replies      
Does anyone use AMD for deep learning in science / industry? All the libraries for deep learning I have seen require CUDA, and NVIDIA is winning by merely being the most popular API. Searching GitHub, it looks like the OpenCL efforts are university assignment projects; see https://github.com/search?utf8=%E2%9C%93&q=opencl+deep+learn...
visionscaper 8 hours ago 0 replies      
There seems to be considerable effort being undertaken to allow TensorFlow to work with OpenCL [0]. Also see [1]. This coincides nicely with the introduction of these AMD cards.

I'm looking forward to the day that Nvidia gets some competition in the GPUs-for-deeplearning market. Further, doing some smaller Deep learning experiments on my MacBook Pro with AMD discrete GPU is another benefit I'm looking forward to ;)

[0] https://github.com/tensorflow/tensorflow/issues/22

[1] https://github.com/benoitsteiner/tensorflow-opencl

rsp1984 12 hours ago 1 reply      
Interesting that NVDA is down almost 4% for the day [1] while AMD is up 3% [2]. Is Wall Street realizing that NVidia is not alone in the ML Hardware space?

[1] https://www.google.com/finance?q=NASDAQ:NVDA

[2] https://www.google.com/finance?q=NASDAQ%3AAMD

LeanderK 11 hours ago 0 replies      
Ahh, what exciting times we live in. Just look at the example applications:

- autonomous vehicles

- autopilot drone

- personal assistant

- personal robots

- ...

i know it's optimistic, but it's not science-fiction.

echelon 11 hours ago 6 replies      
What's a good GPU / setup for someone doing deep learning at home? Does anyone have recommendations?
kyledrake 12 hours ago 0 replies      
One thing that would be interesting is if you could use cards like this for rendering multiple instances of X, for the purpose of running things like WebGL browser screenshotters.

I had to ship out a high-end gamer GPU with a dummy HDMI adapter for this purpose recently. But it's obviously not very efficient. It would also be nice to be able to run multiple screens in parallel, not just one per GPU.

I doubt there will ever be a product for my use case, but one can dream...

That said, these are cool. I think they're lower power than the Nvidia equivalent, but I could be mistaken (I just recall the Tesla models being power hungry.. enough to cause a real problem in a datacenter rack).

raj_m 12 hours ago 2 replies      
I really don't think this will make a dent in CUDA's platform. CUDA has a well-established ecosystem in deep learning, and compatible cards like the Quadro line, coupled with a very mature platform, put it miles ahead of the competition.

That said, I would love to be proven wrong. Healthy competition such as this fosters much better results. Also, CUDA is not without its own issues.

stuckagain 11 hours ago 1 reply      
I can't even look at the press picture without noticing that it's the exact same metal card-slot bracket I had on my IBM PC 35 years ago. They should take a picture of the other end or something.
milesf 10 hours ago 0 replies      
Am I the only one that saw this and immediately thought "mining cryptocurrency"? :)
cordite 11 hours ago 0 replies      
Other than ML applications, what can I write on this?

OpenCL? Something comparable to CUDA? What about utilizing Vulkan?

nickeleres 9 hours ago 0 replies      
Loving the UI - very reflective of their product
ilaksh 11 hours ago 1 reply      
But Keras and Tensorflow still only work on nVidia right?
ipunchghosts 12 hours ago 0 replies      
This really doesn't matter for deep learning. There is a large ecosystem built around CUDA. Unless AMD becomes CUDA-compatible (they are working on it but aren't there yet) and I can install Torch/TF and run it on my AMD GPU, I will stick with NVIDIA.

I am all for choice, but AMD has a lot of catching up to do.

Leonardo Da Vinci Lost Drawing Discovered nytimes.com
93 points by Mz  11 hours ago   12 comments top 3
n0mad01 9 hours ago 3 replies      
Although I am not at all a connoisseur in this area, I know that counterfeits are relatively frequent - especially when we're talking about $15.8 million.

What is especially noticeable to me is the back, which attributes the sheet to Leonardo da Vinci: it is most likely original, but on its own would hardly sell for a price that high.

The front, on the other hand, is almost too perfect - too unbelievable, too good to be true.

tomcam 9 hours ago 1 reply      
What a rush. Leonardo's drawings always look brilliantly vivid and lifelike to me; I find his paintings much less interesting. It's also nice to know that his drawings sometimes look better in photographs, because the photos have been enhanced, while fading is the frequent state of affairs for the originals.
vinchuco 2 hours ago 2 replies      
On mobile, you have to click "show full article" to, well, show the full article. It really irks me that this is not only unnecessary but possibly devious - a dark design pattern:

You want to see what's inside, but you haven't decided if you care; by clicking, however, you're tricked into thinking you do (after all, you are the one who clicked on it, now you're invested - it's not like it was the carefully crafted headline /s).

Do I exaggerate?

Distributed Representations of Sentences and Documents (The Paragraph Vector) [pdf] stanford.edu
52 points by espeed  8 hours ago   7 comments top 5
argonaut 6 hours ago 2 replies      
The second author of this paper was unable to replicate the results.

He was merely advising the first author, who actually wrote the code. Source (requires Google login): https://groups.google.com/forum/#!topic/word2vec-toolkit/XC7... and https://groups.google.com/forum/#!msg/word2vec-toolkit/Q49FI....

This highlights something people on HN don't appreciate about machine learning: how hard it is to actually trust results, and how likely it is that results were affected by bugs in the code or in how the dataset was handled. In this case the second author was only able to replicate the results if he didn't shuffle the dataset. Graduate students almost never write tests for their code.
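A toy illustration of the testing point: a shuffle bug of the kind described (features and labels permuted independently) is cheap to catch with a one-line assertion. This is a generic sketch, not the paper's actual code:

```python
import random

def shuffle_dataset(features, labels, seed=0):
    """Shuffle examples while keeping each (feature, label) pair intact."""
    paired = list(zip(features, labels))
    random.Random(seed).shuffle(paired)   # one permutation applied to both
    shuffled_x, shuffled_y = zip(*paired)
    return list(shuffled_x), list(shuffled_y)

# The test that catches the bug: pairing must survive the shuffle.
x = ["doc_a", "doc_b", "doc_c", "doc_d"]
y = ["pos",   "neg",   "neg",   "pos"]
sx, sy = shuffle_dataset(x, y)
assert set(zip(sx, sy)) == set(zip(x, y)), "shuffle broke feature/label pairing"
```

Shuffling features and labels with two independent `random.shuffle` calls would pass a length check but fail this pairing assertion immediately.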

espeed 6 hours ago 0 replies      
For background on "Distributed Representations" as cited in the paper, see:

Distributed Representations (1986) http://stanford.edu/~jlmcc/papers/PDP/Chapter3.pdf

"Each entity isrepresented by a pattern of activity distributed over many computingelements, and each computing element is involved in representingmany different entities."

Full Book: http://stanford.edu/~jlmcc/papers/PDP/

mitbal 2 hours ago 0 replies      
Has anybody tried this algorithm against a simpler strategy, like averaging word vectors, for a document classification task? Or compared it to using a pre-trained skip-thought sent2vec model?
bglazer 5 hours ago 0 replies      
How does this compare to LSTMs for sentence embedding?
mining 5 hours ago 0 replies      
I experimented with gensim's implementation of doc2vec this year; despite not being able to achieve similar results in sentiment analysis (because of the unshuffled datasets in the original paper), it's still really impressive. I analysed some document relations in Wikipedia, and it finds some really unusual / neat relationships, e.g. Autism - Cat + Dog ~= ADHD.
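Those relationship queries are just vector arithmetic plus cosine ranking over the learned document vectors: compose query = v(Autism) - v(Cat) + v(Dog) and rank the remaining documents by similarity to it. A self-contained sketch with tiny made-up vectors (the 3-d numbers are invented for illustration; real doc2vec vectors have hundreds of dimensions and come from the trained model):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Tiny made-up "document vectors" -- purely illustrative.
vectors = {
    "Autism": [0.9, 0.1, 0.3],
    "ADHD":   [0.8, 0.2, 0.9],
    "Cat":    [0.1, 0.9, 0.1],
    "Dog":    [0.1, 0.8, 0.7],
    "Banana": [0.0, 1.0, 0.0],
}

# query = Autism - Cat + Dog, then rank the remaining docs by similarity.
query = [a - c + d for a, c, d in
         zip(vectors["Autism"], vectors["Cat"], vectors["Dog"])]
candidates = [k for k in vectors if k not in ("Autism", "Cat", "Dog")]
ranked = sorted(candidates, key=lambda k: cosine(query, vectors[k]),
                reverse=True)
print(ranked[0])  # "ADHD" with these toy numbers
```

gensim exposes exactly this kind of positive/negative similarity query on its trained models; the sketch above only shows the arithmetic underneath.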
Ask HN: Hiring managers, what tech skills will you be hiring for in 2017?
162 points by changeseeker  7 hours ago   72 comments top 24
meritt 1 hour ago 2 replies      
Intelligence. Tenacity. Ambition. Judgement.

Bonus points for recognizing the bullshit parade that is the current startup world. E.g.: NodeJS has value, but it's mostly the same wheel we've had for 20+ years. Or that MongoDB's changelog has consisted of standard SQL features for the past five years, and that pgsql would have been just fine (had people read some Boyce-Codd anyhow).

eganist 6 hours ago 3 replies      
So given that I will likely be hiring in the web and mobile application security spaces again next year (I've _somehow_ filled all of my open positions this year; appsec is difficult to fill with external hires), I'm focusing specifically on three skills:

ability to assess tech/architecture risks in apps

experience in devops automation ("secdevops" if you will)

proven skill in communication regardless of depth

The ideal candidate would have all three, but I could settle with any two of these and still be happy.

I am not currently hiring, but I'll gladly keep any CVs I receive and prioritize follow-ups with anyone who reaches out to me directly. Austin/DC for curious souls.


p.s. the web appsec space is in ludicrous demand. If you've got a breaker mindset, you'll probably come out ahead if you read up on it. If you're a developer right now and want to dip into it, I'd suggest: https://www.amazon.com/Web-Application-Hackers-Handbook-Expl...

Trust me, us security folk will thank you. Heck I'd suggest it to non-hackery devs too. It's a good way to find out how us security types see the world.

throwaway95837 5 hours ago 6 replies      
I am 100% owner of an internet company making 7 figure profits annually. I am extremely secretive of my business, almost to the point that some would consider a pathology. However, I will divulge my hiring strategy, because even if everyone uses my method, there will still be many employees for me to choose from.

I look to hire people who just need a job. People who are qualified, but not overly qualified. People I know will depend on the job for a long time, but not looking to make it their lives. Hard workers - getting there on time, but also leaving at the stroke of 5. Ivy league schools are a red flag. Huge resumes are a red flag. These people will constantly question whether every decision is optimal, prod incessantly at company strategy, continuously try to impress, and are always hungry for praise, recognition, and "interesting work." When they get bored after 6 months, they quit and go somewhere else (remember they can easily do so because of their pedigrees), often to a competitor, bringing company secrets with them.

I need someone loyal, who knows how to take orders without question, and is prepared to do the work that needs to be done day in and day out because they want the paycheck. Reading the above, you might think I'm a terribly demanding boss, but using this hiring strategy has produced a 100% employee retention rate and by all accounts we are all quite happy.

alex-mohr 4 hours ago 1 reply      
I manage the Google Container Engine and Kubernetes team in Seattle (we have other sites in Mountain View and Warsaw).

Aside from the obvious interest in building container orchestration systems, I look for a passion to solve real user problems, not only building a piece of tech.

Bonus points for knowing about Docker or containers or clouds or Golang or security.

More points for meeting users where they are. And the most bonus points for leadership and initiative.

We're particularly looking for someone to lead and/or manage our software eng team building security features into Kubernetes and GKE.

ian0 32 minutes ago 0 replies      
Payments company, ~50 people, based in Jakarta Indonesia.

We hope to expand our team in early 2017. We have mainly Java microservices, with some PHP and native apps on the front end. We will likely add to the Java team, in addition to an iOS dev.

Nice atmosphere, nice people. We try to select for people who don't like to be micromanaged (but are still friendly) and assign responsibility not tasks wherever able. Varying degrees of success but overall happy with the approach.

Looking for at least one highly skilled person with Java experience and ideally a fintech background. Not sure the salary would be competitive with SF, but the cost of living is low and it's a great lifestyle (for those who like daily excitement/challenges and learning new cultures). On site. Other roles would likely be unsuitable (read: cheap!) for the HN audience.

gtbcb 3 hours ago 0 replies      
Ability to read code daily, write code when necessary, SQL, understand APIs, and be good in front of enterprise customers. We went through 400 candidates to find a quality Implementation / post-sales Engineer. We're a B2B SaaS company.
chrissnell 2 hours ago 0 replies      
I run Technical Operations at Revinate. We are based in San Francisco but my team is 100% remote. I'm based in a small town in Kansas.

For 2017, I want to hire more engineers with Kubernetes, CoreOS, and Go experience. My team has deep Linux systems administration experience but we've automated ourselves out of most of the day-to-day admin work of yesteryear. Our future hires will be heavily focused on automation. We've already automated builds, testing, deployment, monitoring, and metrics in a Kube/Docker pipeline. I expect to automate load balancing and hardware deployment in 2017. I also expect that we will adapt many of our non-Kubernetes data services for running containerized in Kube.

hobonumber1 6 hours ago 3 replies      
I work at SoundHound as a Senior Software Engineer and take part in interviewing/hiring. Full-stack JavaScript engineers are still in very short supply. Lots of people claim to know JavaScript, but many fall short when working across the stack. When I say full-stack, I mean being responsible for building and managing everything: the web server (NodeJS), the database (Postgres/MySQL), and the front end (usually ReactJS/Flux).

Also backend and data engineering roles (C++/Java/Go/Kafka/etc) are in high demand here.

SoundHound is hiring in SF/Santa Clara/Toronto.

dccoolgai 6 hours ago 1 reply      
I hire frontend and mixed web devs. What I'm looking for is a mature understanding of the web platform from devs at all levels. + basic architecture and good practices for mid-level devs. + business and deep architecture for seniors.

+ for new web platform things like Service Workers, advanced SVG.

Couldn't care less about whatever framework is hot this week.

bsvalley 6 hours ago 3 replies      
People who can be micro-managed and used as resources. People who will do the job.
asher 4 hours ago 0 replies      
I'm at shopkick. We're a mobile app. We hire server, Android and iPhone engineers, but many will move across these platforms. We look for smart generalists. So, although we use Python on our servers, we don't expect server candidates to know Python.

Advice for senior engineers: brush up your practical programming. If you've been in an architect/leadership role, you may be rusty. Make sure you're comfortable on both whiteboard and keyboard.

If you spent the last 5 years writing iPhone apps, we expect you to know iPhone development pretty well. Memory management is the obvious area here.

Be ready to explain the most recent projects on your resume. Think outside the box - if you wrote code to process messages from a black box, how do you think the black box worked? If you consumed JSON messages, how much can you explain of JSON and JSON parsers? Many projects are so narrow in scope that we can't have a meaningful conversation about them, so be prepared to broaden into adjacent areas.

Advice for new grads and early-career engineers: have some solid, non-trivial code on github (or equivalent) and make sure we know about it. Be prepared to discuss it and explain design decisions. Few do this.

This post is my take on the question - what follows is especially subjective and not representative of shopkick:

Don't put stuff on your resume that you don't know. Or, brush up the skills featured on your resume.

Learn a scripting language, especially if you're a server engineer. People who only know Java/C++ are at a big disadvantage if they have to write code in an interview. How big? Turning a 5 minute question into 35 minutes is typical - and it gets worse. One very smart, very experienced man took 45 minutes on such a question. Of course, don't just port Java idioms to Python; learn Python idioms. Good languages are Python/Ruby/Perl. I think a HN reader probably doesn't need to be told this, but just in case. Properly used, scripting languages teach techniques which carry over to compiled languages.
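A hedged illustration of the "ported Java idioms vs. Python idioms" point (a generic example, not one of our interview questions): both functions below do the same thing, but only the second reads like Python.

```python
# Java-flavored Python: manual indexing and accumulator bookkeeping.
def squares_of_evens_ported(nums):
    result = []
    i = 0
    while i < len(nums):
        if nums[i] % 2 == 0:
            result.append(nums[i] * nums[i])
        i = i + 1
    return result

# Idiomatic Python: a comprehension says the same thing in one line.
def squares_of_evens(nums):
    return [n * n for n in nums if n % 2 == 0]

assert squares_of_evens_ported([1, 2, 3, 4]) == squares_of_evens([1, 2, 3, 4]) == [4, 16]
```

In an interview, candidates who reach for the second form tend to also write cleaner compiled-language code, which is the carry-over effect described above.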

Server engineers should be comfortable with either vi or emacs. And with basic Linux. Personally I find it astounding that a server candidate would be unfamiliar with ls and cat, but it happens.

I hope this is helpful and doesn't sound arrogant.

erichurkman 1 hour ago 0 replies      
Things in our full stack (Python, React, Ansible/AWS, APIs), with a focus on strong front-end engineers interested in mentorship-type roles. There's also a unique-ish role for someone to come help us solve remote office work (think piloting VR or new tech that lets me work closely with someone 1,000 miles away but makes it feel like they're 2 feet away). Focus on security/devops.

We may also need a strong lead for a new business unit, a role akin to "founder lite": you run a business unit with two others, you have your own burn rate, your own P&L, etc. The strongest skill someone can have for it is former-founder experience (aka broad experience doing lots of things, moving quickly, MVPs, etc).

Palo Alto, San Francisco, Seattle.

ryanSrich 6 hours ago 0 replies      
On the frontend side of things I'm looking less for specific framework experience and more for overall programming competency. JavaScript development moves so fast now that it really doesn't make sense to scope your hiring to angular, react, etc.
k1w1 4 hours ago 0 replies      
When I am hiring for Aha! (www.aha.io), one of the key things I look for is people who have shown an interest in software development beyond just a day job. The best candidates are those for whom writing code is a passion - something done for fun rather than just a way to make ends meet.

This shows up in a resume in lots of different ways. For some people it is a rich Github profile. For others it is that they paid their way through college by building websites or apps.

We primarily hire Ruby on Rails developers who work remotely. Seeing in someone's GitHub profile that they like to contribute to open source and know how to collaborate with other developers is really important.

mtam 3 hours ago 0 replies      
Industry: Enterprise ERP tools and add-ons

- Developers: We use mostly Java, Swift, and JS (Angular 2), but we always look for polyglot developers, full-stack developers, or whatever you want to call someone who sees the language as a means to achieve a goal and not the goal itself.

- DevOps: Deep EC2 knowledge and experience. AWS certification is a plus.

troygoode 6 hours ago 1 reply      
We're an Enterprise B2B SaaS company headquartered in San Francisco.

Our stack is node.js/React/Postgres, so knowing any/all of those is a bonus, but we don't specifically target those skills; instead we look for a diverse, intelligent set of engineers who have either a strong technical background or a newer technical background plus heavy experience in a non-programming field (mathematics, economics, architecture, teaching, customer support, etc.; they all have their benefits). Interest in being "full stack", participating heavily in the product management process (strong opinions loosely held!), and a belief in the critical importance of design & UX (unfortunately still heavily undervalued in the Enterprise space...) are important.

Hiring in San Francisco & Washington, DC by the way.


njay 5 hours ago 0 replies      
At Hipmunk, we'll be looking for someone with machine learning or NLP skills to take Hello Hipmunk (our virtual travel agent) to the next level. Visit hipmunk.com/jobs to learn more.
sanswork 4 hours ago 1 reply      
I am looking to hire a jr. Ruby or Elixir developer in the next few months who doesn't mind cross-training on the job. They will probably be remote, since I live in a small surfing town halfway up the Australian coast at the moment.

Since it's a jr. role, I'm looking more for evidence that they want to learn than for examples of accomplishments.

jasonmacdee 4 hours ago 0 replies      
At JDA our hiring will vary, but my division in Store Operations needs people with SaaS experience, especially with GCP. Angular, REST, DDD, and agile thinking are a bonus. We also need C#, ASP.NET, Web API, and ExtJS for other teams. And we have a few spots needing strong math skills, doing complicated forecasting and scheduling work alongside PhDs.

Interviews have practicals where you work on problems you'll see regularly with skills we expect you to have (like writing code, debugging, and task breakdown). Good communication, pairing skills, quick learning, and taking responsibility for your circumstances stand out.


naakkupoochi 5 hours ago 1 reply      
In 2017 (or from this week :)), I am looking to hire 5 Sr. Dev/Cloud Ops Engineers with competency in AWS (CloudFormation, IAM, etc.), Azure, Chef, Terraform, and general ops. PM me for more info.
JamesBarney 2 hours ago 1 reply      
C# + JS, with React a plus

Oil and Gas

tmaly 5 hours ago 0 replies      
Compliance Technology in the Financial Services industry. Reporting, SOLID, active monitoring, microservices Perl / Go.
damosneeze 3 hours ago 0 replies      
I work at Real World React. We specialize in training engineers on front-end web development, specifically React, Redux, RxJS, and related technologies. We've trained engineers from Twilio, OpenTable, NerdWallet, Tesla, Esurance, and many more. We are based in SF.

Since we also do private consulting and project-based work in addition to our workshops, we have recently got to talking with our clients about helping them get full-time employment. So I think this post is pretty timely and very relevant to us. Here are a few reasons why we think React is important for the job market.

Lots of companies are choosing React for their front end these days. It allows your front-end devs to embrace the full power of JavaScript - no more messing around with jQuery and tons of plugins. Sure, there's a bit of a learning curve, like with all new things. But there is now a large and devoted React community, and it's only growing. A personal friend of mine convinced his boss to scrap their entire app (10,000 lines of jQuery) and rewrite it entirely in React. He was a new hire (and also a great communicator/salesman).

Coding bootcamps are embracing React as well. Since most of these institutions survive year-to-year based on how well their placement numbers are for graduates, they are paying close attention to the trends in development. One could argue that since they are probably more technical than the average recruiter, they may even have a better grip of the pulse. FullStack Academy, of New York and Chicago, recently wrote a blog on why they're moving their curriculum from Angular to React (https://www.fullstackacademy.com/blog/angular-to-react-fulls...). App Academy (SF & NYC) has had React in its curriculum for a number of months (https://www.appacademy.io/immersive/curriculum). And I've personally spoken with alumni of Hack Reactor in SF who said that most students built their capstone project in React (or attempted to).

Is React the best solution? That's arguable, as all things are. It also depends on what you want to accomplish. But for the relevancy of this post -- asking what tech skills people will be hiring for in 2017 -- I would argue that React is going to be one of the top skills. And with that includes...


As far as backend, the top three technologies that we've seen with our clients are:


But of course, all of this is moot without the foundation of strong JavaScript skills. Our students who have strong JS skills pick up React quickly -- those who don't only get confused.

Anyways, if you are skilled in React and other related technologies and you are looking for work, you can always email me: ben at realworldreact dot com with some info about yourself and/or your resume.

lowglow 2 hours ago 0 replies      
Critical thinking above all, ability to solve new problems, ability to deliver applied theories, and ability to ship. Beyond that, we're looking for hardware experience, statistics experience, heavy math skills, digital signal processing, machine learning. Python / C / C++ and ruby for some apps.
Ask HN: Hiring managers what would it take for you to reply to every applicant?
36 points by deedubaya  3 hours ago   27 comments top 9
pytrin 2 hours ago 2 replies      
> If an applicant takes the time to apply to your posting, why not give them a follow up regardless?

I'm not a hiring manager, but as the CTO I do review a lot of resumes incoming for technical positions we are hiring for.

The vast majority of applicants do not appear to be taking any time at all beyond selecting their resume to upload and clicking submit. It doesn't seem like they even read the job requirements, since 90% of them do not meet the minimum requirements we post. Some of them are not even developers, yet they apply for a developer position.

If someone does appear to be relevant and has also included a cover letter relevant to the position, I will respond, regardless of whether they're a fit or not.

For me the biggest pain is the sheer amount of irrelevant submissions, which makes you numb after a while. This is why I don't believe in job postings anymore and mostly do headhunting.

Hope this helps!

hbcondo714 27 minutes ago 0 replies      
I was recently laid off, so I'm on the job hunt. I applied to Snap, Inc. and received this response within 2 weeks of applying:

Dear [First Name],

Thank you for your time and interest in a career at Snap Inc. At this time, our team has decided to evaluate other candidates for the [role]. However, we encourage you to apply in the future for positions matching your goals because our needs change frequently. Thanks again!

Best wishes,
Snap Inc.

They must receive an enormous amount of applicants from all over so even though I didn't make it anywhere in the interview process, I'm appreciative of receiving a response and getting closure.

When I was employed, our HR department used Monster's ATS. They found it difficult to use and didn't bother to inform candidates of their application status.

paradox95 2 hours ago 3 replies      
What kind of reply would you want? Would a simple "no thanks" be enough?

I have been in the situation before where replying to everyone with anything meaningful is simply not feasible. Maybe for a recruiter whose full-time job is that but not for a hiring manager who also has to balance their regular duties as well.

I have spent much more time on the applicant side of things than the hirer side so I understand the goal. It can be frustrating to not get anything. If it is a job you really want you may be inclined to hold everything else off until you hear something just on the hope that maybe they haven't gotten to your resume yet. So a little closure would be nice.

So maybe a better question for you is: what are you trying to accomplish by getting hiring managers to reply to all candidates? Give them closure, or provide feedback? If the former, then maybe a simple "no thanks" will do.

By the way, I am speaking specifically to the scenario where a candidate sends in a resume and doesn't hear anything back. In my opinion, if the hiring manager or recruiter so much as does a phone call, the candidate deserves a clear "no" email at a minimum.

adrianmacneil 1 hour ago 0 replies      
Pro tip: If you want a reply to your application, try to avoid cold emailing hiring managers your resume. Often my inbox has a lot happening, and I'm not inclined to spend time copying your resume into our hiring software unless there is something spectacular about your email or background. Emailing hiring managers out of the blue also will not help you bypass any steps in the hiring process.

By filling out the application form on our website, you load all the information into the form for me, and are guaranteed that a recruiter will follow up on your entry. If you want to send an email to the hiring manager as well to explain why you are so awesome, that's fine, but it's probably not going to help your chances of getting a job any more than just applying.

trevyn 49 minutes ago 1 reply      
If this is how you're applying to jobs, you're doing it wrong.

Target a small handful of companies strongly relevant to your experience and interests, and start informally chatting with people who work there. Ask about the culture. Get coffee. Ask how they like working there. Talk about what you've been working on that's related. Ask some questions about interesting problems they're trying to solve. Be interested and interesting. Points for going straight to an Eng VP or CTO -- even if they don't have the time to talk to you, they'll pass it to one of their underlings who does, and when your VP/CTO tells you to follow up with someone, you do.

The resume should be mostly a formality AFTER they've expressed some interest in your skills and have invited you to formally interview.

And if it doesn't pan out, you've already made personal connections with people there. Get coffee again for feedback.

6nf 2 hours ago 3 replies      
We get 200+ responses to most job postings. 90% or more of those are from candidates that just spam every job ad on the internet with their CV even if they live on the other side of the planet. We can't respond to each of those.

We will respond to everyone that gets past this first round. And if you get a phone / in person interview we will definitely call you back to say 'no sorry'.

jasoncrawford 18 minutes ago 0 replies      
If I were using a system where rejecting a candidate was a one-click operation, and it also sent them a notification, I would click it. That's what it would take--it would have to be that easy. There are too many resumes.

(That's at resume review stage. If a candidate has actually talked to you, including any kind of interview, then they deserve a response, and I do follow up with everyone who gets to that stage.)

SerLava 2 hours ago 1 reply      
I just applied to a remote position posted on HN and some other places.

They sent out a mass email about 3-4 days later saying they had 550 applicants they were trying to sort through- so hold tight basically.

Now I pretty much know I'll get a mass email "no" if they don't decide to interview me. Which is nice.

sean_patel 1 hour ago 2 replies      
> Disclaimer: I'm building https://www.hireloop.io to hopefully bring communication full-circle. I really want to make this less painful.


Goes to 403 Forbidden. At least put something in there???

403 Forbidden

Code: AccessDenied

Message: Access Denied

RequestId: 4XMR36267413GRGBC72

HostId: BGu7DieumfZVCvftdpMIhXeFm2Qyyy2TyJ+P9jpQr3csSyYNIZBoGKhush8nMc4rHSj6+HighM=3p-

All other pages, including Pricing page, work tho ;) https://www.hireloop.io/#pricing

Self-Healing Transistors for Chip-Scale Starships ieee.org
66 points by blackwingbear1  10 hours ago   31 comments top 8
jayajay 7 hours ago 2 replies      
It seems complicated to let our materials get damaged and then try to fix them on the fly. The Earth protects itself from charged particles via a gigantic dipole field. Could the same be done with nano-electronics?

Boards could be designed to generate magnetic fields via embedded current loops. Instead of having a wire connect two components in a straight line (the shortest-path approach), it could meander around, intentionally creating large curls. Since we're talking about scales of 1e-9 m, these fields would probably be pretty strong.
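Whether such fields would really be "pretty strong" can be estimated from the textbook formula for the field at the center of a circular current loop, B = mu0*I/(2r). A sketch with assumed numbers (the 1 uA current and 1 nm radius are illustrative guesses, not values from any real chip):

```python
import math

MU_0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A

def loop_center_field(current_amps, radius_m):
    """Magnetic field at the center of a circular current loop: B = mu0*I/(2r)."""
    return MU_0 * current_amps / (2 * radius_m)

# Illustrative nanoscale loop: 1 uA through a 1 nm radius loop.
b = loop_center_field(1e-6, 1e-9)
print(f"{b:.2e} T")   # ~6.28e-04 T, a fraction of a millitesla
```

For comparison, Earth's surface field is around 5e-5 T, so the tiny radius does win out locally; deflecting energetic cosmic rays over nanometer distances, however, would take vastly stronger fields.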

Now, I don't know too much about superconductors, but vacuum tends to be pretty fucking cold (2K -- surely, lower than the critical temperature for many superconducting materials). It might even be possible to create a Meissner cage around the important components, in a way that protects our components from self-harm, while still protecting them from external charged particles.

Has this theory been tested? After all, it works for the Earth. I am afraid doing that might also be detrimental to our electronic components (unless we can somehow create a diamagnetic cage to selectively protect our components).

scrumper 9 hours ago 1 reply      
"With this nanoship, travel time to Earths nearest neighboring stars is reduced to just 20 years, as the chip can travel at one-fifth of the speed of light. ... But 20 years is a huge step up from the expected deep space travel time were currently capable of. For instance, it would take more than 100 years to get to Alpha Centauri."

So currently we are able to reach Alpha Centauri in 100 years? And now this magical chip can travel at 20% of the speed of light? I'm either a horrible reader or this article left some really important details out!
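The 20-year figure, at least, is just division. Alpha Centauri is about 4.37 light-years away, so at a constant 0.2c the coast time alone works out to:

```python
DISTANCE_LY = 4.37   # Alpha Centauri, in light-years
SPEED_C = 0.2        # cruise speed as a fraction of c

# A light-year per year is c, so years = distance_ly / (speed as fraction of c).
travel_years = DISTANCE_LY / SPEED_C
print(travel_years)  # ~21.85 years, ignoring acceleration and deceleration
```

That lands close to the article's "just 20 years"; where the 100-year baseline comes from, the article indeed never says.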

woofyman 10 hours ago 1 reply      
What would the chip do once it got to the star? Seems pretty useless without a power source, transmitter and an antenna.
astebbin 8 hours ago 3 replies      
Does the technology exist to accelerate a nanoship to "one-fifth the speed of light" - or, perhaps more importantly, to course-correct and slow it down as it nears its destination?
corndoge 2 hours ago 0 replies      
Moon is a hell of a last name for a NASA scientist.
Aaron1011 7 hours ago 0 replies      
I didn't get a clear sense of what exactly 'chip-scale' means from reading the article - presumably on the order of centimeters?
lintiness 10 hours ago 0 replies      
tech could do a lot of things, but this isn't one of them.
mxvzr 10 hours ago 1 reply      
Poorly written title/article (if you can call it that). At least it contains a link to ieee.org with more content [1]. See also Ars Technica [2][3] for some earlier coverage.

[1] http://spectrum.ieee.org/tech-talk/semiconductors/devices/se...

[2] http://arstechnica.com/science/2016/04/breakthrough-starshot...

[3] http://arstechnica.com/science/2016/08/could-breakthrough-st...

First water-wave laser created sciencebulletin.org
29 points by devinp  7 hours ago   7 comments top 2
andrewflnr 6 hours ago 1 reply      
I got really excited about the idea of coherent water waves, but no...
Aaron1011 6 hours ago 1 reply      
> The possibility of creating a laser through the interaction of light with water waves has not been examined, Carmon said, mainly due to the huge difference between the low frequency of water waves on the surface of a liquid (approximately 1,000 oscillations per second) and the high frequency of light wave oscillations (1014 oscillations per second).

I think that's a typo in the article - unless the light actually has a frequency of ~1 kHz, or I misunderstood the article.

Pebble CEO explains why he sold to Fitbit backchannel.com
208 points by steven  11 hours ago   156 comments top 18
teekert 1 hour ago 1 reply      
Am I the only one around here who doesn't care one single bit about the fitness aspects?

My Pebble Time Steel broke (actually just the band) just before these announcements, and I got a refund (spendable only at the company I bought the Pebble from, but that's ok). But I miss it! I miss the notifications; I had gotten completely used to putting my phone somewhere, anywhere within BT range for the duration of the day. 99% of notifications do not require immediate attention (I also strictly filter what is allowed through to the watch), but some do, and the ability to see that on your wrist is gold to me.

For now, sadly, there is no replacement that even comes close. I really want an always-on screen and at least 5 days of battery life.

fudged71 9 hours ago 4 replies      
Pebble crowdfunded as an alternative to venture capital. Their huge success on the platform was an inspiration to so many other hardware startups that it sparked an entire generation of products (in my opinion). Kudos to that.

I still wear my original Pebble. It's reliable with very long battery life, and is one of the very few wearables that works well while wearing gloves.

zem 9 hours ago 6 replies      
> Apple's emphasis on fashion and Pebble's on productivity and third-party innovation were costly detours: the smartwatch market is rooted in health and fitness.

that's depressing :( i was hoping that they had just misexecuted, and someone else would step in and fill the niche of "e-ink watch with long battery life that is geared towards displaying things your phone sends it"; i have no use for fitness tracking and biometrics, and pebble's featureset and reasonably open ecosystem was ideal for me.

the saddest thing is that even buying used pebbles on ebay won't help me much with their servers going offline :(

dstaley 7 hours ago 3 replies      
There's only three things I want in a smartwatch:

1. Resemble an actual, reasonably sized watch.

2. Display notifications from my phone along with their actions (such as marking an email as "Done" in Inbox, or liking a Tweet).

3. Allow me to respond with my voice for notifications that support quick replies.

The Pebble Time Round was great at all three, and (as far as I know) was the only watch to have the features that I wanted in a form factor that resembled a reasonably-sized watch. The only other alternatives currently are Android Wear watches, none of which look like the kinds of watches I like to wear (they're thick, with large bezels and superfluous embellishments).

If I knew for sure that a Pebble Time Round would continue to be useable for the next six months, I'd buy one in a heartbeat, but the uncertainty makes me hesitant to spend the $100+ on one.

edejong 10 hours ago 9 replies      
> Netherlands city of Delft, known more for pottery than technology

Perhaps this is true, but hopefully not any longer among HN readers:

- Antonie van Leeuwenhoek: the world's first microbiologist, made huge improvements to the microscope

- Martinus Beijerinck: discoverer of the virus

- Produces the Nuna, which has won the World Solar Challenge in Australia 5 times

paulcole 9 hours ago 10 replies      
"seller of over two million smartwatches"..."Pebble was losing money, with no profit in sight"

I honestly don't understand this line of thinking. Why not, you know, sell something for more than it costs?

outworlder 6 hours ago 0 replies      
I own a Pebble Time Steel. It is wonderful. People really underestimate the impact battery life has. I get annoyed when I finally have to take it off to charge (about once a week). Heck, right now it's at 10% and I don't really care. It hasn't even told me yet when it's expected to run out.

The screen may not be as gorgeous as a phone screen, but it is on all the time.

It's also very developer friendly. Heck, you can even create watchfaces in JavaScript.

Before the acquisition, I would not trade it for the first generation Apple Watch. Maybe the second one, just maybe, if the community does not find a way to keep the current Pebble devices running.

rebelde 8 hours ago 4 replies      
> [He considered] bringing the company down to 10 people and just seeing what would be next.

Other than the fact that he would need to fire everybody, what is wrong with reducing costs to make it profitable like this? The problem seems to be that there wasn't enough profit to support a staff of 120 people. I can't imagine the VCs would object. They already lost their money.

rilut 2 hours ago 3 replies      
But Citizen had offered them a better deal, hadn't they? $740M rather than $40M. [1]

[1] https://techcrunch.com/2016/11/30/fitbit-pebble/

vvanders 10 hours ago 3 replies      
>... was the company's willingness to keep Pebble's developers and users in the game.

You mean like dropping warranty support and a vague statement about cloud based features degrading over an undisclosed amount of time?

bunderbunder 4 hours ago 0 replies      
I've never owned or used a Pebble, but for me it's poignant that this coincides with my decision to stop wearing a Withings Activité and dust off my trusty old Seiko 5.

This shift was primarily motivated by two factors:

 1. I lost interest in the activity tracking features.
 2. Even worrying about the battery once every 6 months was too much.
And supplemented by a third, which is that the fancy watch wasn't very readable and lacked a second hand. That made it less capable at the main thing I use it for: keeping track of the time.

I looked hard at a Pebble at one point before deciding that, since my phone is almost always in my pocket or on the table in front of me, getting it into position to view information takes only nominally more effort and probably gets me to a place where I can act on whatever the device is telling me much more efficiently. Also, having a non-user-replaceable battery means that the device will only live for so long, and I'm really trying to limit my consumption of disposable technology.

I think that, for now, my most optimistic case for smartwatches is that they're at about the same phase as handheld computing was 15 or so years ago. The technology is really interesting, but there needs to be more technology development and ironing out of subtle details before the idea is quite ready to take over the consumer market.

jessaustin 5 hours ago 0 replies      
ISTM the many screen-on-wrist testimonials on this page confirm a longstanding prediction of mine: the "smartphone" is not the ultimate device form that our children will be using as adults. Instead, every personal item will become part of a personal constellation of devices. Phones will gradually disappear as better (i.e. ubiquitous, low-power, and basically free) wireless networks emerge, and our eyeglasses will talk to our hats which will talk to our shoes which will talk to our wrist devices...
hyperpallium 7 hours ago 1 reply      
> Apple's emphasis on fashion and Pebble's on productivity and third-party innovation were costly detours: the smartwatch market is rooted in health and fitness.

Over the last few years, I noticed store catalogs giving much space to fitbit, and little to pebble (or apple watch).

Perhaps this is a crude exoscope, for viewing outside the bubble/RDF? Like Buffett observing people still actually using American Express, outside Wall Street's gloom.

Those catalogs now include iPhone 5s and Samsung Galaxy S5 alongside the flagships, suggesting "good enough" and flagships have overshot...

Where will the tech, talent and investment go, if smart phones and watches are good enough, and VR/AR is a wash?

lowglow 10 hours ago 3 replies      
Hey word on the street is Fitbit didn't take many of the hardware people from Pebble. (Not sure why?) But if you're a hardware person from pebble reading this then reach out to me, I'd love to take you out for coffee and have a chat! :)
FussyZeus 10 hours ago 11 replies      
The Smartwatch as a concept is a good one, however the current offerings have one of two problems:

Pebble's problem is the lack of features and, specific to iOS users like myself, relative instability and weirdness when it comes to important things like notifications. The monochrome screen is also less attractive, though IMO bearable.

Apple Watch's problem is battery life. It's absolutely unattractive to me to have yet another thing I need to charge once a day. It wins on pretty much everything else but such a low battery life is crippling to this sort of device.

I feel like between the two you have a rough approximation of the laptop offerings of the early '80s. Yes, they did exist and some people used them, but by and large they were terrible for the functions they were built for. I have a feeling that in not even that much time, we'll have proper smartwatches with good integration across platforms, with a screen like Apple's and the battery life of a Pebble, but for now, laying out $250 for what's basically a bleeding-edge prototype is unattractive to the mainstream consumer.

Edit: Question for HN: Would you all consider a Fitbit to be a smartwatch? I mean it's a watch-esque device that does more intelligent things than just a regular watch but I feel like that's more of a wearable monitoring device.

sanj 8 hours ago 0 replies      
Only 30, Migicovsky has plenty of time for future glories. And plenty of leftover watches to keep that time.


basseq 10 hours ago 2 replies      
TLDR: Because it was either that or shutting down (either completely, or with such a massive downsize that it would have been effectively the same thing).

I'd be interested to know what he did get out of the deal. Sold for "south of $40M" and with the statement, "He's not leaving Pebble as a wealthy man."

logicallee 9 hours ago 1 reply      
can someone summarize this for me? (I imagine the obvious actual answer to the headline is, "I'm passionate about having a place to live, eating, travelling and otherwise enjoying the results of 300 years of capitalism and industrialization.")


EDIT: Thanks for the downvote. Can I have the summary, please?

Verifying Software with Timers and Clocks cmu.edu
45 points by heidibrayer  10 hours ago   4 comments top
amckinlay 8 hours ago 2 replies      
I've always wondered how to use formal verification to prove properties of software on 8-bit microcontrollers. There are so many peripherals (PWM/timers/counters, ADC, UART) and complicated configurations like clock prescalers, self-programming, and interrupts.
Resolve simple merge conflicts on GitHub github.com
174 points by moby  10 hours ago   58 comments top 7
pkamb 9 hours ago 6 replies      
On my team I've found that it's incredibly useful to commit the merge conflicts and conflict markers, then immediately resolve the conflicts in the next commit. This gives you one commit that shows exactly how the two branches merged together, followed by a commit that shows exactly how the conflicts were resolved. The resolution commit can then be code reviewed independently for a nice clean view of the conflicts introduced in the merge. It also allows you to easily reset to the merge commit and resolve the conflicts differently.

The standard git workflow (and this github feature) seems to promote resolving the conflicts alongside all of the other changes in the merge working copy. This make me nervous, as there's no way to differentiate the new lines that were introduced to resolve merge conflicts from the thousands of lines of (previously reviewed) code from the feature branch.

If you're not careful, completely unrelated working-copy code and behavior can be introduced in a "merge commit" and neither you or any of your reviewers will notice. "Looks good to me."
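Concretely, the two-commit workflow described above can be sketched like this (the repo, branch, and file names here are hypothetical, made up for illustration):

```shell
# Sketch of the two-commit merge workflow (all names are invented).
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email dev@example.com && git config user.name dev

echo "base" > app.txt && git add app.txt && git commit -qm "base"
git checkout -qb feature && echo "feature change" > app.txt && git commit -qam "feature work"
git checkout -q - && echo "mainline change" > app.txt && git commit -qam "mainline work"

# Commit 1: merge and commit the raw conflict markers as-is.
git merge feature || true            # leaves <<<<<<< markers in app.txt
git add app.txt && git commit -qm "merge feature (conflict markers committed)"

# Commit 2: resolve the markers, so the resolution can be reviewed on its own.
echo "resolved change" > app.txt
git commit -qam "resolve merge conflicts"
git log --oneline
```

The payoff is in review: `git show` on the second commit contains only the resolution, with none of the previously reviewed feature-branch changes mixed in.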

messutied 9 hours ago 1 reply      
So simple, so useful. I wonder whether this feature wasn't already in GitLab, since GitLab seems to be more fully featured.
rosstex 10 hours ago 1 reply      
Finally! This is excellent news.
orivej 9 hours ago 1 reply      
A diff3 conflict-style display would be considerably more useful.
jklein11 7 hours ago 2 replies      
To me this feels like making a commit without unit testing first. When I find a conflict I like to be able to resolve it and then do some unit testing to make sure that my revision didn't miss anything.
mojuba 9 hours ago 1 reply      
I didn't know I could merge on github.com in the first place... where is their merge function, by the way?
Anardo 9 hours ago 2 replies      
Ask HN: Where is AI/ML actually adding value at your company?
334 points by mkrecny  14 hours ago   172 comments top 55
altshiftprtscrn 13 hours ago 8 replies      
I work in manufacturing. We have an acoustic microscope that scans parts with the goal of identifying internal defects (typically particulate trapped in epoxy bonds). It's pretty hard to define what size/shape/position/number of particles is worthy of failing the device. Our final product test can tell us what product is "good" and "bad" based on electrical measurements, but that test can't be applied at the stage of assembly where we care to identify the defect.

I recently demonstrated a really simple bagged decision tree model that "predicts" whether the scanned part will go on to fail at downstream testing with ~95% certainty. I honestly don't have a whole lot of background in the realm of ML, so it's entirely possible that I'm one of those dreaded types who apply principles without fully understanding them (and yes, I do actually feel quite guilty about it).

The results speak for themselves though - $1M/year scrap cost avoided (if the model is approved for production use) in just being able to tell earlier in the line when something has gone wrong. That's on one product, in one factory, in one company that has over 100 factories world-wide.

The experience has prompted me to go back to school to learn this stuff more formally. There is immense value to be found (or rather, waste to be avoided) using ML in complex manufacturing/supply-chain environments.
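For reference, the kind of model the parent describes can be sketched in a few lines of scikit-learn; the features and data below are synthetic stand-ins invented for the example, not the actual scan features:

```python
# Sketch: bagged decision trees predicting downstream test failure from
# acoustic-scan features. Data and feature meanings are made up.
import numpy as np
from sklearn.ensemble import BaggingClassifier  # default base estimator is a decision tree
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each part is summarized as [particle_count, max_particle_size].
X = rng.normal(size=(600, 2))
# Pretend parts with many large particles tend to fail the electrical test.
y = (X[:, 0] + 0.8 * X[:, 1] > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = BaggingClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.2f}")
```

The same pattern scales to real scan features once they're reduced to a fixed-length numeric vector per part.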

sidlls 14 hours ago 2 replies      
The entire product I built over the last year can be reduced to basic statistics (e.g. ratios, probabilities) but because of the hype train we build "models" and "predict" certain outcomes over a data set.

One of the products the company I work for sells more or less attempts to find duplicate entries in a large, unclean data set with "machine learning."

The value added isn't in the use of ML techniques itself, it's in the hype train that fills the Valley these days: our customers see "Data Science product" and don't get that it's really basic predictive analytics under the hood. I'm not sure the product would actually sell as well as it does without that labeling.

To clarify: the company I work for actually uses ML, and I actually work on the data science team. My opinion is that we don't actually need to do these things, as our products could be built without even the most basic of these techniques, but that battle was lost before I joined.

strebler 13 hours ago 2 replies      
We're a computer vision company. We do a lot of product detection + recognition + search, primarily for retailers, but we've also got revenue in other verticals with large volumes of imagery. My co-founder and I both did our theses on computer vision.

In our space, the recent AI / ML advances have made things possible that were simply not realistic before.

That being said, the hype around Deep Learning is getting pretty bad. Several of our competitors have gone out of business (even though they were using the magic of Deep Learning). For example, JustVisual went under a couple of months ago ($20M+ raised) and Slyce ($50M+ raised) is apparently being sold for pennies on the dollar later this month.

Yes, Deep Learning has made some very fundamental advances, but that doesn't mean it's going to make money just as magically!

ekarulf 13 hours ago 3 replies      
Amazon Personalization.

We use ML/deep learning for customer-to-product recommendations and product-to-product recommendations. For years we used only algorithms based on basic statistics, but we've found places where the machine-learned models outperform the simpler models.

Here is our blog post and related GitHub repo:

https://aws.amazon.com/blogs/big-data/generating-recommendat...

https://github.com/amznlabs/amazon-dsstne

If you are interested in this space, we're always hiring. Shoot me an email ($my_hn_username@amazon.com) or visit https://www.amazon.jobs/en/teams/personalization-and-recomme...

jngiam1 12 hours ago 2 replies      
From Coursera - we use ML in a few places:

1. Course Recommendations. We use low rank matrix factorization approaches to do recommendations, and are also looking into integrating other information sources (such as your career goals).

2. Search. Results are relevance ranked based on a variety of signals from popularity to learner preferences.

3. Learning. There's a lot of untapped potential here. We have done some research into peer grading de-biasing [1] and worked with folks at Stanford on studying how people learn to code [2].

We recently co-organized a NIPS workshop on ML for Education: http://ml4ed.cc . There's untapped potential in using ML to improve education.

[1] https://arxiv.org/pdf/1307.2579.pdf

[2] http://jonathan-huang.org/research/pubs/moocshop13/codeweb.h...
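For the curious, the low-rank matrix factorization mentioned in (1) can be sketched with plain NumPy SGD on synthetic data; the real system is, of course, far more involved, and every number below is illustrative:

```python
# Minimal low-rank matrix factorization for recommendations.
# Learn user factors U and item factors V so U @ V.T matches observed ratings.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 20, 15, 3

# Synthetic ratings with known low-rank structure, partially observed.
true_u, true_v = rng.normal(size=(n_users, k)), rng.normal(size=(n_items, k))
ratings = true_u @ true_v.T
mask = rng.random((n_users, n_items)) < 0.5   # which entries we "saw"

U = rng.normal(scale=0.1, size=(n_users, k))
V = rng.normal(scale=0.1, size=(n_items, k))
lr, reg = 0.05, 0.01
obs = np.argwhere(mask)

for epoch in range(50):
    for i, j in obs:
        err = ratings[i, j] - U[i] @ V[j]
        U[i] += lr * (err * V[j] - reg * U[i])   # gradient step on user factors
        V[j] += lr * (err * U[i] - reg * V[j])   # gradient step on item factors

rmse = np.sqrt(np.mean((ratings[mask] - (U @ V.T)[mask]) ** 2))
print(f"observed-entry RMSE after training: {rmse:.3f}")
```

Recommendations then fall out of the learned factors: for user i, rank unseen items j by the predicted score `U[i] @ V[j]`.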

malisper 11 hours ago 2 replies      
One of my coworkers used basic reinforcement learning to automate a task someone used to have to do manually. We have two data ingestion pipelines: one that we ingest immediately, and a second for our larger customers which is throttled during the day and ingested at night. For the throttled pipeline, we initially had hard-coded rate limits, but as we made changes to our infrastructure, the throttle was processing a different amount than it should have been. Sometimes it would process too much, and we would start to see latency build up in our normal pipeline; other times it processed too little.

For a short period of time, we had the hard-coded throttle with a Slack command to override the default. This allowed an engineer to change the rate limit if they saw we were ingesting too little or too much. While this worked, it was common that an engineer wasn't paying attention, and we would process the wrong amount for a period of time. One of my coworkers used extremely basic reinforcement learning to make the throttle dynamic. It looks at the latency of the normal ingestion pipeline and, based on that, decides how high to set the rate limit on the throttled pipeline. Thanks to him, the throttle will automatically process as much as it can, and no one needs to watch it.
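A toy version of that kind of latency-driven throttle might look like the following; the thresholds, step sizes, and back-off policy here are invented for illustration, not taken from the actual system:

```python
# Toy latency-driven rate limiter: ramp the throttled pipeline's rate up while
# the normal pipeline's latency is healthy, back off hard when latency builds.
# All numbers are illustrative.

def adjust_rate(rate, latency_s, target_s=1.0, step=100, min_rate=100, max_rate=10_000):
    """Additive-increase / multiplicative-decrease driven by observed latency."""
    if latency_s > target_s:
        return max(min_rate, int(rate * 0.5))   # latency too high: back off
    return min(max_rate, rate + step)           # headroom: ramp up gently

rate = 1000
for latency in [0.2, 0.3, 0.4, 2.5, 0.8, 0.5]:  # simulated latency samples
    rate = adjust_rate(rate, latency)
    print(f"latency={latency:>4}s -> rate={rate}")
```

This is feedback control rather than full reinforcement learning, but it captures the shape of the problem: the controller only ever observes latency and adjusts one knob.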

The same coworker also used decision trees to analyze query performance. He trained a decision tree on the words contained in the raw SQL query and the query plan. Anyone could then read the decision tree to understand what properties of a query made that query slow. There have been times when we've noticed some queries behaving oddly, such as some queries having unusually high planning time. When something like this happens, we are able to train a decision tree based on the odd behavior we've noticed. We can then read the decision tree to see which queries have the weird behavior.

jakozaur 14 hours ago 2 replies      
At Sumo Logic we do "grep in the cloud as a service". We use machine learning to do pattern clustering, using lines of log text to learn the printf statements they came from.

The primary advantage for the customer is that it's easier to use and faster to troubleshoot with.


antognini 13 hours ago 4 replies      
At Persyst we use neural networks for EEG interpretation. Our latest version has human-level performance for epileptogenic spike detection. We are now working on bringing the seizure detection algorithm to human-level performance.
Flammy 11 hours ago 1 reply      
The startup I'm part of uses ML to predict which end users are likely to churn for our customers.

We work with B2B and B2C SaaS, mobile apps and games, and e-commerce. For each of them, it is a generalized solution customized to let them know which end users are most at risk of churning. The time range varies depending on their customer lifecycles, but for the longest lifecycles we can, with high precision, predict churn more than 6 months ahead of actual attrition.

Even more important than "who is at risk?" is "why are they at risk?". To answer this we highlight patterns and sets of behavior that are positively and negatively associated with churn, so that our customers have a reason to reach out, and are armed with specific behaviors they want to encourage, discourage, or modify.

This enables our customers to try to save their accounts / users. This can work through a variety of means, campaigns being the most common. For our B2B customers, the account managers have high confidence about whom they need to contact and why.

All of this includes regular model retraining, to take into account new user events and behaviors, new product updates, etc. We are confident in our solution and offer our customers a free trial to allow us to prove ourselves.

I can't share details, but we just signed our biggest contract yet, as of this morning. :)

For more, see http://appuri.com/

A recent whitepaper "Predicting User Churn with Machine Learning" http://resources.appuri.com/predicting_user_churn_ml/

ksimek 11 hours ago 1 reply      
Here at Matterport, our research team is using deep learning to understand the 3D spaces scanned by our customers. Deep learning is great for a company like ours, where so much of our data is visual in nature and extracting that information in a high-throughput way would have been impossible before the advent of deep learning.

One way we're applying this is automatic creation of panoramic tours. Real estate is a big market for us, and a key differentiator of our product is the ability to create a tour of a home that will play automatically as either a slideshow or a 3D fly-through. The problem is, creating these tours manually takes time, as it requires navigating a 3D model to find the best views of each room. We know these tours add significant value when selling a home, but many of our customers don't have the time to create them. In our research lab we're using deep learning to create tours automatically by identifying different rooms of the house and what views of them tend to be appealing. We are drawing from a training set of roughly a million user-generated views from manually created guided tours, a decent portion of which are labelled with room type.

It's less far along, but we're also looking at semantic segmentation for 3D geometry estimation, deep learning for improved depth data quality, and other applications of deep learning to 3D data. Our customers have scanned about 370,000 buildings, which works out to around 300 million RGBD images of real places.

johndavi 12 hours ago 2 replies      
We exclusively rely on ML for our core product at Diffbot: automatic data extraction from web pages (articles, products, images, discussion threads, more in the pipeline), cross-site data normalization, etc. It's interesting and challenging work, but a definite point of pride for us to be a profitable AI-powered entity.
iamed2 13 hours ago 1 reply      
We use ML to model complex interactions in electrical grids in order to make decisions that improve grid efficiency, which has been (at least in the short term) more effective than using an optimizer and trying to iterate on problem specification to get better results.

Generally speaking, I think if you know your data relationships you don't need ML. If you don't, it can be especially useful.

HockeyPlayer 14 hours ago 0 replies      
Our low-latency trading group uses regression widely. We have experimented with more complex models but haven't found a compelling use for them yet.
AustinBGibbons 5 hours ago 0 replies      
I work at Periscope Data - we do our own lead scoring using home-baked ML through SciPy. It was interesting to see it play out in the real-world - interpretation of features/parameters was definitely important to the people driving the marketing/sales orgs.

We also support linear regression in the product itself - it was actually an on-boarding project for one of the engineers who joined this year, and he wrote a blog post to show them off: https://www.periscopedata.com/blog/movie-trendlines.html

About 1/3rd of our customers are using trendlines, which is pretty good, but we haven't gotten enough requests for more complex ML algorithms to warrant focusing feature development there yet.

quantumhobbit 14 hours ago 0 replies      
Detecting fraud. I work for a credit card company.

Not really a new application though...

fnovd 13 hours ago 0 replies      
We've been using "lite" ML for phenotype adjudication in electronic health records with mild success. Random forests and support vector machines will outperform simple linear regression when disease symptoms/progression don't neatly map to hospital billing codes.
got2surf 11 hours ago 1 reply      
My company builds software to analyze customer feedback.

We use "real" ML for sentiment classification, as well as some of our natural language processing and opinion mining tools. However, most of the value comes from simple statistical analysis/probabilities/ratios, as other commenters mentioned. The ML is really important for determining that a certain customer was angry in a feedback comment, but less important in highlighting trending topics over time, for example.

BickNowstrom 12 hours ago 1 reply      
FinTech: Credit risk modeling. Spend prediction. Loss prediction. Fraud and AML detection. Intrusion detection. Email routing. Bandit testing. Optimizing planning/ task scheduling. Customer segmentation. Face- and document detection. Search/analytics. Chat bots. Sentiment analysis. Topic analysis. Churn detection.
ilikeatari 12 hours ago 0 replies      
We leverage machine learning in the asset replacement modeling space. Basically there is an optimum time to sell your vehicle and purchase a new one based on our model. Our company works with large fleet organizations and provides analytics suite for vehicle replacement, mechanic staffing, benchmarking, telematics and other aspects of fleet management.
Schwolop 9 hours ago 0 replies      
Once an analyst has manually reviewed something, a software system updates a row in a database to mark it as done. Our marketing team calls this machine learning, because the system "learns" not to give analysts the same work twice.

We also use ML to classify bittorrent filenames into media categories, but it's pretty trivial and frankly the initial heuristics applied to clean the data do more of the work than the ML achieves.

NumberCruncher 9 hours ago 0 replies      
In my last job at a big telco I was working with/on a scorecard-driven next-best-offer system steering 80-90% of all outbound call-center activities. I would not call it AI/ML because the scorecards were built with good old logistic regression and were pretty old (bad), but the process made us 25 M /year (calculated NPV). I don't know how much of that was added by the scoring process. We also had a real-time system for SMS marketing built on top of the same next-best-offer system making 12+ M /year (real profit).

On the other hand, I found an internal fraud costing us 2-3 M /year by applying only the weak law of large numbers. Big corp, big numbers.

Now I build a similar system for a smaller company. I think we will stick mainly to logistic regression. I actually use "neural networks" with hand-crafted hidden layers to identify buying patterns in our grocery store shopping cart data. It works pretty well from a statistical point of view but it is still a gimmick used to acquire new b2b partners.

AndrewKemendo 13 hours ago 0 replies      
We use Convolutional Networks for semantic segmentation [1] to identify objects in the users environment to build better recommendation systems and to identify planes (floor, wall, ceiling) to give us better localization of the camera pose for height estimates. All from RGB images.

[1] https://people.eecs.berkeley.edu/~jonlong/long_shelhamer_fcn...

peterhunt 11 hours ago 0 replies      
Machine learning is great for helping you understand a new dataset quickly. I often train a basic logistic regression classifier and introspect the coefficients to learn what features are important, which are unimportant, and how they are correlated.

There are a number of other statistical techniques you can use for this, but scikit-learn makes it very, very easy to do.
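A minimal version of that workflow, on synthetic data with invented feature names:

```python
# Fit a basic logistic regression and inspect the coefficients to see which
# features matter. Data and feature names are synthetic, for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["visits", "tenure_days", "noise"]
X = rng.normal(size=(1000, 3))
# Outcome depends strongly on visits, weakly (negatively) on tenure, not on noise.
logits = 2.0 * X[:, 0] - 0.5 * X[:, 1]
y = (rng.random(1000) < 1 / (1 + np.exp(-logits))).astype(int)

clf = LogisticRegression().fit(X, y)
for name, coef in sorted(zip(features, clf.coef_[0]), key=lambda t: -abs(t[1])):
    print(f"{name:>12}: {coef:+.2f}")
```

Reading the sorted coefficients immediately tells you which features carry signal and in which direction, which is often all you need from a first pass over a new dataset.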

splike 13 hours ago 1 reply      
Based on past experimental data, we use ML to predict how effective a given CRISPR target site will be. This information is very valuable to our clients.
saguppa 13 hours ago 0 replies      
We use deep learning at attentive.ai to generate alerts based on unusual events in surveillance video.

We use neural nets to generate descriptors of videos where motion is observed, and classify events as normal/abnormal.

sbashyal 13 hours ago 1 reply      
- We use a complex multivariate model to predict customer conversion and prioritize lead response

- We use text analysis to improve content for effectiveness and conversion

Among other things.
CardenB 13 hours ago 0 replies      
I would suspect AI/ML profits come largely from improving ad revenue at very stable companies.
mattkrea 11 hours ago 0 replies      
Pretty basic here.. we are a payments processor so we check volume, average ticket $, credit score and things of that nature to determine the quality and lifetime of a new merchant account.
sgt101 10 hours ago 0 replies      
Deep learning to identify available space in kit from images! We are dead proud of it !

Trad learning for many applications: fault detection, risk management for installations, job allocation, incident detection (early warning of big things), content recommendation, media purchase advice, others....

Probabilistic learning for inventory repair - but this is not yet to impact, the results are great but the advice has not yet been ratified and productionised.

garysieling 10 hours ago 0 replies      
I'm using some of the pre-built libraries to find/fix low hanging fruit of data quality issues for https://www.findlectures.com, for instance finding speaker names.

The first pass is usually a regex to find names, then for what's left run a natural language tool to find candidate names, and then manual entry.

agibsonccc 11 hours ago 0 replies      
I run a deep learning company focused on a lot of banking and telco fraud workloads like [1]. We have also done DL to predict failing services, to auto-migrate workloads before server failure.

The bulk of what we do is anomaly detection.

[1] https://skymind.io/case-studies

[2] insights.ubuntu.com/2016/04/25/making-deep-learning-accessible-on-openstack/

jc4p 13 hours ago 1 reply      
I think a lot of the real benefit from ML "at work" is more in just cleaning data and running through the gauntlet of the simplest regressions (before jumping onto something more magical whose outputs and decision-making process you can't exactly explain to someone).

I would classify something like this blog post as ML, would you? http://stackoverflow.blog/2016/11/How-Do-Developers-in-New-Y...

brockf 9 hours ago 2 replies      
At our data science company, we're building a marketing automation platform that uses deep reinforcement learning to optimize email marketing campaigns.

Marketers create their messages and define their goals (e.g., purchasing a product, using an app) and it learns what and when to message customers to drive them towards those goals. Basically, it turns marketing drip campaigns into a game and learns how to win it :)

We're seeing some pretty great results so far in our private beta (e.g., more goals reached, fewer emails sent), and we're excited to launch into public beta later this month.

For more info, check out https://www.optimail.io or read our Strong blog post at http://www.strong.io/blog/optimail-email-marketing-artificia....

lost_name 12 hours ago 1 reply      
Nothing in my department yet, but we actually have a guy actively looking for a reason to implement some kind of ML so we can say our product "has it" I guess.
jgalloway___ 12 hours ago 0 replies      
We realized that by adjusting training models we could incorporate autonomous recognition of not only images but intent and behavior into our application suite.
taytus 14 hours ago 2 replies      
Raising money from clueless investors
room271 9 hours ago 0 replies      
Helping to moderate comments on theguardian.com!


(We're still beginners as will be apparent from the video but it's proving useful so far. I should note, we do have 'proper' data scientists too, but they are mostly working on audience analysis/personalisation).

tspike 8 hours ago 0 replies      
Wrote a grammar checker that used both ML models and rules (which in turn used e.g. part-of-speech taggers based on ML).

Wrote a system for automatically grading kids' essays (think the lame "summarize this passage"-type questions on standardized tests). In that case it was actually a platform for machine learning - i.e., plumb together feature modules into modeling modules and compare output model results.

lmeyerov 10 hours ago 0 replies      
At Graphistry, we help investigate & correlate events, initially for security logs. E.g., Splunk & Sumo centralize data and expose grep + bar charts; we then add visual graph analytics that surfaces entities, events, & how they connect/correlate: "It started here, then went there, ...". We currently do basic ML for clustering / dimensionality reduction, where the focus is on exposing many search hits more sanely.

Also, some GPU goodness for 10-100X visual scale, and now we're working on investigation automation on top :)

tomatohs 12 hours ago 1 reply      
At ScreenSquid we use statistical analysis to find screen recordings of the most active users on your website. This saves our customers a ton of time avoiding playing with filters trying to find "good" recordings.


lnanek2 12 hours ago 0 replies      
Providing users the best recommendations so they participate more, get more from the service, and churn less. Detecting fraud and so saving money. Predicting users who are about to leave and allowing us to reach out to them. Dynamic pricing to take optimum advantage of the supply and demand curve. Delayed release of product so it doesn't all get reserved immediately and people don't have to camp the release times.
katkattac 6 hours ago 0 replies      
We use machine learning to detect anomalies on our customers' data and alert them of potential problems. It's not fancy or cutting edge, but it provides value.
solresol 8 hours ago 0 replies      
Our main product uses machine learning and natural language processing to predict how long JIRA tickets are going to take to resolve.

(www.queckt.com is anyone's interested)

Without AI/ML, we wouldn't have a product.

plafl 13 hours ago 1 reply      
Predict probability of car accidents based on the sensors of your smartphone
wmblaettler 12 hours ago 0 replies      
I have a follow on question to this to all the respondents: Can you briefly describe the architecture you are using? Cloud-based offering vs self-hosted, software libraries used, etc...
vskr 8 hours ago 0 replies      
Slightly tangential, but how do you collect training data for AI/ML models you are developing
Tankenstein 9 hours ago 0 replies      
Lots of KYC things, like fraud, AML and CTF. Helps with finding new patterns.
Radim 9 hours ago 0 replies      
I run a company that specializes in design & implementation of kick-ass ML solutions [1]. We've had successful projects in quite a few industries at this point:


E-discovery [2]: producing digital documents in legal proceedings.

What was special: stringent requirements on statistical robustness! (The opposing party can challenge your process in court -- everything about the way you build your datasets or measure the production recall has to be absolutely bullet-proof.)


Anomaly detection in system usage patterns (with features like process load, frequency, volume) using NNs.

What was special: extra features from document content (type of document being accessed, topic modeling, classification).


Built tiered IAB classification [3] for magazine and newspaper articles.

Built a topic modeling system to automatically discover themes in large document collections (articles, tweets), to replace manual taxonomies and tagging, for consistent KPI tracking.

What was special: massive data volumes, real-time processing.


Built a recommendation engine that automatically assembles newsletters, and learns user preferences from their feedback (newsletter clicks), using multi-armed bandits.

What was special: exploration / exploitation tradeoff from implicit and explicit feedback. Topic modeling to get relevant features.
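
For readers unfamiliar with the technique, here is a minimal epsilon-greedy bandit sketch. It is purely illustrative - the arm count, click rates, and policy are assumptions, not the engine described above:

```python
import random

random.seed(0)  # deterministic for the demo

def epsilon_greedy(counts, rewards, epsilon=0.1):
    """Pick an arm: explore with probability epsilon, else exploit the best empirical mean."""
    if random.random() < epsilon or not any(counts):
        return random.randrange(len(counts))
    means = [r / c if c else 0.0 for r, c in zip(rewards, counts)]
    return max(range(len(means)), key=means.__getitem__)

# Three hypothetical newsletter items with simulated click rates.
true_rates = [0.1, 0.5, 0.3]
counts, rewards = [0, 0, 0], [0.0, 0.0, 0.0]
for _ in range(1000):
    arm = epsilon_greedy(counts, rewards)
    counts[arm] += 1
    rewards[arm] += 1.0 if random.random() < true_rates[arm] else 0.0

# The best arm (index 1) should end up with the most pulls.
print(counts.index(max(counts)))
```

The epsilon term is the exploration/exploitation tradeoff in miniature: most pulls exploit the current best arm, while a small random fraction keeps sampling the others in case preferences shift.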


Built a search engine (which is called "discovery" in this industry), based on Elasticsearch.

What was special: we added a special plugin for "related article" recommendations, based on semantic analysis on article content (LDA, LSI).


Advised on an engine to automatically match CVs to job descriptions.

Built an ML engine to automatically route incoming job positions to hierarchy of some 1,000 pre-defined job categories.

Built a system to automatically extract structured information from (barely structured) CV PDFs.

Built an ML system to build "user profiles" from enterprise data (logs, wikis), then automatically match incoming help requests in plain text to domain experts.

What was special: used Bayesian inference to handle knowledge uncertainty and combine information from multiple sources.


Built a system to extract structured fixtures and cargoes from unstructured provider data (emails, attachments).

What was special: deep learning architecture on character level, to handle the massive amount of noise and variance.


Built a system to automatically navigate banking sites for US banks, and scrape them on behalf of the user, using their provided username/password/MFA.

What was special: PITA of headless browsing. The ML part of identifying forms, pages and transactions was comparatively straightforward.


... and a bunch of others :)

Overall, in all cases, lots of tinkering and careful analysis to build something that actually works, as each industry is different and needs lots of subject-matter expertise. The dream of "turn-key general-purpose ML" is still a ways off, recent AI hype notwithstanding.

[1] http://rare-technologies.com/

[2] https://en.wikipedia.org/wiki/Electronic_discovery

[3] https://www.iab.com/guidelines/iab-quality-assurance-guideli...

moandcompany 11 hours ago 0 replies      
We are using machine learning to identify software as benign software or malware for customers.
iampims 13 hours ago 0 replies      
We use RNNs for voice keyword recognition.
pfarnsworth 13 hours ago 0 replies      
Sift's product is based on ML.
chudi 13 hours ago 0 replies      
We use ML for recommendation systems (I work at a Classifieds company)
fatdog 13 hours ago 0 replies      
Can't say for what/where, but, yes. Use it to super-scale work of human analysts who evaluate the quality of some stuff.
the-dude 13 hours ago 0 replies      
PCB autorouting
lowglow 12 hours ago 0 replies      
We're building models of human behavior to provide interactive intelligent agents with a conversational interface. AI/ML is literally the backbone of what we're doing.
Hotel Stays, Especially in Inclement Weather whathelpsthehomeless.blogspot.com
58 points by Mz  12 hours ago   50 comments top 11
jhwhite 8 hours ago 1 reply      
A few years ago I ran into a girl I'd gone on a few dates with; she was panhandling on a corner. She was now homeless. But because she knew me she wouldn't take anything from me.

I told her she could stay with me for a bit and she said no, I offered to put her up in a hotel and she said no, I offered money and she said no. She did let me buy her lunch. I saw her at different corners around town and she would never take money. But I would sometimes swing by some place and pick up a breakfast biscuit for her.

I haven't seen her in a while now. I hope she got help.

jkraker 8 hours ago 1 reply      
There's something subtle behind this that I think is more powerful than the details of the housing. It's someone taking time to interact and treat them like a person. I believe that interaction and showing others we value them as people is one of the most important things for those living in isolation from society like many homeless people do.

There's no easy way to do this. It's hardly ever convenient. It isn't a foolproof means of turning situations around. It is, however, extremely powerful and desperately needed.

Do I practice what I'm saying? Sadly, not enough.

jaboutboul 7 hours ago 2 replies      
Is it just me, or is anyone else reading the comments getting the feeling that mental health issues are rampant today and probably the most neglected space of health care, or at least of public awareness?

Shouldn't society and/or the government do more to increase awareness of these mental health issues and make information and treatments more widely available?

interurban 2 hours ago 0 replies      
I think there's something being missed here by a lot of commenters. The title of the blog isn't "What Solves Homelessness", it's "What Helps the Homeless". As in, right here, right now.

It doesn't take systemic or policy level changes to make someone's day/week better.

tyingq 10 hours ago 4 replies      
Interesting. From the "About Page" on the blog...

"The author of this site has about six years of college, including an incomplete Bachelor of Science degree in Environmental Resource Management...After the class was over, she continued to volunteer at the shelter for several more months. Years later, while homeless herself, she started the San Diego Homeless Survival Guide and also this site."

It's a shame the idea of helping the homeless via a night in a hotel is such a manually intensive effort. The article mentions paying in cash, I assume to avoid liability. Would be nice if there was a way to buy vouchers online that you could hand out.

dawnerd 10 hours ago 4 replies      
I've known some homeless people, and for them, it was mental illness that kept them on the streets. They thought everyone was out to get them or control their life. It's a shame that they all want the handouts, but they don't want to follow any rules that come with it.

What I realized is a lot of these people don't want any responsibilities. They're not willing, or able, to accept that life sucks for everyone. Governments need to start offering better mental health care that doesn't involve locking people up in hospitals (something an old friend of mine had happen to him, and it caused him to lose trust in everyone).

I don't think giving people hotel rooms is going to solve any problems - if anything it's just enabling it. A better solution is to expand affordable housing and job programs so people can start to get back on their feet, while leaving the decisions up to them. Ultimately, if they want to live under a bridge, that's a choice they've made and there's nothing we can really do about it.

morgante 5 hours ago 0 replies      
> The current shelter system very often warehouses people in conditions that would not be acceptable for any kind of paid accommodations, whether it be temporary accommodations (like a hotel) or permanent (like an apartment or house).

I'm curious how homeless shelters compare to (paid) hostels, which are far more economical than hotels. They're (I assume) both in a dormitory setting.

Hotels are obviously not economically feasible as a long-term solution. I think the risk of this proposal is that they are not a reasonable on-ramp to society, as even if you can get a stable job you almost certainly cannot afford to live in a hotel full-time. On the other hand, some sort of hostel-style accommodation would be reasonable.

saycheese 10 hours ago 1 reply      
The idea that the homeless have nowhere to go is a stretch (really, ask them, don't take my word for it) - and if the author's advice is followed, you're basically spending in a single night (a few hundred dollars) what it would cost to feed them for a whole month.
s369610 3 hours ago 1 reply      
I think we should shine the spotlight on the number of homeless. Instead of the weather/temperature and the rise and fall of some key stocks at the end of a news broadcast, we should also report the number of homeless by region and whether it has gone up or down. It certainly feels like it should be more important.
gohrt 7 hours ago 0 replies      
It seems like the article is a long way around of saying that people want or need nicer accommodations than today's shelters. The difference between a "shelter" and a "hotel" is (a) if/how you pay (a minor detail), and (b) the quality of the accommodation (privacy, security, comfort).
scythe 8 hours ago 1 reply      
Interesting idea. Do you know of any case where it has worked? From the way the article was written I was sort of expecting to hear a case study.

Speculation roundup:

There are two economic values of a homeless person, actually any person, but I digress. P is the cost, to the state, of keeping that person alive and medically stable, which includes police and health services, as well as shelter costs, soup kitchens, etc. V is the value generated by the homeless person through their labor, which varies much more than some people expect. Many homeless people have jobs. But for many homeless people it's zero.

Ways of decreasing P include a number of creative policies designed recently as well as "short-term" tolerance of outdoor living. Ways of increasing V by contrast are generally limited to:

* standard inpatient mental health "treatment", has a slim chance of success and a large chance of backfiring and setting V to zero for a long time, also sends P through the roof

* outpatient medication, has a similarly tiny chance of success but a smaller chance of backfiring and is much cheaper

* prison labor, effective but brings to mind immediately The Road to Serfdom and other dystopian fictions (heh)

* ???

Creative ways of increasing V ought to depend on economic fundamentals, i.e. finding out what a homeless person is good for and exploiting that, to wit: homeless people tend to beat normal people at dealing with homeless people. They might also be able to help out with e.g. trash pickup or road maintenance. The typical road-map people envision for increasing V looks like this:

homeless -> [treatment] -> normal

but in reality looks more like this:

very low V -> [treatment] -> low V -> [more treatment] -> slightly low V -> [more treatment] -> mediocre V -> [more treatment] -> with luck normal V

The typical way of dealing with the intermediate stages currently consists of either locking them in a small compound with shitty beds and twelve other crazy people or giving them a bottle of pills and hoping Jesus can handle the rest. This, really, is the problem. Halfway housing for homeless people might look like a situation where housing is provided in return for part-time labor.

I'd also like to point out that while in "treatment" for homelessness it might not be reasonable to demand complete sobriety when you consider that you're preparing them for eventual release into a world where they'll be allowed to drink alcohol and smoke cigarettes and probably marijuana, and their drug use will not be so heavily monitored. Supporting the development of self-control means that people have to be able to have a little control in the first place, and the ability to make small mistakes before making big ones.

The Music Industry Shouldn't Be Able to Cut Off Your Internet Access eff.org
206 points by z0a  11 hours ago   40 comments top 3
mortenjorck 10 hours ago 4 replies      
I thought this "cut off your internet" thing had died circa 2007 with the draconian, yet endearingly-condescending "three-strikes-you're-out" push from the record industry. In a world where internet access has long since become a basic requirement to function in civil society, how is this still a conversation?
xupybd 10 hours ago 1 reply      
yuhong 10 hours ago 2 replies      
I wonder how Hollywood was taken over by lawyers in the first place. The plaintiffs in the Betamax suit were Disney and Universal.
Bitcoin Startup Adds Former Barclays Chief Antony Jenkins wsj.com
23 points by alcio  8 hours ago   14 comments top 3
imranq 6 hours ago 1 reply      
This guy used to run Barclays tax evasion division, which while apparently completely legal, gave the wrong impression as Barclays accepted taxpayer money...
kylebenzle 6 hours ago 3 replies      
Seeing more and more Bitcoin stories on HN these days; is something "happening" in that space finally?
osrec 6 hours ago 2 replies      
A retail banker, with little understanding of technology and little clout. Having served in the investment arm of Barclays during his tenure, I'm not sure he can add much.
Show HN: Automated Pipelines to Your Kubernetes Clusters distelli.com
72 points by kt9  13 hours ago   44 comments top 6
gkop 13 hours ago 4 replies      
What's the preferred workflow when continuously integrating and deploying in containers? At what step do you run your automated tests? Do you run them in the same image that will go into staging and then production? If using the same image, do you ship to staging and production with test dependencies included, or how do you strip away test dependencies first?
arturoochoa 12 hours ago 2 replies      
How is this product different from hosting your own CI like Drone, which can automatically do your CI testing, then with the help of plugins create a Docker image for you (based on conditionals if you will), and finally upload the generated image to your public or private registry?

BTW, I can confirm your site still not loading, time again for a kubectl scale.

itajaja 12 hours ago 2 replies      
hi @kt9, in the Kubernetes dashboard, when adding an existing cluster, under "Select a Provider" there is AWS. Does that mean it also supports ECS? I am not aware of AWS supporting Kubernetes directly. I have a k8s cluster running on an AWS autoscaling group, but I guess in that case I should just click "Other". What's the AWS option for, then?
kt9 11 hours ago 1 reply      
Hi everyone,

We found that the site was slow because we were getting throttled by a table in DynamoDB which explains why kubectl scale wasn't helping as much as we had hoped. We were adding capacity, just not in the right place.

We've added read capacity to our table and things are faster now.

kt9 13 hours ago 2 replies      
I'm the founder at Distelli. I'm happy to answer any questions.
gkop 10 hours ago 1 reply      
What all does distelli do to make image builds fast? For example, where do you store cached layers so they are most rapidly pulled by the build worker?
Show HN: Pixibot Slack bot that makes text in posted images searchable pixibot.co
74 points by rgbasin  14 hours ago   13 comments top 6
amelius 11 hours ago 1 reply      
May I ask, what OCR engine does it use?
Roritharr 14 hours ago 0 replies      
That's neat, just installed it to test. We sadly don't post much stuff that needs OCR'ing, although the occasional screenshot here and there is useful to OCR. Sadly I don't see that we would ever get over the 100-uploads-a-month limit.
edsouza 12 hours ago 1 reply      
Is there more information? The mobile version of the website shows a giant logo, and if you scroll down it shows a static image of a Slack channel [1].

Using Android/Chrome

[1] http://imgur.com/n5f6bAG

deckar01 14 hours ago 0 replies      
It might be a better UX to post a snippet since those get folded to a short preview.
faisalhassanx 14 hours ago 1 reply      
Hey Ron. You're on Product Hunt. https://www.producthunt.com/posts/pixibot
cooper12 8 hours ago 1 reply      
In an ideal world this wouldn't even be necessary. This just speaks to the sad state of image format adoption and workflows. Every browser supports basic SVG. SVG text can be read by screen readers, indexed by search engines, found in the page, copied, resized, machine-translated, and everything else you expect from text. Why are we still taking raster images of text? I think the onus should be on the OS and browsers to implement snipping tools that can save in SVG. Regardless I still think this is a cool project for making that lost information accessible.
VimWiki A personal wiki for Vim vimwiki.github.io
454 points by type0  19 hours ago   98 comments top 24
patrickdavey 17 hours ago 3 replies      
It's GREAT to see vimwiki at the top of Hacker News.

I've been using this for years, and keep all my notes in a wiki[0], written in GitHub-flavoured markdown, with syntax highlighting etc. in the editor. I guess it's not org-mode (which sounds amazing), but it seems really really good to me.

I created the vimwiki_markdown[1] gem which allows you to convert GH-style markdown to HTML using the wiki. Now I might just call out to pandoc, but I didn't know about it at the time, and my gem works fine.

I love that you can have project specific wikis etc. It's just so great. I'll also link to the relevant section in my .vimrc[2] if you are going to go down the markdown route.

[0] http://wiki.psdavey.com

[1] https://github.com/patrickdavey/vimwiki_markdown

[2] https://github.com/patrickdavey/dotfiles/blob/682e72e4b7a70e...

lambdasue 19 hours ago 2 replies      
While I never got on terms with org-mode, I cannot praise vimwiki highly enough. It's intuitive and simple enough to not have a very steep learning curve.

I use it with taskwiki [1] extension, which stores all the tasks in the awesome Taskwarrior [2] CLI task manager. With this, I have my tasks in my text files, searchable just a command away on the CLI or my mobile phone via the Android app.

[1] https://github.com/tbabej/taskwiki

[2] https://taskwarrior.org/

JasonSage 18 hours ago 4 replies      
Only looking at this page, I have no clue what this does.

It says...

 organize notes and ideas
 manage todo-lists
 write documentation
 write a diary
I can already do these in vim, with text files or markdown files. What is this actually doing, then?

Projects like this really deserve some high-level explanation like "Do you want to do X, but don't like having to do Y? This addresses that."

grimoald 14 hours ago 4 replies      
Vimwiki maintainer here. Feel free to ask questions.
grimgrin 17 hours ago 0 replies      
The one vimwiki line I have in my vimrc is to set the path to my Dropbox.

 let g:vimwiki_list = [{'path': '~/Dropbox/Public/briefcase/vimwiki'}]
All of the default commands are pretty nice. Here's a nice cheatsheet:


ecthiender 16 hours ago 1 reply      
So it's basically like org-mode (a simplistic version, that is) but for Vim?
kondor6c 19 hours ago 1 reply      
This looks like Org mode for Vim, very exciting! I like the ability to export to an HTML page. It might be useful to export as doku/mediawiki, markdown, or other types of formatting.
Raphmedia 17 hours ago 7 replies      

I really want to learn either Vim or Emacs but I've never been able to decide which one to learn. Every time I've tried to search about this topic I end up in petty wars. Help?

k2enemy 7 hours ago 0 replies      
I'm a huge fan of VimWiki and have years worth of data in my main wiki.

What are people using for their wiki on iOS? Right now I'm syncing with iA Writer. It works pretty well, but doesn't support links. It looks like Bear is promising -- it uses the same style links, but it doesn't sync with a folder of text files. Any others out there?

yellowboxtenant 17 hours ago 1 reply      
Advantage over vim-notes? https://github.com/xolox/vim-notes
jcpst 12 hours ago 1 reply      
Been using org-mode. Trying out vimwiki after seeing this and I like the simplicity of it better. It has enough features for me to remove the overhead of an emacs + evil install just for org-mode.
johnnycarcin 16 hours ago 0 replies      
This is really cool! I wrote a small go app awhile back to solve my personal wiki problem (https://github.com/esell/bookie) but since I am in vim most of the day, this seems like an even better solution. You could even just take the output and toss it up on Github pages or whatever if you wanted to share it.
yoland68 13 hours ago 0 replies      
I use this so much it became my go-to place to create a step-by-step plan (a.plan) for any project I do. I also bind :w to convert the wiki to HTML and have a cronjob to sync the HTML with the server, so anyone can easily check the project I'm working on and its progress.
toisanji 18 hours ago 2 replies      
Love the idea, but why use a custom syntax instead of markdown?
hpincket 15 hours ago 1 reply      
I've used VimWiki for a couple years now. I use the most basic features (Enter, Backspace). The ability to instantly jump to the wiki is the most useful (Leader W W). My wiki is just a folder in my Dropbox so I can get to it on almost any computer.

When I started, VimWiki syntax was better supported than markdown. I've seen lots of markdown related pull requests come through, so maybe that's changed.

I use VimWiki for:

 * Life goals (stretch and short term). I use it almost as a centering tool.
 * Poetry
 * Passages from books
 * Book summaries (that I write)
 * Lecture/Speech notes
 * Notes on misc. items I want to explore
 * Ideas for future science fiction short stories
Last summer I wrote a simple Awk script to extract VimWiki-style definitions (Term::Definition) into TSVs for importing into Anki. I was frustrated that I couldn't fully automate this process without modifying Anki. Maybe someone else has figured this out?
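
That extraction step can be sketched in a few lines of Python (shown instead of Awk for readability; the `Term:: Definition` pattern follows the description above, everything else is illustrative):

```python
import re

# "Term:: Definition" lines, as used by VimWiki's definition-list syntax.
DEF_LINE = re.compile(r"^([^:]+)::\s*(.+)$")

def wiki_to_tsv(lines):
    """Collect definition lines from a VimWiki page as TSV rows for Anki import."""
    rows = []
    for line in lines:
        m = DEF_LINE.match(line.strip())
        if m:
            rows.append(f"{m.group(1).strip()}\t{m.group(2).strip()}")
    return "\n".join(rows)

page = [
    "= Vocabulary =",
    "Idempotent:: an operation that gives the same result when applied twice",
    "Memoization:: caching the results of pure function calls",
]
print(wiki_to_tsv(page))
```

Anki can import tab-separated text files directly (mapping columns to note fields), so no Anki modification should be needed for this half of the pipeline.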

ghostwreck 15 hours ago 3 replies      
I have tried using this or orgmode at various points, but I haven't been able to find a good workflow for linking images/screen shots with notes (which I do quite often in Evernote). I use vim as my primary IDE, so I would love to find a solution for image links to let me use VimWiki as well.
pacuna 18 hours ago 1 reply      
I like org-mode but I'm a vim user and I don't want to be switching around constantly. I'll definitely try this.
pencilcode 16 hours ago 0 replies      
I've been using vimwiki for years - it's been an absolute life-saver - but had no idea it now supported markdown. Anyone know of a tool to convert vimwiki syntax to markdown, or can I just use markdown in new files and keep vimwiki syntax in the old files?
grive 18 hours ago 0 replies      
Very cool, I was just searching for this kind of thing.

I'd like to see if it would be easy to support ReST besides markdown. I'd like to reuse my current files instead of trying to convert them haphazardly...

YeGoblynQueenne 16 hours ago 0 replies      
Is this substantially different than:

 a) writing a syntax highlighting script for your notes files,
 b) using tags to jump to documents and
 c) using netrw for browsing your files?
Because if it is not, and even if it is just making it easier to do those things yourself, I would personally not even consider using it, having had some really nasty surprises with Vim plugins in the past. Yes, I do mean you, Powerline.

phaemon 15 hours ago 1 reply      
I suppose it was taken, but "Viki" would have been so much better a name.
damaru 14 hours ago 0 replies      
Funny, I just setup my vimwiki + syncthing yesterday night!
galfarragem 16 hours ago 1 reply      
Is there something similar for Sublime Text?
qwertyuiop924 18 hours ago 2 replies      
...But it still can't match org.

Maybe in 10 years?

       cached 13 December 2016 08:02:01 GMT