3 story house:
- 1 AP AC Pro per floor
- 1 AP AC Pro in the detached office
- 1 Switch 16 POE in the house
- 1 Switch 16 POE in the office - 2 x Cat6a between switches in a LAG
- 1 Security Gateway 3P
- 1 Cloud Key
I upgraded all the firmware on the complete system as I was typing this message using their app for the iPad.
Run a FreeNAS Mini in the office 2 x 1g in a LAG.
Run Insteon home automation for lights, plugs, HVAC, cameras, leak & door sensors.
3 TVs with Intel Compute Stick with Kodi plugged into the TV HDMI. Added USB to 1g on the stick and wired to switch. Logitech Harmony remotes (same in every room) for control.
Lots of laptops, phones, pads all wifi. TVs are Wifi for their apps (used Ethernet for Compute Sticks).
Every product here is rock solid and just works (okay Kodi is buggy as is the Windows 10 it runs on).
I love the Ubiquiti gear. We use the APs in the office at work as well. 2 older APs (2 year old models - plan to upgrade soon) with 35+ devices on it at any time (most are developer laptops with lots of traffic to the DC). We use TrueNAS to boot all the servers, etc (FreeNAS commercial version).
So for wifi Ubiquiti is pretty dang good and love FreeNAS (and the IXsystems hardware) if you need a NAS.
For less technical I have recommended Eero to a few people and they all say it is quite good so far.
We've been installing and using their gear in deployments all over the place for over 5 years now and it always just works. I think we've had to replace two faulty units in that time, out of, I dunno, several dozens.
Their software is getting better and better too, and their security camera system is IMO way ahead of most of the competition in the same price range (with the exception of a network disconnection issue that has had us and Ubiquiti tech support tearing our hair out for weeks now).
I hate that their management software requires Java, though; it can be fiddly and annoying to install, and I think one of our techs finally set up a VM specifically for their management software.
I'm a network admin for a small managed service provider.
Happy to provide more detail if needed :)
Silly. If the author doesn't want to follow good advice, too bad for him.
I'm using WRT1900ACS with DD-WRT. It works like a charm.
> if I buy a product then I expect it to work as advertised and not need to implement hacks to keep it alive.
It's not a "hack". It's installing a better quality OS on the device. Again, if the author doesn't want to do that, there is no reason to complain.
The high-end consumer gear many people buy to run open firmware can get expensive. It isn't hard to spend less on dedicated devices and get better performance.
What I spent on an EdgeRouter, AP AC Pro, and a good managed switch is less than what the top recommendation from this thread:
https://news.ycombinator.com/item?id=13113766
cost: $250 for the Linksys EA8500 vs. $130 + $50 for the UniFi setup, plus $50 for a TP-Link switch.
I now want to get a bit fancier with the router, so I'm swapping it out for an eBay-sourced Cisco or similar (I want dual WAN and failover, along with routing some traffic over VPNs). It still costs about the same, but is much better (and a setup that is applicable up to 100+ users).
Edge router: amazing
Edge switch: amazing
AC access points: amazing
I've never tried any of the unifi stuff though.
(Nothing against Ubiquiti, which I'm sure is great, but I've been a very happy Mikrotik user for years. Recently updated my main AP to their gigabit (wired and wireless) hAP AC and loving it. I use a second Mikrotik as a fully-bridged repeater, and have an IoT wired+wireless virtual network firewalled off from the rest.)
Or does Ubiquiti require you to use their switches for anything to work properly?
My phone stays on ac all the time. Not sure how to fix it. That and the stupid Java application needed for configuration were not my favorite onboarding experiences. Seems an OK system, but not super great for getting set up. It's definitely enterprise, though.
I compare it to my Mikrotik switch that while being able to do pretty much anything I could want to do, has such a steep learning curve that I ended up just using it as a slightly fancy home firewall/switch.
I'm considering pulling the trigger on the Ubiquiti switches and another three AC units for my house to cover the last few dead zones. It's been one of my favorite purchases. I really want to play around with VLANs for guest networks.
And they're just a bit more expensive than a good wifi/router combo. For the features it feels like I'm getting the biggest bargain.
It is way more complex than one would think it should be, except that we've seen time and time again how crappy network configurations screw up everything. It is also helpful to have historical data when complaining to the ISP. It is also amazing to see the guest network which has given out 60 leases, sure some of those are the phones of people who came over but a lot of them are things that want to be "online".
Are there APs with two radios each - one for backhaul and one for service?
    ubnt@ubnt:~$ uptime
     06:16:26 up 140 days, 13:15, 1 user, load average: 1.08, 1.03, 1.05
My ideal (budgeted) setup is:
- (1x) MX65 -- 12 GbE with 2 PoE+. PoE+ powers the access points.
- (2x) MR33 -- 802.11ac Wave 2, powered via the MX65.
Last year I bought my wife and I our first set of "smart" phones. Yes I'm serious, I've been in IT all my life but never felt I had a need for anything other than a flip phone. But I noticed Samsung selling a Galaxy Core Prime for $90 and I bought my wife the LG Stylo for $180 since I wanted her to have a better camera.
For my home network, my modem runs into a linux box with Shorewall where it is natted/firewalled and split into two subnets.
I've been a fan of the netgear prosafe access points for the last 10 years, as I could always find older models on ebay for cheap.
Until recently I was using a WN203 (2x2 802.11n). For the most part it was just my laptop and a Roku box connected. Never had problems. But enter these new smartphones...
Within a few weeks of buying the phones, I noticed random bouts of terrible wifi lag. Looking at the AP's management webpage, I saw that during these bouts my wife's phone would be connected at just 1M. I'd tell her to restart her phone and the problem would go away for a day or two. But it kept happening. I wasn't sure what the problem was, but I used it as an excuse to get another access point; I wanted one with 5GHz anyway. I sniped a Netgear WNDAP660 (3x3 802.11n) off of eBay, new in box, for $95. They are normally $350 new. Figured that would solve my problems.
To my horror, after a few days of having the new WNDAP660 set up, I started getting the same terrible lag, and my wife's phone would be connected at 1M again. This time, though, the WNDAP660's web interface had an option to save wifi traffic packet captures. During the next bout of lag, I saved a few minutes of packet captures and opened them in Wireshark.
I was surprised to see that even though my wife's phone was connected at 1M, it was not the issue. My phone (the Core Prime) was spamming pwr_mgt request packets, hundreds per second. It was basically using up all the bandwidth. In disgust, I moved everything to the 5GHz band (gave it a different SSID), and left only my phone on the 2.4GHz. So all was well....
But that was just a couple of months ago. I've since outgrown my Core Prime (which doesn't take much) and bought a Galaxy S6. I turned off the 2.4GHz band on the AP, and now everything (including my new S6) is connected to 5GHz.
And then you guessed it... I was sitting at the kitchen table and noticed lag while trying to browse the net on my laptop. I looked up and noticed the Roku box playing on TV was also stuck loading. I reached over and picked up my new S6 and put it into airplane mode. Instantly all was well on the airwaves. I haven't actually done a packet dump yet, so I don't know if the S6 is spamming pwr mgt requests or not.
But this is really annoying. I don't know what is at fault either. It seems smartphones don't play nice. But I've also caught my Roku box spewing RTS requests, even after rebooting it. I thought it had been hacked or something and was trying to DoS me, but after restarting one of the cell phones all went back to normal. It's as if certain devices don't play well with each other. I mean, in my original lag case, the Core Prime was spamming packets, yet restarting my wife's phone would solve the problem just as well as restarting my phone. Makes no sense....
So I guess if you get random lag on wifi, try turning off a cell phone or two until you find the culprit. And once you find it, then.... Well actually I don't know what you do then. Any tips? lol
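For what it's worth, next time I plan to script the capture analysis instead of eyeballing it in Wireshark. A rough sketch using pyshark (the tshark wrapper) - I'm assuming a monitor-mode capture saved as lag.pcap and the wlan.fc.pwrmgt display filter; field names may differ depending on how the capture was taken:

    # Sketch: count 802.11 frames with the power-management bit set, per
    # transmitter MAC, to spot a device spamming power-save transitions.
    import collections
    import pyshark

    counts = collections.Counter()
    cap = pyshark.FileCapture('lag.pcap', display_filter='wlan.fc.pwrmgt == 1')
    for pkt in cap:
        ta = getattr(pkt.wlan, 'ta', None)  # transmitter address, if present
        if ta:
            counts[ta] += 1
    cap.close()

    for mac, n in counts.most_common(5):
        print(mac, n)

Whichever MAC dominates that list is the device to power-cycle (or banish to its own SSID).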
It seems our non-profit Bing Maps key was revoked... Switched to ArcGIS imagery instead for now. Too bad, the Bing imagery was really great.
Global time machine with http://radiooooo.com/ and streaming with http://tunein.com/radio/regions/
Reminds me of that not-uncommon movie intro implying that aliens are listening to Earth, where the camera zooms in on the planet as random stations and static play.
Question to the creator / OP: Are the (voice) ads that play injected into the stream? I ask because even though I selected an Indian radio channel, it played a long AT&T "get a GoPhone this holiday season" ad for like 2 minutes, and the voice had an American accent, and the address it gave was att.com/gophone (which I would think is only for US customers). What gives?
At some point, Opendoor will be able to let businesses that simply manage financial risk for a living (i.e. banks) spread the downside risk out over the entire financial system, while Opendoor takes advantage of its customer acquisition and data advantages to capture the bulk of the upside. It's not a perpetually risky model if they don't want it to be.
If someone could look up what their place is worth and, unlike with Zillow and Redfin, actually be told explicitly why it's worth that rather than being handed a "black box" value, you'd open the top of the funnel by drawing in folks who aren't yet ready to sell (today's owner is tomorrow's seller). And since those values change frequently, the not-ready-to-sell owner could track the changes to the value and the underlying data driving it, which would keep you at the top of their minds when they're ready to sell down the road.
If you think the pricing model/algorithm is too proprietary to share beyond the "I'm ready to sell today" market, I'll stop by for a coffee to try and convince you otherwise.
Good luck. I'm stoked to see someone trying to fundamentally change the residential market.
Edit: oh, btw, I'm not advocating sharing the actual algorithm, but the results of it (e.g. your house is worth 5 psf less than the comp from 3 blocks over because your place is directly under the airport's flight path).
When the economy goes down, Airbnb will do just fine. More people will want to do short-term hospitality to earn cash, and more people will want to drive for Uber. OpenDoor will have to liquidate their debt, which will prove difficult. Also, if I remember correctly, Keith Rabois said he was talking with Peter Thiel, and Peter mentioned how rarely real estate has changed. Later, Keith helped create (or rather fund) OpenDoor. What I wonder is, why hasn't Peter invested in Keith's new company? It's not because he's too busy with Trump's transition team.
OpenDoor isn't a startup you should emulate. I wholly disagree with this article.
For example, "I paid X so I won't sell below that", "I added a pool plus 100k in renovation so I should get that much of a premium", "I'll just wait a few months and see if prices get better".
A big function of realtors is to console and convince people to sell at a realistic price.
This behavior makes it harder for OD to improve liquidity, and in fact liquidity could take a hit with no realtor therapy for sellers.
I predict a major issue for Opendoor will be seller sticker shock. Many will balk at paying 8-12% in fees even if it's in their best interest.
Willingness to pay more for convenience or speed is not a constant. See mental accounting from the University of Chicago, or ask any realtor about seller psychology.
> To that end I hope Opendoor succeeds simply so it can be a role model for tech: taking on big risks for big rewards that create real value by solving real problems is the best possible way our industry can create benefits that extend beyond investors and shareholders;
The only reason (OK, one of the big reasons, not the only one) Opendoor exists is that the massive amount of capital deployed has the potential to make investors and shareholders a ton of money. It's an all-or-nothing proposition. Furthermore, I don't think the author has rightly assessed (or assessed at all) the negative externalities of Opendoor's model for the economy. Part of the reason the "too big to fail" problem with banks was exposed in the housing market was that too much of the real estate market was tied up in too few organizations (Fannie/Freddie, Wells, etc.), which means that when it crumbles, our economy cannot sustain the burden. If Opendoor's risk mitigation model is basically to "own more of the market," it means that if/when the market does take a downturn and Opendoor goes bust, it's not just VCs who lose, but a whole lot of homeowners, or even worse, homeowners and taxpayers.
That's hardly a model I'd admire or try to emulate, unless of course I want to make a buttload of cash (or fail very hard trying).
> Opendoor is creating value as opposed to taxing a few bucks off the top of an existing market or simply trying to be cheap.
Opendoor's arbitrage is no different than an ad network's; they're both exploiting inefficiencies. I'm really having a hard time understanding why it's so much more "benevolent," as is suggested here.
With a talent for seeing things anew, Rockefeller could study an operation, break it down into component parts, and devise ways to improve it. In many ways, he anticipated the efficiency studies of engineer Frederick Winslow Taylor. Regarding each plant as infinitely perfectible, he created an atmosphere of ceaseless improvement. Paradoxically, the mammoth scale of operations encouraged close attention to minute detail, for a penny saved in one place might then be multiplied a thousandfold throughout the empire. In the early 1870s, Rockefeller inspected a Standard plant in New York City that filled and sealed five-gallon tin cans of kerosene for export. After watching a machine solder caps to the cans, he asked the resident expert: "How many drops of solder do you use on each can?" Forty, the man replied. "Have you ever tried thirty-eight?" Rockefeller asked. "No? Would you mind having some sealed with thirty-eight and let me know?" When thirty-eight drops were applied, a small percentage of cans leaked, but none did at thirty-nine. Hence, thirty-nine drops of solder became the new standard instituted at all Standard Oil refineries. "That one drop of solder," said Rockefeller, still smiling in retirement, "saved $2,500 the first year; but the export business kept on increasing after that and doubled, quadrupled - became immensely greater than it was then; and the saving has gone steadily along, one drop on each can, and has amounted since to many hundreds of thousands of dollars."
Rockefeller performed many similar feats, fractionally reducing the length of staves or the width of iron hoops without weakening a barrel's strength[...]
- Grit - I can see why it's caught on, it's pretty well-written and informative, but it's not one of the stronger books in the genre and I don't think it will stand the test of time. I recommend The Willpower Instinct - by Kelly McGonigal and Peak: Secrets from the New Science of Expertise - by Anders Ericsson in its place.
- The Rent Is Too Damn High - Matt Yglesias is an intellectually dishonest pundit and I recommend staying away from anything he publishes. He's a leading representative of what Nassim Taleb calls the Intellectual Yet Idiot[1]. He deleted 3000 tweets praising Obamacare that look bad in retrospect[2][3]. He's also mentioned directly in the Podesta emails as a pundit to be "cultivated."[4] This article from 2011 [5] points out numerous examples of his sloppy reporting and intellectual dishonesty where he doesn't own his mistakes, deletes critical comments, etc.
1. https://medium.com/@nntaleb/the-intellectual-yet-idiot-13211...
2. https://twitter.com/BuffaloBlueBear/status/79120869059868262...
3. https://twitter.com/JimmyPrinceton/status/791127776388583424 (check the whole thread)
4. https://wikileaks.org/podesta-emails/emailid/31954
5. http://www.chequerboard.org/2011/02/matt-yglesias-the-one-ma...
It has the added bonus of providing an alternative biography of Steve Jobs which in itself is interesting.
It's much more than a story about Pixar. It's a great insight into some of the very problems you deal with as you build and try to maintain a culture.
I can't recommend it enough.
If you want a peek into the book's content, Ed Catmull did a great talk at Stanford.
The Global Minotaur: America, Europe and the Future of the Global Economy by Yanis Varoufakis [1]
The Price of Inequality: How Today's Divided Society Endangers Our Future By Stiglitz, Joseph E. [2]
The Rise and Fall of American Growth: The U.S. Standard of Living since the Civil War by Robert J. Gordon [3]
1. https://www.amazon.com/Global-Minotaur-America-Economic-Cont...
2. https://www.amazon.com/Price-Inequality-Divided-Society-Enda...
3. https://www.amazon.com/Rise-Fall-American-Growth-Princeton/d...
The Enemy (mentioned in the list) is a prequel, though. But it is special in the sense that it is narrated in the first person. Special because (no, not a spoiler): Reacher, the protagonist, doesn't say much, but his internal thinking is described in a very attractive way throughout (so readers naturally long to hear it in the first person). The most common line you read in the books is: "Reacher said nothing". Heck, it's so common that there's even a book written with that phrase as the title; it shadows the author, Lee Child, to investigate what it takes to make the popular character -- http://www.penguinrandomhouse.com/books/529959/reacher-said-...
FWIW, other books from the author I enjoyed and recommend: Echo Burning, Die Trying, Tripwire, Persuader. (/me fondly recalls reading 17+ books (even saving money as a student 10 years ago to pre-order) until a few years ago; will resist making a comment on the Reacher movies, but makes a sincere plea to read the books first, and ignore, as best as you can, the movies).
---
Related author: Robert Crais (characters: Elvis Cole and Joe Pike).
The edition pictured, translated by Edith Grossman, is extremely approachable. It uses mostly modern language, which makes the original humor of the book really stand out. It's incredible that a book written over 400 years ago can still be funny and engaging. I'd recommend it to everyone; after reading this translation it moved from 'boring book I couldn't finish' to 'one of my favorite novels ever'.
[0] http://www.goodreads.com/book/show/4839382-the-first-tycoon
It's almost like a horizontal scroll version of this blog post. I guess it's the small sample size.
The latter half of the book, however, was by far the weakest. There he attempts to recommend fixes for the issue from a libertarian perspective. His beliefs are not surprising given his current employment in a Thiel hedge fund. This part of the book had little insight and seemed to be an ideologically driven argument by, ironically, a newly minted financial elite that his own community distrusts.
1) http://www.motherjones.com/politics/2016/08/trump-white-blue...
2) http://www.nytimes.com/2016/09/25/books/review/strangers-in-...
3) https://www.washingtonpost.com/news/book-party/wp/2016/09/01...
https://www.goodreads.com/list/show/106375.YC_s_Winter_Readi...
I'd recommend Marryat's Mister Midshipman Easy over Forester's Mister Midshipman Hornblower. Marryat speaks from authority when he speaks of the sea and of naval warfare. Neither Forester nor Patrick O'Brian sailed.
"Whiplash: How to Survive Our Faster Future"https://www.amazon.com/gp/product/1455544590
"What the Luck?: The Surprising Role of Chance in Our Everyday Lives"https://www.amazon.com/gp/product/1468313754
"Shrinking the Earth: The Rise and Decline of American Abundance"https://www.amazon.com/gp/product/019984495X
The ideas explored in these books are fascinating and could not be more timely. The historical notes are interesting. The reading is fun! "Dictator" is the third book in the trilogy & its Wikipedia page links to the rest: https://en.m.wikipedia.org/wiki/Dictator_(Harris_novel) The narration on the Audible editions is fantastic.
"Infomocracy" -by Malka Order.
+++ some other favorites:
When Breath Becomes Air, by Paul Kalanithi.
Arkwright, by Allen Steele.
The God's Eye View, by Barry Eisler.
Chaos Monkeys: Obscene Fortune and Random Failure in Silicon Valley, by Antonio Garcia Martinez.
Ego Is the Enemy, by Ryan Holiday.
It traces the history of advertising and attention capture from billboards through Facebook.
Ed Catmull, the co-founder of Pixar with Steve Jobs and John Lasseter, on how they built a culture of openness, honesty, self-reflection, and risk-taking that protects new ideas and creativity instead of squashing them. - Aaron Epstein
I wonder if it comes with any helpful pointers on how to execute long-term, systematic wage-fixing[0] schemes.
The top one-star review[1] on Amazon sums it up nicely.
[0] http://www.cartoonbrew.com/artist-rights/ed-catmull-on-wage-...
[1] https://www.amazon.com/gp/aw/review/0812993012/R1CW8GBYEH3UQ...
It feels like most people here read books to acquire knowledge and philosophy to apply to real life.
Most fantasy books are read for entertainment and imagination. There's no hidden message to parse and put toward your next start-up project. That doesn't mean Fantasy books are a waste of time though if they're engrossing and entertaining. That's why I read them.
Some fantasy recommendations:
The Name of the Wind, by Patrick Rothfuss
The Lies of Locke Lamora, by Scott Lynch
The First Law series, by Joe Abercrombie (especially the standalone books #4, #5 and #6)
The Way of Kings, by Brandon Sanderson
Link to original post: https://news.ycombinator.com/item?id=13117521
The summer reading list is also available here: http://shelfjoy.com/shelfjoy/ycombinators-summer-list-of-201...
Thanks for the wonderful recommendations to everyone at YC!
Basically it is just some type of feedback so that you don't overload subsystems. One of the most common failure modes I see in load balanced systems is when one box goes down the others try to compensate for the additional load. But there is nothing that tells the system overall "hey there is less capacity now because we lost a box". So you overwhelm all the other boxes and then you get this crazy cascade of failures.
Ignoring Elixir and Erlang - when you discover you have a backpressure problem, that is - any kind of throttling - connections or req/sec, you need to immediately tell yourself "I need a queue", and more importantly "I need a queue that has a prefetch capabilities". Don't try to build this. Use something that's already solid.
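To make "queue with prefetch" concrete, here's a minimal sketch using RabbitMQ via pika - the queue name and prefetch number are made up, but prefetch_count is the knob that gives you backpressure:

    # Sketch: a RabbitMQ consumer with prefetch. The broker never hands this
    # worker more than 100 unacked messages, so the queue absorbs bursts and
    # the consumer pulls at its own pace.
    import pika

    def process(body):
        pass  # stand-in for the real (slow) downstream work

    def handle(ch, method, properties, body):
        process(body)
        ch.basic_ack(delivery_tag=method.delivery_tag)  # ack -> broker sends more

    conn = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    ch = conn.channel()
    ch.queue_declare(queue='pushes', durable=True)
    ch.basic_qos(prefetch_count=100)  # at most 100 in-flight msgs per worker
    ch.basic_consume(queue='pushes', on_message_callback=handle)
    ch.start_consuming()

The producer side just publishes as fast as it likes; the unconsumed backlog lives in the broker instead of in your process's memory.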
I solved this problem 3 years ago, pushing 5M msg/minute _reliably_ without loss of messages, and each of these messages was checked against a couple of rules for assertion per user (to not bombard users with messages, when is the best time to push to a user, etc.), so this adds complexity. Later, approved messages were bundled into groups of 1000 and passed on to GCM HTTP (today, Firebase/FCM).
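The bundling step itself is trivial - roughly this (a sketch; the batch size matches GCM's old 1000-recipient cap, and post_to_gcm is a hypothetical sender):

    # Sketch: bundle an approved-message stream into groups of 1000 before
    # handing each group to the GCM/FCM HTTP endpoint.
    from itertools import islice

    def batches(iterable, size=1000):
        it = iter(iterable)
        while True:
            batch = list(islice(it, size))
            if not batch:
                return
            yield batch

    # for group in batches(approved_messages):
    #     post_to_gcm(group)

All the real complexity lived in the per-user assertion rules upstream, not here.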
I've used Java and Storm and RabbitMQ to build a scalable, dynamic, streaming cluster of workers.
You can also do this with Kafka but it'll be less transactional.
After tackling this problem a couple times, I'm completely convinced Discord's solution is suboptimal. Sorry guys, I love what you do, and this article is a good nudge for Elixir.
The second time I solved this, I used XMPP. I knew there were risks, because essentially I was moving from a stateless protocol to a stateful one. Eventually, it wasn't worth the effort and I kept using the old system.
GenStage has a lot of uses at scale. Even more so is going to be GenStage Flow (https://hexdocs.pm/gen_stage/Experimental.Flow.html). It will be a game changer for a lot of developers.
How many is a few? It looks like the buffer reaches about 50k, does a few mean literally in the single digits or 100s?
[1] "We've raised over $30,000,000 from top VCs in the valley like Greylock, Benchmark, and Tencent. In other words, we'll be around for a while."
Unfortunate that the final bottleneck was an upstream provider, though it's good that they documented rate limits. I feel like my last attempt to find documented rate limits for GCM/APNS was fruitless, perhaps Firebase messaging has improved that?
The average per minute only gets to be used because many systems have so little load that the number per second is negligible.
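To put toy numbers on that - two clients, both under a 6000/minute cap, with wildly different per-second peaks:

    # Toy illustration: the same 6000 msg/min budget, spent two ways.
    smooth = [100] * 60               # 100 msg/s, every second
    bursty = [3000, 3000] + [0] * 58  # whole budget blown in 2 seconds

    for name, series in [('smooth', smooth), ('bursty', bursty)]:
        print(name, 'total =', sum(series), 'peak/s =', max(series))
    # Both totals are 6000 (under the per-minute cap), but the bursty client
    # peaks at 3000/s, 30x the smooth rate, which is what actually hurts.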
So... get 100 firebase accounts and blast them in parallel.
http://www.azcentral.com/story/opinion/op-ed/ej-montini/2016...
This seems to be just searching the clip title. For example, if you search 'curry 3' ('curry three' returns nothing), it'll return things like "Curry 2' Finger Roll Layup (6 PTS) (Iguodala 3 AST)" or "Curry REBOUND (Off:0 Def:3)". If it could match the search query with play-by-play data, now THAT'd be cool.
e.g., if I search "westbrook dunk", I probably don't want the normal dunks first, or the time he passed to someone else who then had the dunk. Show me the great Westbrook dunks first, then the normal ones, then the assists to other dunkers.
http://www.nba.com/sitemap_videos_0001.xml
However it appears outdated since the dates only go up to 8/4/2016.
Every letter typed into the search becomes a new route... yikes, that is awful for going back.
also, I searched for ginobili assist but it found all plays with ginobili and any assist.
good start and the video quality is pretty strong on mobile
I've heard that this is a particularly American attitude, that the mentally ill should be isolated with a clinician before doing anything else in life. Can any non-Americans say what their cultural attitude is? I've also heard that Indian culture is pretty much the opposite in this respect.
I managed to graduate high school and college and was working a good job until I developed schizoaffective disorder in 2001. It is like bipolar disorder with schizophrenic cycles. Less than 0.5% of the population suffers from it, and it is rare enough that not everyone knows about it.
In 2003 I ended up on disability, as I could not find a job and could not hide that I was mentally ill. The ADA says one cannot be discriminated against because of a mental disability or mental illness, but I was called overqualified or given any other reason to reject me. As I grew older I also faced age discrimination.
I wish I could say it gets better, but it is like fighting demons in your head to keep the negative thoughts away. The thoughts that say you are worthless, that you will fail, that nobody likes you anymore, that you are past your prime, etc. The medicine helps treat a chemical imbalance, but not all mental illnesses are due to chemical imbalances. Finding a good doctor is hard as well.
I'm 48 and have been suicidal about 14 times in my life. I'm still alive to talk about it, so I survived somehow.
Generation X is called the suicide generation because so many of my generation have killed themselves or become suicidal. It is like life is so hard to live, you are on expert mode and struggling just to wake up in the morning and get ready for work, and by the time you get to work you have used up most if not all of your mental energy just getting up and getting ready.
http://boingboing.net/2016/12/11/insiders-americas-largest-c...
Mental hospitals, as shown in the link above, are not always there to help mentally ill people; some are run like prisons, and they keep you there until your insurance runs out to maximize their profits.
A mental illness is an invisible disability; when companies think of disabled people, they think of someone in a wheelchair, a deaf person, a blind person, someone missing body parts, etc. They never consider a mentally ill person or accommodating one. If the stress is too much and it is making you sick, other employees will do things to you to see how angry they can make you. Just because you can't smile due to flat affect paralyzing your face muscles, some employees think you might be up to something because you have a 'poker face' and didn't wave back when you walked in that morning, because you were too busy fighting demons in your head to see people waving at you.
There is no cure, no magic, it does not go away, you just try to learn skills to cope with it and find ways to screen out negative thoughts and maybe try a different medicine and see if it works better.
I am not 100% recovered, but I am making progress and trying to get back into programming. I am writing this to let people know that there is no magic bullet to kill a mental illness, and that we are not all violent like those public shooters they call mentally ill; those are sociopaths, and most of us are not. We are just dysfunctional in some way.
Edit: Also, I like Clinton, but her being the Condorcet winner doesn't mean much in terms of that world either. When almost all of the ballots give the top rank to one of two candidates, whichever of the two gets more votes is the Condorcet winner. But that's only the case because of first-past-the-post, both because there aren't any good third-party candidates (neither centrist nor extremist), and because the structure of the race strongly encourages voters to sign up for one bandwagon or the other.
I'm still thinking of how to explain score voting to an illiterate voter. It requires a level of sophistication that is orders of magnitude more involved than a simple checkbox next to a candidate.
Take the second diagram in the article and move the candidates around. What's the optimal strategy? To move as close to your opponent as possible, while staying closer to the centre.
Now take the diagram with three candidates, and move them around. What's the optimal strategy? To move as far away from both opponents as possible.
This works even if you consider an n-dimensional space for every conceivable issue. A two-party system encourages both parties to move towards the centre. A three- (or more) party system encourages them to spread out.
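You can check this with a one-dimensional toy model (a sketch; voters spread evenly on [0,1], each voting for the nearest candidate):

    # Toy spatial-voting model: vote share per candidate position when every
    # voter picks the nearest candidate. Voters are spread evenly on [0, 1].
    def shares(candidates, n_voters=100_000):
        counts = [0] * len(candidates)
        for i in range(n_voters):
            x = (i + 0.5) / n_voters
            nearest = min(range(len(candidates)),
                          key=lambda j: abs(candidates[j] - x))
            counts[nearest] += 1
        return [c / n_voters for c in counts]

    # Two candidates: hugging the centre wins.
    print(shares([0.45, 0.60]))       # ~[0.525, 0.475]
    # Three candidates: crowding the centre gets you squeezed from both sides.
    print(shares([0.30, 0.50, 0.70])) # ~[0.40, 0.20, 0.40]

Move the two-candidate positions toward 0.5 and the shares converge; move the middle candidate in the three-way race and it only ever fights over the thin slice between its neighbours.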
This also makes the ballot less ambiguous.
In the election, if a candidate gets 50% + 1 vote, they win. If not, anyone getting less than the turkey is tossed and cannot run in the runoff. No candidates win? Enroll a new slate.
From what I've seen of constructed scenarios that have this situation, they have 'left', 'compromise', and 'right' candidates, with the majority of voters tending to prefer left>compromise>right, or right>compromise>left.
If 'compromise' is the weakest candidate, either left or right ends up winning. But if left or right become popular enough, and make left/right the weakest candidate, then compromise wins out over both extremes.
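A tiny worked example of that flip, under instant-runoff rules (the numbers are made up):

    # Toy IRV tally: groups of (voter count, ranking). Eliminate the weakest
    # candidate each round until someone holds a majority.
    def irv(groups):
        hopeful = {c for _, ranking in groups for c in ranking}
        while True:
            tally = {c: 0 for c in hopeful}
            for count, ranking in groups:
                for c in ranking:
                    if c in hopeful:
                        tally[c] += count
                        break
            leader = max(tally, key=tally.get)
            if tally[leader] * 2 > sum(tally.values()):
                return leader
            hopeful.discard(min(tally, key=tally.get))

    # compromise weakest (16 first-choice votes): eliminated first, left wins
    print(irv([(42, ['left', 'compromise', 'right']),
               (16, ['compromise', 'left', 'right']),
               (42, ['right', 'compromise', 'left'])]))
    # right weakest instead: compromise inherits right's voters and beats left
    print(irv([(42, ['left', 'compromise', 'right']),
               (30, ['compromise', 'left', 'right']),
               (28, ['right', 'compromise', 'left'])]))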
Frankly, this actually doesn't seem like that much of a problem to me. If we end up with everyone's second-favorite choice, everyone is at least second-most happy.
For example, in the USA: without commenting on Gary Johnson's politics or experience, I think the USA would have been happier had he been chosen. Half the population is pulling their hair out over Trump. Had Clinton won, the other half would likely feel the same way. Most of the population would be less excited to see Johnson in office than their preferred candidate, but relieved that at least the opposing candidate didn't get in.
The most famous "spoiler" was Perot, not Nader. Bush v. Gore was an extremely tight election, but there were third parties that drew more votes from Bush than Gore and were within the margin of error, possibly counterbalancing Nader's effect. Whereas Perot received 19% of the popular vote, most of which was clearly drawn from George Bush, ensuring a Clinton win.
[1] http://explorableexplanations.com/ [2] https://github.com/ncase/ballot
Why do we need to place all this power in the hands of a single person anyway? Switzerland has an executive branch with 7 members from 5 different parties and a presidency that rotates annually.
PR also makes it much harder for lobbyists to influence lawmakers. Candidates don't need to convince everybody in order to represent those who identify with their party. And a candidate that represents one of many parties has to do a good job representing that party in order to keep that job.
We could move the House of Representatives to PR and keep the Senate as is.
That's most of the benefits of score voting (which is acknowledged as superior in the study here) with a runoff stage to address strategic voting.
I think a better tactic to actually use a new system is to share a vision to the general population of what voting under a new system would actually be like. Once the public at large is in favor of the general idea of moving to a new system, actually picking the best system should be more of an implementation detail.
In other words, walk the voter through a simulated ballot casting and show them what the results of the election might be under such a system.
I made an effort to do this here: http://asivitz.com/voting/ - I'm not sure how well I succeeded though.
In terms of ballot designs, they are basically all just restricted subsets of the "score" voting ballot. That is, any voter preference that can be expressed in an "approval", "ranked choice", "ranked choice with ties", or traditional "single choice" ballot, can also be expressed with a "score" ballot.
That means every voting system is a "score ballot" system with some restrictions applied to the ballot. This means that, for example, you could have an election where you allow the voter to choose whichever ballot they are most comfortable with. Then you just interpret the ballot as a score ballot.
There are multiple ways to choose a winner from a set of score ballots. But debating between them is counterproductive to getting better voting systems adopted. Just start off with one that's easy to understand (i.e. "sum of ratings", or "only the first choice counts"), and worry about improving it later.
The important thing is to give the voter the option to use a more expressive ballot. Whichever one they feel most comfortable with. You could even make it so that initially, all ballots are converted to traditional "single choice" ballots for tallying, but let voters know how the vote would have turned out under other evaluation methods (like "sum of ratings" and Schulze). I think voters would quickly realize the value of counting all of their expressed preferences.
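A sketch of what "interpret everything as a score ballot" could look like - the 0-5 scale and the rank-to-score spacing are my own arbitrary choices:

    # Sketch: normalize different ballot styles onto a 0-5 score ballot,
    # then tally with plain 'sum of ratings'.
    CANDIDATES = ['alice', 'bob', 'carol']
    MAX = 5

    def from_single_choice(pick):
        return {c: (MAX if c == pick else 0) for c in CANDIDATES}

    def from_approval(approved):
        return {c: (MAX if c in approved else 0) for c in CANDIDATES}

    def from_ranked(ranking):
        # spread ranks evenly over the score range; unranked candidates get 0
        step = MAX / max(len(ranking) - 1, 1)
        scores = {c: 0 for c in CANDIDATES}
        for i, c in enumerate(ranking):
            scores[c] = round(MAX - i * step)
        return scores

    ballots = [from_single_choice('alice'),
               from_approval({'alice', 'bob'}),
               from_ranked(['bob', 'carol', 'alice'])]
    print({c: sum(b[c] for b in ballots) for c in CANDIDATES})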
...
But that is a very cool site. Probably the kind of site the web was intended for, don't you think?
[1] https://news.ycombinator.com/item?id=12950566#score_12952384
At least for parliamentary / congressional elections it is both more proportional than FPTP and simpler to count than all the alternatives mentioned. You simply provide the voters with two ballot papers: one to select a local representative (counted and decided in the same way as a traditional FPTP election), and a second ballot paper where the voter can choose which party they want to have more power at the parliamentary / congressional level.
The trick is that these second votes are totalled across the whole nation and used to calculate the ratio of support for each party nationally, then those ratios are used to normalise the voting power of the representatives in the legislature. So if 10% of MPs elected are from the Triangle Party, with 20% of the national vote, then each MP gets effectively a double vote on bills, relative to a nominal MP with a proportionally correct amount of national support.
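Concretely, the weighting in that example works out like this (a sketch of the scheme as described; note that real-world MMP systems usually add compensatory seats rather than weighting votes):

    # Each party's MPs get a vote weight of
    # (national vote share) / (share of seats they hold).
    seats = {'Triangle': 10, 'Square': 90}      # MPs per party (toy numbers)
    votes = {'Triangle': 0.20, 'Square': 0.80}  # national second-vote share

    total_seats = sum(seats.values())
    for party in seats:
        weight = votes[party] / (seats[party] / total_seats)
        print(party, 'MP vote weight =', round(weight, 2))
    # Triangle: 0.20 / 0.10 -> 2.0  (the 'double vote' from the example)
    # Square:   0.80 / 0.90 -> ~0.89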
https://en.wikipedia.org/wiki/Tyranny_of_the_majority
To aggravate matters: many elections lately are won by tiny margins, and not all citizens cast a vote. Hence, the majority per se is not really setting the course of nations.
On a side note, I wonder why close results happen. I do not think it is an accident (some that I can remember now that were almost split in half: the popular vote in the 2016 US elections, the yes/no vote for the peace treaty with FARC in Colombia, the 2016 presidential elections in Peru). Maybe there is no pattern either, but it seems odd that when facing an important decision, voters split in half. (My own conspiracy theory is that, given the lack of grey-area options (a ranking effect, as proposed by the OP), voters MUST pick a side and are manipulated to veer in one direction by smart and sophisticated communications.)
Duverger's law tells us that we will not see electoral diversity in the U.S. until we change the way we vote.
Some might say that any support for a better system is good, despite the motivations, but I disagree. All this support will vanish as soon as our current system chooses the "right" (well, the left) candidate.
sigh.
> ...will be moving his nation towards a better voting system in 2017.
Well, you know, maybe. If the people want it in their fancy questionnaire. They might not, though, since clearly FPTP is the most effective now that it has resulted in a Liberal majority.
- get rejected right away, or
- the manuscript gets distributed to fellow scientists (who review for free); the reviews get collected and the manuscript rejected, or
- we get a chance to address the reviewers' concerns, resubmit, and get rejected, or
- the editor does some proofreading and publishes the paper behind a paywall. I lose all the rights, and I may need to ask the journal for permission to use part of it in my thesis; otherwise I risk plagiarizing my own writing.
Sometimes when I read preprints in computer science/physics/bioinformatics etc. I feel in those disciplines researchers are a big happy family, and we biologists are locked in a prisoner's dilemma because we can't communicate. Then we fight each other and the publication companies are selling tickets for others to watch.
[0]: http://www.idea.org/blog/2011/04/04/fees-and-other-business-...
[0] https://www.eff.org/deeplinks/2016/08/stupid-patent-month-el...
What is the value that Elsevier is adding to have the de-facto monopoly on the entire enterprise of scientific publishing in so many scientific disciplines?
> For what it's worth, I think the "fiduciary responsibility" argument (which seemingly gets trotted out almost any time anyone calls out a publicly traded corporation for acting badly) is utterly laughable. As far as I can tell, the claim it relies on is both unverifiable and unenforceable. In practice, there is rarely any way for anyone to tell whether a particular policy will hurt or help a company's bottom line, and virtually any action one takes can be justified post-hoc by saying that it was the decision-maker's informed judgment that it was in the company's best interest.
a) plead with Uber's customer service to do so
or
b) add another payment method (like another credit card)
This, of course, is horribly bad practice. I can only imagine that they arrived at this very peculiar arrangement after extensive A/B testing - Uber has hired plenty of FB folks and those people tend to be really into that kind of thing. I haven't seen this kind of outright customer-hostility from a large Internet company.. well, ever, before.
So, no, I'm not surprised that this company is doing other unethical things - it sort of seems interwoven into their DNA.
If true, that is fantastically ludicrous.
It seems I wasn't paying attention, in 2014 - as this "God view" news passed me by. I will be keeping a closer eye on this as it plays out.
Uber obviously seems to be in a strong position, but going only by this article, Uber might fare poorly in a multi-region privacy-legislation legal battle (war?).
Side note: consider the value to foreign (or domestic) intelligence agencies of this weakly-guarded pot of gold.
[1] http://www.theverge.com/2016/11/30/13763714/uber-location-da...
Now I'm from a third-world country and can't afford to buy a $1000 phone every year, so I have to be careful with the life of my phone.
The workaround, I found, is to disallow location access to the Uber app when not using it, and allow access only while I'm using the app. This, however, is a pain, and the Uber app behaves weirdly when I do so (the previous trip does not end until hours after it actually ended).
Very poor UX from Uber; potentially dangerous, definitely unethical. This is definitely a trend -- startups start out caring about their customers, but once they grow big, they become callous and even malicious when it comes to users (I don't ask them to give every customer personal support, but not misusing customers is the least I can expect).
> Individual users' data is very closely guarded internally. It's immensely difficult to look at user data without specific access. Overwhelmingly, this data is queried in aggregate and fed into machine learning systems. The risk of abuse is exceptionally low.
Obviously this doesn't add up. What gives?
I can even silo whatever service I want into its own browser to limit tracking, and all location/permissions/etc are all sandboxed by the browser.
A huge bonus is battery life + ad blocking.
Mods should probably change the OP to link there.
Do they still have "Ride of Glory" detection?
Without locking down such access, you get incidents like these (and this was even when Google purportedly had strong auditing): http://www.pcmag.com/article2/0,2817,2369188,00.asp
> Google this week confirmed that it fired an engineer who accessed the Gmail and Google Voice accounts of several minors and taunted those children with the information he uncovered.
The public sector has its fair share of these too: http://articles.orlandosentinel.com/2013-01-22/news/os-law-e...
Here's a URL to the plaintiff's declaration: https://www.documentcloud.org/documents/3227535-Spangenberg-...
Lots of tidbits there...including how all payroll information is apparently contained in an "unsecure Google spreadsheet"
It works for writers, celebrities, etc. - why not the rest of us?
EDIT to clarify: this is a serious comment, you can read it literally.
There are a million places to talk negatively about everything. Here, we're trying to build things. We know no one is perfect. Let's make this place a bastion of positivity instead of negativity.
The MI8 card's HBM gives it a great power and performance advantage (512 GB/s peak bandwidth) even though it's on 28 nm. NVIDIA has nothing with even remotely comparable bandwidth in this price/perf/TDP regime. None of the NVIDIA GP10[24] Teslas have GDDR5X -- not too surprising given that it was rushed to market, riddled with issues, and barely faster than GDDR5. Hence, the P4 has only 192 GB/s peak BW; while the P40 does have 346 GB/s peak, it has a far higher TDP and a different form factor, and is not intended for cramming into custom servers.
[I don't work in the field, but] To the best of my knowledge, inference is often memory-bound (AFAIK GEMV-intensive, so low flops/byte), so the Fiji card should be pretty good at inference. In such use-cases GP102 can't compete in bandwidth. So the MI8, with 1.5x the flop rate, 2.5x the bandwidth, and likely ~2x higher TDP (possibly configurable like the P4), offers an interesting architectural balance which might very well be quite appealing for certain memory-bound use-cases -- unless of course the same cases also need large memory.
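Back-of-the-envelope on why bandwidth is the whole story for GEMV (my numbers, fp32, ignoring caches and the vector operands):

    # fp32 GEMV: ~2*n*n flops while streaming ~4*n*n bytes of matrix data,
    # i.e. ~0.5 flop/byte. When memory-bound, achievable flops are just
    # bandwidth * arithmetic intensity; the peak flop rate barely matters.
    INTENSITY = 0.5  # flop/byte for fp32 GEMV

    for name, bw_gb_s in [('MI8 (HBM)', 512), ('P40 (GDDR5)', 346),
                          ('P4 (GDDR5)', 192)]:
        print(name, '->', bw_gb_s * INTENSITY, 'GFLOP/s GEMV ceiling')
    # MI8's 512 GB/s gives ~1.5x the P40's memory-bound ceiling and ~2.7x
    # the P4's, regardless of their respective peak flop rates.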
Update: I should have looked closer at the benchmarks in the announcement; in particular the MIOpen benchmarks [1]. The MI8 clearly beating even the Titan X Pascal, which has higher BW than the P40, indicates that this card will be pretty good for latency-sensitive inference as long as stuff fits in 4 GB.
[1] http://images.anandtech.com/doci/10905/AMD%20Radeon%20Instin...
I'm looking forward to the day that Nvidia gets some competition in the GPUs-for-deeplearning market. Further, doing some smaller Deep learning experiments on my MacBook Pro with AMD discrete GPU is another benefit I'm looking forward to ;)
I had to ship out a high-end gamer GPU with a dummy HDMI adapter for this purpose recently. But it's obviously not very efficient. It would also be nice to be able to run multiple screens in parallel, not just one per GPU.
I doubt there will ever be a product for my use case, but one can dream...
That said, these are cool. I think they're lower power than the Nvidia equivalents, but I could be mistaken (I just recall the Tesla models being power hungry... enough to cause a real problem in a datacenter rack).
That said, I would love to be proven wrong. Healthy competition such as this fosters much better results. Also CUDA is not without issues in certain matters.
I am all for choice, but AMD has a lot of catching up to do.
What is especially noticeable to me is the back, which attributes the sheet to Leonardo da Vinci; it is most likely original, but on its own would hardly sell for a price that high.
The front, on the other hand, is almost too perfect - too unbelievable, too good to be true.
You want to see what's inside, but you haven't decided if you care; by clicking it, however, you're tricked into thinking you do (after all, you are the one who clicked on it, now you're invested, it's not like it was the carefully crafted headline /s).
Do I exaggerate?
He was merely advising the first author, who actually wrote the code. Source: (requires Google login) https://groups.google.com/forum/#!topic/word2vec-toolkit/XC7... and https://groups.google.com/forum/#!msg/word2vec-toolkit/Q49FI....
This highlights something people on HN don't appreciate about machine learning: how hard it is to actually trust results, and how likely it is that the results were affected by bugs in the code or how the dataset was handled. In this case the second author was only able to replicate if he didn't shuffle the dataset. Graduate students almost never write tests for their code.
Distributed Representations (1986) http://stanford.edu/~jlmcc/papers/PDP/Chapter3.pdf
"Each entity isrepresented by a pattern of activity distributed over many computingelements, and each computing element is involved in representingmany different entities."
Full Book: http://stanford.edu/~jlmcc/papers/PDP/
Bonus points for recognizing the bullshit parade that is the current startup world. E.g.: Node.js has value, but it's mostly the same wheel we've had for 20+ years. Or that MongoDB's changelog has consisted of standard SQL features for the past five years, and that pgsql would have been just fine (had people read some Boyce-Codd anyhow).
- ability to assess tech/architecture risks in apps
- experience in devops automation ("secdevops" if you will)
- proven skill in communication regardless of depth
The ideal candidate would have all three, but I could settle with any two of these and still be happy.
I am not currently hiring, but I'll gladly keep any CVs I receive and prioritize follow-ups with anyone who reaches out to me directly. Austin/DC for curious souls.
---
p.s. the web appsec space is in ludicrous demand. If you've got a breaker mindset, you'll probably come out ahead if you read up on it. If you're a developer right now and want to dip into it, I'd suggest: https://www.amazon.com/Web-Application-Hackers-Handbook-Expl...
Trust me, us security folk will thank you. Heck I'd suggest it to non-hackery devs too. It's a good way to find out how us security types see the world.
I look to hire people who just need a job. People who are qualified, but not overly qualified. People I know will depend on the job for a long time, but not looking to make it their lives. Hard workers - getting there on time, but also leaving at the stroke of 5. Ivy league schools are a red flag. Huge resumes are a red flag. These people will constantly question whether every decision is optimal, prod incessantly at company strategy, continuously try to impress, and are always hungry for praise, recognition, and "interesting work." When they get bored after 6 months, they quit and go somewhere else (remember they can easily do so because of their pedigrees), often to a competitor, bringing company secrets with them.
I need someone loyal, who knows how to take orders without question, and is prepared to do the work that needs to be done day in and day out because they want the paycheck. Reading the above, you might think I'm a terribly demanding boss, but using this hiring strategy has produced a 100% employee retention rate and by all accounts we are all quite happy.
Aside from the obvious interest in building container orchestration systems, I look for a passion to solve real user problems, not only building a piece of tech.
Bonus points for knowing about Docker or containers or clouds or Golang or security.
More points for meeting users where they are. And the most bonus points for leadership and initiative.
We're particularly looking for someone to lead and/or manage our software eng team building security features into Kubernetes and GKE.
We hope to expand our team in early 2016. We have mainly Java micro-services, with some PHP and native apps on the front end. We will likely add to the Java team in addition to an iOS dev.
Nice atmosphere, nice people. We try to select for people who don't like to be micromanaged (but are still friendly) and assign responsibility not tasks wherever able. Varying degrees of success but overall happy with the approach.
Looking for at least one highly skilled person with Java experience and ideally a fin-tech background. Not sure the salary would be competitive with SF, but the cost of living is low and it's a great lifestyle (for those who like daily excitement/challenges and learning new cultures). On site. Other roles would likely be unsuitable (read: cheap!) for the HN audience.
For 2017, I want to hire more engineers with Kubernetes, CoreOS, and Go experience. My team has deep Linux systems administration experience but we've automated ourselves out of most of the day-to-day admin work of yesteryear. Our future hires will be heavily focused on automation. We've already automated builds, testing, deployment, monitoring, and metrics in a Kube/Docker pipeline. I expect to automate load balancing and hardware deployment in 2017. I also expect that we will adapt many of our non-Kubernetes data services for running containerized in Kube.
Also backend and data engineering roles (C++/Java/Go/Kafka/etc) are in high demand here.
SoundHound is hiring in SF/Santa Clara/Toronto.
+ for new web platform things like Service Workers, advanced SVG.
Couldn't care less about whatever framework is hot this week.
Advice for senior engineers: brush up your practical programming. If you've been in an architect/leadership role, you may be rusty. Make sure you're comfortable on both whiteboard and keyboard.
If you spent the last 5 years writing iPhone apps, we expect you to know iPhone development pretty well. Memory management is the obvious area here.
Be ready to explain the most recent projects on your resume. Think outside the box - if you wrote code to process messages from a black box, how do you think the black box worked? If you consumed JSON messages, how much can you explain of JSON and JSON parsers? Many projects are so narrow in scope that we can't have a meaningful conversation about them, so be prepared to broaden into adjacent areas.
Advice for new grads and early-career engineers: have some solid, non-trivial code on github (or equivalent) and make sure we know about it. Be prepared to discuss it and explain design decisions. Few do this.
This post is my take on the question - what follows is especially subjective and not representative of shopkick:
Don't put stuff on your resume that you don't know. Or, brush up the skills featured on your resume.
Learn a scripting language, especially if you're a server engineer. People who only know Java/C++ are at a big disadvantage if they have to write code in an interview. How big? Turning a 5 minute question into 35 minutes is typical - and it gets worse. One very smart, very experienced man took 45 minutes on such a question. Of course, don't just port Java idioms to Python; learn Python idioms. Good languages are Python/Ruby/Perl. I think a HN reader probably doesn't need to be told this, but just in case. Properly used, scripting languages teach techniques which carry over to compiled languages.
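To illustrate the idioms point with a contrived example - both of these work, but only one looks like Python:

    # Java idiom ported to Python: works, but reads like a translation.
    def squares_of_evens_java_style(nums):
        result = []
        for i in range(len(nums)):
            if nums[i] % 2 == 0:
                result.append(nums[i] * nums[i])
        return result

    # The Python idiom: same behavior, one comprehension.
    def squares_of_evens(nums):
        return [n * n for n in nums if n % 2 == 0]

    assert squares_of_evens_java_style([1, 2, 3, 4]) == squares_of_evens([1, 2, 3, 4])

An interviewer can tell within a minute which of the two you'd write unprompted.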
Server engineers should be comfortable with either vi or emacs. And with basic Linux. Personally I find it astounding that a server candidate would be unfamiliar with ls and cat, but it happens.
I hope this is helpful and doesn't sound arrogant.
We may also need a strong lead for a new business unit, a role akin to 'founder lite': you run a business unit with two others, you have your own burn rate, your own P&L, etc. The strongest skills someone can have for it are former founder experience (aka broad experience doing lots of things, moving quickly, MVP, etc.).
Palo Alto, San Francisco, Seattle.
This shows up in a resume in lots of different ways. For some people it is a rich Github profile. For others it is that they paid their way through college by building websites or apps.
We primarily hire Ruby on Rails developers who work remotely. Seeing in someone's GitHub profile that they like to contribute to open source and know how to collaborate with other developers is really important.
- Developers: We use mostly Java, Swift, and JS (Angular 2), but we always look for polyglot developers, full-stack developers, or whatever you want to call someone who sees the language as a means to achieve a goal and not the goal itself.
- DevOps: Deep EC2 knowledge and experience. AWS certification is a plus.
Our stack is Node.js/React/Postgres, so knowing any/all of those is a bonus, but we don't specifically target those skills; we instead look for a diverse, intelligent set of engineers who have either a strong technical background or a newer technical background plus heavy experience in a non-programming field (mathematics, economics, architecture, teaching, customer support, etc.; they all have their benefits). Interest in being "full stack", participating heavily in the product management process (strong opinions, loosely held!), and a belief in the critical importance of design & UX (unfortunately still heavily undervalued in the enterprise space...) are important.
Hiring in San Francisco & Washington, DC by the way.
troy.goode@lanetix.com
Since it's a jr role I'm looking more for evidence that they want to learn than examples of accomplishments.
Interviews have practicals where you work on problems you'll see regularly with skills we expect you to have (like writing code, debugging, and task breakdown). Good communication, pairing skills, quick learning, and taking responsibility for your circumstances stand out.
Since we also do private consulting and project-based work in addition to our workshops, we have recently got to talking with our clients about helping them get full-time employment. So I think this post is pretty timely and very relevant to us. Here are a few reasons why we think React is important for the job market.
Lots of companies are choosing React for their front-end these days. It allows your front-end devs to embrace the full power of JavaScript -- no more messing around with jQuery and tons of plugins. Sure, there's a bit of a learning curve, like with all new things. But there is now a large and devoted community around React, and it's only growing. A personal friend of mine convinced his boss to take their entire app, 10,000 lines of jQuery, and rewrite it from scratch in React. He was a new hire (and also a great communicator/salesman).
Coding bootcamps are embracing React as well. Since most of these institutions survive year-to-year based on how good their placement numbers are for graduates, they pay close attention to trends in development. One could argue that since they are probably more technical than the average recruiter, they may even have a better grip on the pulse. Fullstack Academy, of New York and Chicago, recently wrote a blog post on why they're moving their curriculum from Angular to React (https://www.fullstackacademy.com/blog/angular-to-react-fulls...). App Academy (SF & NYC) has had React in its curriculum for a number of months (https://www.appacademy.io/immersive/curriculum). And I've personally spoken with alumni of Hack Reactor in SF who said that most students built their capstone project in React (or attempted to).
Is React the best solution? That's arguable, as all things are. It also depends on what you want to accomplish. But for the relevancy of this post -- asking what tech skills people will be hiring for in 2017 -- I would argue that React is going to be one of the top skills. And with that includes...
Redux, Webpack, Immutable, RxJS
As far as the backend goes, the top three technologies that we've seen with our clients are:
Python, Go, Docker
But of course, all of this is moot without the foundation of strong JavaScript skills. Our students who have strong JS skills pick up React quickly -- those who don't only get confused.
Anyways, if you are skilled in React and other related technologies and you are looking for work, you can always email me: ben at realworldreact dot com with some info about yourself and/or your resume.
I'm not a hiring manager, but as the CTO I do review a lot of resumes incoming for technical positions we are hiring for.
The vast majority of applicants do not appear to be taking any time at all aside from selecting their resume to upload and clicking submit. It doesn't seem like they even read the job requirements, since 90% of them do not meet the minimum requirements we post. Some of them are not even developers, yet they apply for a developer position.
If someone does appear to be relevant and also included a cover letter relevant to the position, I will respond, regardless of whether they're a fit or not.
For me the biggest pain is the sheer amount of irrelevant submissions, which makes you numb after a while. This is why I don't believe in job postings anymore and mostly do headhunting.
Hope this helps!
Dear [First Name],
Thank you for your time and interest in a career at Snap Inc. At this time, our team has decided to evaluate other candidates for the [role]. However, we encourage you to apply in the future for positions matching your goals because our needs change frequently. Thanks again!
Best wishes,
Snap Inc.
They must receive an enormous number of applicants from all over, so even though I didn't make it anywhere in the interview process, I'm appreciative of receiving a response and getting closure.
When I was employed, our HR department used Monster's ATS. They found it difficult to use and didn't bother to inform candidates of their application status.
I have been in the situation before where replying to everyone with anything meaningful is simply not feasible. Maybe it is for a recruiter whose full-time job is that, but not for a hiring manager who also has to balance their regular duties.
I have spent much more time on the applicant side of things than the hirer side, so I understand the goal. It can be frustrating to not get anything. If it is a job you really want, you may be inclined to hold everything else off until you hear something, just on the hope that maybe they haven't gotten to your resume yet. So a little closure would be nice.
So maybe a better question for you is: what are you trying to accomplish by getting hiring managers to reply to all candidates? Give them closure or provide feedback? If the former, then maybe a simple "no thanks" will do.
By the way, I am speaking specifically about the scenario where a candidate sends in a resume and doesn't hear anything back. In my opinion, even if the hiring manager or recruiter does a phone call, the candidate deserves a clear "no" email at a minimum.
By filling out the application form on our website, you load all the information into the form for me, and are guaranteed that a recruiter will follow up on your entry. If you want to send an email to the hiring manager as well to explain why you are so awesome, that's fine, but it's probably not going to help your chances of getting a job any more than just applying.
Target a small handful of companies strongly relevant to your experience and interests, and start informally chatting with people who work there. Ask about the culture. Get coffee. Ask how they like working there. Talk about what you've been working on that's related. Ask some questions about interesting problems they're trying to solve. Be interested and interesting. Points for going straight to an Eng VP or CTO -- even if they don't have the time to talk to you, they'll pass it to one of their underlings who does, and when your VP/CTO tells you to follow up with someone, you do.
The resume should be mostly a formality AFTER they've expressed some interest in your skills and have invited you to formally interview.
And if it doesn't pan out, you've already made personal connections with people there. Get coffee again for feedback.
We will respond to everyone that gets past this first round. And if you get a phone / in person interview we will definitely call you back to say 'no sorry'.
(That's at resume review stage. If a candidate has actually talked to you, including any kind of interview, then they deserve a response, and I do follow up with everyone who gets to that stage.)
They sent out a mass email about 3-4 days later saying they had 550 applicants they were trying to sort through - so hold tight, basically.
Now I pretty much know I'll get a mass email "no" if they don't decide to interview me. Which is nice.
https://www.hireloop.io/how-does-it-work
Goes to 403 Forbidden. At least put something in there???
403 Forbidden
Code: AccessDenied
Message: Access Denied
RequestId: 4XMR36267413GRGBC72
HostId: BGu7DieumfZVCvftdpMIhXeFm2Qyyy2TyJ+P9jpQr3csSyYNIZBoGKhush8nMc4rHSj6+HighM=3p-
All other pages, including Pricing page, work tho ;) https://www.hireloop.io/#pricing
Boards could be designed to generate magnetic fields via embedded current loops. Instead of having a wire connect two components in a straight line (shortest path approach), it could be done in a way that meanders around, intentionally creating large curls. Since we're talking about scales of 1e-9m, these fields would probably be pretty strong.
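As a rough sanity check (my numbers, not from the article: assume, say, a 1 mA trace current and a 100 nm loop radius), the field at the center of a single current loop is

$$B = \frac{\mu_0 I}{2R} = \frac{(4\pi \times 10^{-7}\,\mathrm{T\,m/A})(10^{-3}\,\mathrm{A})}{2 \times 10^{-7}\,\mathrm{m}} \approx 6\,\mathrm{mT}$$

so locally the fields can indeed be non-trivial.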
Now, I don't know too much about superconductors, but deep space tends to be pretty fucking cold (~2.7K -- surely lower than the critical temperature for many superconducting materials). It might even be possible to create a Meissner cage around the important components, in a way that shields them from external charged particles without the field itself harming them.
Has this theory been tested? After all, it works for the Earth. I am afraid doing that might also be detrimental to our electronic components (unless we can somehow create a diamagnetic cage to selectively protect our components).
So currently we are able to reach Alpha Centauri in 100 years? And now this magical chip can travel at 20% the speed of light? I'm either a horrible reader or this article left some really important details out!
I think that's a typo - unless the light actually has a frequency of 1 kHz, or I misunderstood the article.
My Pebble Time Steel broke (actually just the band) just before these announcements, and I got a refund (spendable only at the company I bought the Pebble from, but that's ok). But I miss it! I miss the notifications; I had gotten completely used to putting my phone somewhere, anywhere within BT range for the duration of the day. 99% of notifications do not require immediate attention (I also strictly filter what is allowed through to the watch), but some do, and the ability to see that on your wrist is gold to me.
For now, sadly, there is no replacement that even comes close. I really want an always-on screen and at least 5 days of battery life.
I still wear my original Pebble. It's reliable with very long battery life, and is one of the very few wearables that works well while wearing gloves.
that's depressing :( i was hoping that they had just misexecuted, and someone else would step in and fill the niche of "e-ink watch with long battery life that is geared towards displaying things your phone sends it"; i have no use for fitness tracking and biometrics, and pebble's featureset and reasonably open ecosystem was ideal for me.
the saddest thing is that even buying used pebbles on ebay won't help me much with their servers going offline :(
1. Resemble an actual, reasonably sized watch.
2. Display notifications from my phone along with their actions (such as marking an email as "Done" in Inbox, or liking a Tweet).
3. Allow me to respond with my voice for notifications that support quick replies.
The Pebble Time Round was great at all three, and (as far as I know) was the only watch to have the features that I wanted in a form factor that resembled a reasonably-sized watch. The only other alternatives currently are Android Wear watches, all of which don't look like the kinds of watches I like to wear (they're thick, with large bezels and superfluous embellishments).
If I knew for sure that a Pebble Time Round would continue to be usable for the next six months, I'd buy one in a heartbeat, but the uncertainty makes me hesitant to spend the $100+ on one.
Perhaps this is true, but hopefully not any longer among HN readers:
- Antonie van Leeuwenhoek: the world's first microbiologist; made huge improvements to the microscope
- Martinus Beijerinck: discoverer of the virus
- Produces the Nuna, which has won the World Solar Challenge in Australia 5 times
I honestly don't understand this line of thinking. Why not, you know, sell something for more than it costs?
The screen may not be as gorgeous as a phone screen, but it is on all the time.
It's also very developer friendly. Heck, you can even create watchfaces in Javascript.
Before the acquisition, I would not trade it for the first generation Apple Watch. Maybe the second one, just maybe, if the community does not find a way to keep the current Pebble devices running.
Other than the fact that he would need to fire everybody, what is wrong with reducing costs to make it profitable like this? The problem seems like there wasn't enough profit to support a staff of 120 people. I can't imagine the VCs would object. They already lost their money.
This shift was primarily motivated by two factors:
1. I lost interest in the activity tracking features.
2. Even worrying about the battery once every 6 months was too much.
And supplemented by a third, which is that the fancy watch wasn't very readable and lacked a second hand. That made it less capable at the main thing I use it for: keeping track of the time.

I looked hard at a Pebble at one point before deciding that, since my phone is almost always in my pocket or on the table in front of me, getting it into position for viewing information would take only nominally more effort and probably gets me to a place where I can act on whatever information the device is telling me much more efficiently. Also, having a non-user-replaceable battery means that the device will only live for so long, and I'm really trying to limit my consumption of disposable technology.
I think that, for now, my most optimistic case for smartwatches is that they're at about the same phase as handheld computing was 15 or so years ago. The technology is really interesting, but there needs to be more technology development and ironing out of subtle details before the idea is quite ready to take over the consumer market.
Over the last few years, I noticed store catalogs giving much space to fitbit, and little to pebble (or apple watch).
Perhaps this is a crude exoscope, for viewing outside the bubble/RDF? Like Buffett observing people still actually using American Express, outside Wall Street's gloom.
Those catalogs now include iPhone 5s and Samsung Galaxy S5 alongside the flagships, suggesting "good enough" and flagships have overshot...
Where will the tech, talent and investment go, if smart phones and watches are good enough, and VR/AR is a wash?
Pebble's problem is the lack of features, and specific to users of iOS like myself, relative instability and weirdness when it comes to important things like notifications. The monochrome screen is also less attractive though IMO bearable.
Apple Watch's problem is battery life. It's absolutely unattractive to me to have yet another thing I need to charge once a day. It wins on pretty much everything else but such a low battery life is crippling to this sort of device.
I feel like between the two you have a rough approximation of the laptop offerings of the early 80's. Yes, they did exist and some people used them, but by and large they were terrible for the functions they were built for. I have a feeling in not even that much time, we'll have proper smart watches with good integration across platforms that will have a screen like Apple's and the life of a Pebble, but for now, laying out $250 for what's basically a bleeding edge prototype is unattractive to the mainstream consumer.
Edit: Question for HN: Would you all consider a Fitbit to be a smartwatch? I mean it's a watch-esque device that does more intelligent things than just a regular watch but I feel like that's more of a wearable monitoring device.
I'd be interested to know what he did get out of the deal. Sold for "south of $40M" and with the statement, "He's not leaving Pebble as a wealthy man."
---
EDIT: Thanks for the downvote. Can I have the summary, please?
The standard git workflow (and this GitHub feature) seems to promote resolving the conflicts alongside all of the other changes in the merge working copy. This makes me nervous, as there's no way to differentiate the new lines that were introduced to resolve merge conflicts from the thousands of lines of (previously reviewed) code from the feature branch.
If you're not careful, completely unrelated working-copy code and behavior can be introduced in a "merge commit" and neither you nor any of your reviewers will notice. "Looks good to me."
I recently demonstrated a really simple bagged decision-tree model that "predicts" whether the scanned part will go on to fail at downstream testing with ~95% certainty. I honestly don't have a whole lot of background in the realm of ML, so it's entirely possible that I'm one of those dreaded types who apply principles without fully understanding them (and yes, I do actually feel quite guilty about it).
The results speak for themselves though - $1M/year scrap cost avoided (if the model is approved for production use) in just being able to tell earlier in the line when something has gone wrong. That's on one product, in one factory, in one company that has over 100 factories world-wide.
The experience has prompted me to go back to school to learn this stuff more formally. There is immense value to be found (or rather, waste to be avoided) using ML in complex manufacturing/supply-chain environments.
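For the curious, here's a minimal sketch of that kind of model in scikit-learn (the features and labels below are synthetic stand-ins, not our real scan data):

```python
# Bagged decision trees: an ensemble of trees, each trained on a bootstrap
# sample of the data, voting on whether a part will fail downstream.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))                  # 12 in-line scan measurements per part
y = (X[:, 0] + 0.5 * X[:, 3] > 1.2).astype(int)  # 1 = part fails downstream testing

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50).fit(X_tr, y_tr)
print("holdout accuracy:", model.score(X_te, y_te))
```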
One of the products the company I work for sells more or less attempts to find duplicate entries in a large, unclean data set with "machine learning."
The value added isn't in the use of ML techniques itself, it's in the hype train that fills the Valley these days: our customers see "Data Science product" and don't get that it's really basic predictive analytics under the hood. I'm not sure the product would actually sell as well as it does without that labeling.
To clarify: the company I work for actually uses ML. I actually work on the data science team at my company. My opinion is that we don't actually need to do these things, as our products are possible to create without the sophistication of even the basic techniques, but that battle was lost before I joined.
In our space, the recent AI / ML advances have made things possible that were simply not realistic before.
That being said, the hype around Deep Learning is getting pretty bad. Several of our competitors have gone out of business (even though they were using the magic of Deep Learning). For example, JustVisual went under a couple of months ago ($20M+ raised) and Slyce ($50M+ raised) is apparently being sold for pennies on the dollar later this month.
Yes, Deep Learning has made some very fundamental advances, but that doesn't mean it's going to make money just as magically!
We use ML/Deep Learning for customer-to-product recommendations and product-to-product recommendations. For years we used only algorithms based on basic statistics, but we've found places where the machine-learned models outperform the simpler models.
Here is our blog post and related GitHub repo:
https://aws.amazon.com/blogs/big-data/generating-recommendat...
https://github.com/amznlabs/amazon-dsstne
If you are interested in this space, we're always hiring. Shoot me an email ($my_hn_username@amazon.com) or visit https://www.amazon.jobs/en/teams/personalization-and-recomme...
1. Course Recommendations. We use low rank matrix factorization approaches to do recommendations, and are also looking into integrating other information sources (such as your career goals).
2. Search. Results are relevance ranked based on a variety of signals from popularity to learner preferences.
3. Learning. There's a lot of untapped potential here. We have done some research into peer grading de-biasing [1] and worked with folks at Stanford on studying how people learn to code [2].
We recently co-organized a NIPS workshop on ML for Education: http://ml4ed.cc . There's untapped potential in using ML to improve education.
[1] https://arxiv.org/pdf/1307.2579.pdf
[2] http://jonathan-huang.org/research/pubs/moocshop13/codeweb.h...
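To make the low-rank factorization in (1) concrete, here is a toy truncated-SVD sketch (nothing like our production system; the ratings matrix is made up):

```python
# Factor a users x courses matrix into k latent dimensions; the low-rank
# reconstruction scores unobserved cells, which can be read as recommendations.
import numpy as np

R = np.array([[5, 4, 0, 1],   # rows: users, cols: courses, 0 = unobserved
              [4, 5, 1, 0],
              [0, 1, 5, 4],
              [1, 0, 4, 5]], dtype=float)

U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 2
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
print(R_hat.round(2))   # high values in formerly-zero cells suggest recommendations
```

(A real system treats missing entries properly rather than as zeros, e.g. with alternating least squares, but the shape of the idea is the same.)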
The same coworker also used decision trees to analyze query performance. He trained a decision tree on the words contained in the raw SQL query and the query plan. Anyone could then read the decision tree to understand what properties of a query made that query slow. There have been times where we've noticed some queries behaving oddly, such as some queries having unusually high planning time. When something like this happens, we are able to train a decision tree based on the odd behavior we've noticed. We can then read the decision tree to see what queries have the weird behavior.
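As an illustration of that pattern (not the actual system; the features and data here are hypothetical), the readable-tree part looks like this in scikit-learn:

```python
# Train a shallow decision tree on query properties, then print it as text
# so anyone can read off what makes a query "slow".
from sklearn.tree import DecisionTreeClassifier, export_text

feature_names = ["num_joins", "has_order_by", "planning_time_ms", "est_rows"]
X = [[1, 0, 2, 100],
     [6, 1, 40, 1000000],
     [2, 0, 3, 500],
     [8, 1, 55, 2000000]]
y = [0, 1, 0, 1]  # 1 = exhibits the odd behavior we noticed

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=feature_names))
```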
The primary advantage for the customer is that it's easier to use and faster to troubleshoot.
https://www.sumologic.com/resource/featured-videos/demo-sumo...
We work with B2B and B2C SaaS, mobile apps and games, and e-commerce. For each of them, it is a generalized solution customized to let them know which end users are most at risk of churning. The time range varies depending on their customer lifecycles, but for the longest lifecycles we can, with high precision, predict churn more than 6 months ahead of actual attrition.
Even more important than "who is at risk?" is "why are they at risk?". To answer this we highlight patterns and sets of behavior that are positively and negatively associated with churn, so that our customers have a reason to reach out, and are armed with specific behaviors they want to encourage, discourage, or modify.
This enables our customers to try to save their accounts / users. This can work through a variety of means, campaigns being the most common. For our B2B customers, the account managers have high confidence about whom they need to contact and why.
All of this includes regular model retraining, to take into account new user events and behaviors, new product updates, etc. We are confident in our solution and offer our customers a free trial to allow us to prove ourselves.
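For flavor, a heavily simplified sketch of the general shape (toy data and feature names; our production models are retrained and far richer):

```python
# Logistic regression over per-user behavior counts: predict_proba answers
# "who is at risk?", and the coefficients hint at "why are they at risk?".
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["logins_30d", "support_tickets", "feature_x_uses"]  # hypothetical
X = np.array([[20, 0, 5], [1, 3, 0], [15, 1, 2], [0, 4, 1]])
y = np.array([0, 1, 0, 1])  # 1 = churned

model = LogisticRegression().fit(X, y)
print(model.predict_proba(X)[:, 1])          # churn risk per user
for name, coef in zip(features, model.coef_[0]):
    print(name, round(coef, 2))              # +: associated with churn, -: retention
```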
I can't share details, but we just signed our biggest contract yet, as of this morning. :)
For more http://appuri.com/
A recent whitepaper "Predicting User Churn with Machine Learning" http://resources.appuri.com/predicting_user_churn_ml/
One way we're applying this is automatic creation of panoramic tours. Real estate is a big market for us, and a key differentiator of our product is the ability to create a tour of a home that will play automatically as either a slideshow or a 3D fly-through. The problem is, creating these tours manually takes time, as it requires navigating a 3D model to find the best views of each room. We know these tours add significant value when selling a home, but many of our customers don't have the time to create them. In our research lab we're using deep learning to create tours automatically by identifying different rooms of the house and what views of them tend to be appealing. We are drawing from a training set of roughly a million user-generated views from manually created guided tours, a decent portion of which are labelled with room type.
It's less far along, but we're also looking at semantic segmentation for 3D geometry estimation, deep learning for improved depth data quality, and other applications of deep learning to 3D data. Our customers have scanned about 370,000 buildings, which works out to around 300 million RGBD images of real places.
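As a rough illustration of the room-type piece (not our actual pipeline; the dataset path and labels are hypothetical), fine-tuning a pretrained CNN looks something like:

```python
# Fine-tune only the classification head of a pretrained ResNet on view
# images organized as room_views/<room_type>/*.jpg.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
data = datasets.ImageFolder("room_views", transform=tfm)   # hypothetical dataset dir
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, len(data.classes))  # e.g. kitchen, bedroom...

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)  # train just the new head
loss_fn = nn.CrossEntropyLoss()
model.train()
for images, labels in loader:
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```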
Generally speaking, I think if you know your data relationships you don't need ML. If you don't, it can be especially useful.
We also support linear regression in the product itself - it was actually an on-boarding project for one of the engineers who joined this year, and he wrote a blog post to show the trendlines off: https://www.periscopedata.com/blog/movie-trendlines.html About 1/3rd of our customers are using trendlines, which is pretty good, but we haven't gotten enough requests for more complex ML algorithms to warrant focusing feature development there yet.
Not really a new application though...
We use "real" ML for sentiment classification, as well as some of our natural language processing and opinion mining tools. However, most of the value comes from simple statistical analysis/probabilities/ratios, as other commenters mentioned. The ML is really important for determining that a certain customer was angry in a feedback comment, but less important in highlighting trending topics over time, for example.
We also use ML to classify BitTorrent filenames into media categories, but it's pretty trivial, and frankly the initial heuristics applied to clean the data do more of the work than the ML achieves.
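That filename classifier is close to a textbook pipeline, something like this sketch (toy data, hypothetical categories):

```python
# Character n-grams are robust to the dots/dashes/release-group noise
# typical of torrent filenames.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

names = ["Some.Movie.2016.1080p.BluRay.x264", "Artist - Album (2014) [FLAC]",
         "Show.S01E02.720p.HDTV", "Great.Game.PC.ISO"]
labels = ["movie", "music", "tv", "software"]

clf = make_pipeline(TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
                    MultinomialNB())
clf.fit(names, labels)
print(clf.predict(["Another.Show.S02E05.HDTV.x264"]))
```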
On the other hand, I found an internal fraud costing us $2-3M/year applying only the weak law of large numbers. Big corp, big numbers.
Now I'm building a similar system for a smaller company. I think we will stick mainly to logistic regression. I actually use "neural networks" with hand-crafted hidden layers to identify buying patterns in our grocery store shopping cart data. It works pretty well from a statistical point of view, but it is still a gimmick used to acquire new B2B partners.
There are a number of other statistical techniques you can use for this but scikit-learn makes this very very easy to do.
Traditional learning for many applications: fault detection, risk management for installations, job allocation, incident detection (early warning of big things), content recommendation, media purchase advice, others....
Probabilistic learning for inventory repair - but this has not yet had impact; the results are great, but the advice has not yet been ratified and productionised.
The first pass is usually a regex to find names, then for what's left run a natural language tool to find candidate names, and then manual entry.
The bulk of what we do is anomaly detection.
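A common minimal shape for this (a sketch, not our actual system) is an isolation forest over the raw metrics:

```python
# Unsupervised anomaly detection: the model isolates points that are easy
# to separate from the bulk of the data and flags them with -1.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))            # normal operating metrics
X_odd = rng.normal(loc=6, size=(5, 4))   # a few out-of-distribution points

clf = IsolationForest(contamination=0.01, random_state=0).fit(X)
print(clf.predict(X_odd))                # -1 = anomalous, 1 = normal
```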
I would classify something like this blog post as ML, would you? http://stackoverflow.blog/2016/11/How-Do-Developers-in-New-Y...
Marketers create their messages and define their goals (e.g., purchasing a product, using an app) and it learns what and when to message customers to drive them towards those goals. Basically, it turns marketing drip campaigns into a game and learns how to win it :)
We're seeing some pretty great results so far in our private beta (e.g., more goals reached, fewer emails sent), and are excited to launch into public beta later this month.
For more info, check out https://www.optimail.io or read our Strong blog post at http://www.strong.io/blog/optimail-email-marketing-artificia....
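Conceptually it's bandit-style experimentation; a toy Thompson-sampling sketch (not our actual algorithm, and the variants are made up) looks like:

```python
# Each message variant's conversion rate gets a Beta posterior; sampling
# from the posteriors balances exploring variants and exploiting winners.
import random

variants = ["subject_a", "subject_b", "subject_c"]   # hypothetical messages
wins = {v: 1 for v in variants}      # Beta alpha (uniform prior)
losses = {v: 1 for v in variants}    # Beta beta

def choose():
    return max(variants, key=lambda v: random.betavariate(wins[v], losses[v]))

def record(variant, reached_goal):
    if reached_goal:
        wins[variant] += 1
    else:
        losses[variant] += 1

# usage: v = choose(); send the message; later call record(v, converted)
```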
https://skillsmatter.com/skillscasts/9105-detecting-antisoci...
(We're still beginners as will be apparent from the video but it's proving useful so far. I should note, we do have 'proper' data scientists too, but they are mostly working on audience analysis/personalisation).
Wrote a system for automatically grading kids' essays (think the lame "summarize this passage"-type prompts on standardized tests). In that case it was actually a platform for machine learning - i.e., plumb together feature modules into modeling modules and compare output model results.
Also, some GPU goodness for 10-100X visual scale, and now we're working on investigation automation on top :)
LEGAL INDUSTRY
Aka e-discovery [2]: produce digital documents in legal proceedings.
What was special: stringent requirements on statistical robustness! (The opposing party can challenge your process in court -- everything about the way you build your datasets or measure the production recall has to be absolutely bulletproof.)
IT & SECURITY
Anomaly detection in system usage patterns (with features like process load, frequency, volume) using NNs.
What was special: extra features from document content (type of document being accessed, topic modeling, classification).
MEDIA
Built tiered IAB classification [3] for magazine and newspaper articles.
Built a topic modeling system to automatically discover themes in large document collections (articles, tweets), to replace manual taxonomies and tagging, for consistent KPI tracking.
What was special: massive data volumes, real-time processing.
REAL ESTATE
Built a recommendation engine that automatically assembles newsletters, and learns user preferences from their feedback (newsletter clicks), using multi-armed bandits.
What was special: exploration / exploitation tradeoff from implicit and explicit feedback. Topic modeling to get relevant features.
LIBRARY DISCOVERY
Built a search engine (which is called "discovery" in this industry), based on Elasticsearch.
What was special: we added a special plugin for "related article" recommendations, based on semantic analysis on article content (LDA, LSI).
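As a sketch of how that LDA-based related-article piece can work in gensim (toy corpus; the real system involves much more preprocessing):

```python
# Map articles into LDA topic space, index them, and rank by similarity.
from gensim import corpora, models, similarities

texts = [["solar", "panel", "energy"],
         ["court", "ruling", "appeal"],
         ["energy", "grid", "storage"]]          # tokenized articles
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]

lda = models.LdaModel(corpus, id2word=dictionary, num_topics=2)
index = similarities.MatrixSimilarity(lda[corpus])

query = lda[dictionary.doc2bow(["energy", "storage"])]
print(sorted(enumerate(index[query]), key=lambda x: -x[1]))  # most related first
```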
HUMAN RESOURCES (HR)
Advised on an engine to automatically match CVs to job descriptions.
Built an ML engine to automatically route incoming job positions into a hierarchy of some 1,000 pre-defined job categories.
Built a system to automatically extract structured information from (barely structured) CV PDFs.
Built an ML system to build "user profiles" from enterprise data (logs, wikis), then automatically match incoming help requests in plain text to domain experts.
What was special: used Bayesian inference to handle knowledge uncertainty and combine information from multiple sources.
TRANSPORTATION
Built a system to extract structured fixtures and cargoes from unstructured provider data (emails, attachments).
What was special: deep learning architecture on character level, to handle the massive amount of noise and variance.
BANKING
Built a system to automatically navigate banking sites for US banks, and scrape them on behalf of the user, using their provided username/password/MFA.
What was special: PITA of headless browsing. The ML part of identifying forms, pages and transactions was comparatively straightforward.
--------------
... and a bunch of others :)
Overall, in all cases, lots of tinkering and careful analysis to build something that actually works, as each industry is different and needs lots of SME. The dream of "turn-key general-purpose ML" is still a ways off, recent AI hype notwithstanding.
[1] http://rare-technologies.com/
[2] https://en.wikipedia.org/wiki/Electronic_discovery
[3] https://www.iab.com/guidelines/iab-quality-assurance-guideli...
I told her she could stay with me for a bit and she said no, I offered to put her up in a hotel and she said no, I offered money and she said no. She did let me buy her lunch. I saw her at different corners around town and she would never take money. But I would sometimes swing by some place and pick up a breakfast biscuit for her.
I haven't seen her in a while now. I hope she got help.
There's no easy way to do this. It's hardly ever convenient. It isn't a foolproof means of turning situations around. It is, however, extremely powerful and desperately needed.
Do I practice what I'm saying? Sadly, not enough.
Shouldn't society and/or the government do more to increase awareness of these mental health issues and make information and treatments more widely available?
It doesn't take systemic or policy level changes to make someone's day/week better.
"The author of this site has about six years of college, including an incomplete Bachelor of Science degree in Environmental Resource Management...After the class was over, she continued to volunteer at the shelter for several more months. Years later, while homeless herself, she started the San Diego Homeless Survival Guide and also this site."
It's a shame the idea of helping the homeless via a night in a hotel is such a manually intensive effort. The article mentions paying in cash, I assume to avoid liability. Would be nice if there was a way to buy vouchers online that you could hand out.
What I realized is that a lot of these people don't want any responsibilities. They're not willing, or able, to accept that life sucks for everyone. Governments need to start offering better mental health care that doesn't involve locking people up in hospitals (something an old friend of mine had happen to him, and it caused him to lose trust in everyone).
I don't think giving people hotel rooms is going to solve any problems - if anything, it's just enabling it. A better solution is to expand affordable housing and job programs so people can start to get back on their feet, while ultimately leaving the decisions up to them. Ultimately, if they want to live under a bridge, that's a choice they've made and there's nothing we can really do about it.
I'm curious how homeless shelters compare to (paid) hostels, which are far more economical than hotels. They're (I assume) both in a dormitory setting.
Hotels are obviously not economically feasible as a long-term solution. I think the risk of this proposal is that they are not a reasonable on-ramp to society, as even if you can get a stable job you almost certainly cannot afford to live in a hotel full-time. On the other hand, some sort of hostel-style accommodation would be reasonable.
Speculation roundup:
There are two economic values of a homeless person, actually any person, but I digress. P is the cost, to the state, of keeping that person alive and medically stable, which includes police and health services, as well as shelter costs, soup kitchens, etc. V is the value generated by the homeless person through their labor, which varies much more than some people expect. Many homeless people have jobs. But for many homeless people it's zero.
Ways of decreasing P include a number of creative policies designed recently as well as "short-term" tolerance of outdoor living. Ways of increasing V by contrast are generally limited to:
* standard inpatient mental health "treatment", has a slim chance of success and a large chance of backfiring and setting V to zero for a long time, also sends P through the roof
* outpatient medication, has a similarly tiny chance of success but a smaller chance of backfiring and is much cheaper
* prison labor, effective but brings to mind immediately The Road to Serfdom and other dystopian fictions (heh)
* ???
Creative ways of increasing V ought to depend on economic fundamentals, i.e. finding out what a homeless person is good for and exploiting that, to wit: homeless people tend to beat normal people at dealing with homeless people. They might also be able to help out with e.g. trash pickup or road maintenance. The typical road-map people envision for increasing V looks like this:
homeless -> [treatment] -> normal
but in reality looks more like this:
very low V -> [treatment] -> low V -> [more treatment] -> slightly low V -> [more treatment] -> mediocre V -> [more treatment] -> with luck normal V
The typical way of dealing with the intermediate stages currently consists of either locking them in a small compound with shitty beds and twelve other crazy people or giving them a bottle of pills and hoping Jesus can handle the rest. This, really, is the problem. Halfway housing for homeless people might look like a situation where housing is provided in return for part-time labor.
I'd also like to point out that while in "treatment" for homelessness it might not be reasonable to demand complete sobriety, when you consider that you're preparing them for eventual release into a world where they'll be allowed to drink alcohol and smoke cigarettes and probably marijuana, and their drug use will not be so heavily monitored. Supporting the development of self-control means that people have to be able to have a little control in the first place, and the ability to make small mistakes before making big ones.
https://en.wikipedia.org/wiki/Copyright_(Infringing_File_Sha...
BTW, I can confirm your site is still not loading; time again for a kubectl scale.
We found that the site was slow because we were getting throttled by a table in DynamoDB which explains why kubectl scale wasn't helping as much as we had hoped. We were adding capacity, just not in the right place.
We've added read capacity to our table and things are faster now.
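For reference, the capacity bump itself is a one-call change with boto3 (the table name and numbers below are made up):

```python
# Raise provisioned read capacity on a DynamoDB table to stop read throttling.
import boto3

client = boto3.client("dynamodb")
client.update_table(
    TableName="site-metadata",               # hypothetical table
    ProvisionedThroughput={
        "ReadCapacityUnits": 500,            # raised from a throttled level
        "WriteCapacityUnits": 50,
    },
)
```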
I've been using this for years, and keep all my notes in a wiki[0], written in GitHub-flavoured markdown, with syntax highlighting etc. in the editor. I guess it's not orgmode (which sounds amazing), but it seems really, really good to me.
I created the vimwiki_markdown[1] gem, which allows you to convert GH-style markdown to HTML using the wiki. Now I might just call out to pandoc, but I didn't know about it at the time, and my gem works fine.
I love that you can have project specific wikis etc. It's just so great. I'll also link to the relevant section in my .vimrc[2] if you are going to go down the markdown route.
[1] https://github.com/patrickdavey/vimwiki_markdown
[2] https://github.com/patrickdavey/dotfiles/blob/682e72e4b7a70e...
I use it with taskwiki [1] extension, which stores all the tasks in the awesome Taskwarrior [2] CLI task manager. With this, I have my tasks in my text files, searchable just a command away on the CLI or my mobile phone via the Android app.
[1] https://github.com/tbabej/taskwiki
[2] https://taskwarrior.org/
It says...
- organize notes and ideas
- manage todo-lists
- write documentation
- write a diary
I can already do these in vim, with text files or markdown files. What is this actually doing, then?

Projects like this really deserve some high-level explanation like "Do you want to do X, but don't like having to do Y? This addresses that."
What are people using for their wiki on iOS? Right now I'm syncing with iA Writer. It works pretty well, but doesn't support links. It looks like Bear is promising -- it uses the same style links, but it doesn't sync with a folder of text files. Any others out there?
When I started, VimWiki syntax was better supported than markdown. I've seen lots of markdown related pull requests come through, so maybe that's changed.
I use VimWiki for:
* Life goals (stretch and short term). I use it almost as a centering tool.
* Poetry
* Passages from books
* Book summaries (that I write)
* Lecture/Speech notes
* Notes on misc. items I want to explore
* Ideas for future science fiction short stories
Last summer I wrote a simple Awk script to extract VimWiki-style definitions (Term::Definition) into TSVs for importing into Anki. I was frustrated that I couldn't fully automate this process without modifying Anki. Maybe someone else has figured this out? What about a) writing a syntax highlighting script for your notes files, b) using tags to jump to documents, and c) using netrw for browsing your files?
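In case it helps anyone, a rough Python equivalent of that Awk extraction (assuming wiki files live in wiki/ and definitions sit one per line as Term::Definition):

```python
# Pull Term::Definition lines out of VimWiki files into a TSV Anki can import.
import glob
import re

pattern = re.compile(r"^(.+?)::(.+)$")
with open("anki_import.tsv", "w") as out:
    for path in glob.glob("wiki/*.wiki"):        # hypothetical wiki directory
        with open(path) as f:
            for line in f:
                m = pattern.match(line.strip())
                if m:
                    out.write(f"{m.group(1).strip()}\t{m.group(2).strip()}\n")
```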
Because if it is not, and even if it is just making it easier to do those things yourself, I would personally not even consider using it, having had some really nasty surprises with Vim plugins in the past. Yes, I do mean you, Powerline.
GraphQL lets consumers (Web UIs, mobile apps, etc.) define the shape of the data they want to receive from a backend service. Theoretically, this is awesome, since at the time of writing the backend service you don't always know which fields and relationships a consumer might require. The problem arises when you add security. There are plenty of things that a client is not supposed to see, often based on the roles of the requesting user. That means you can't pass a GraphQL query directly to an underlying DB provider without verifying what is being requested. You'd end up writing about the same amount of code as you'd have written with a standard REST-style (or other) interface.
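To illustrate the point (using graphene, a Python GraphQL library; the types and role names here are hypothetical), the authorization ends up living in per-field resolvers rather than in the query language:

```python
# The client can ask for salary, but the resolver decides what it gets back.
import graphene

class Employee(graphene.ObjectType):
    name = graphene.String()
    salary = graphene.Float()

    def resolve_salary(self, info):
        user = info.context.get("user", {})   # assumes the framework injects the user
        if "hr" not in user.get("roles", []):
            return None                       # hide the field from this role
        return self.salary

class Query(graphene.ObjectType):
    me = graphene.Field(Employee)

    def resolve_me(self, info):
        return Employee(name="Ada", salary=100000.0)

schema = graphene.Schema(query=Query)
result = schema.execute("{ me { name salary } }", context={"user": {"roles": []}})
print(result.data)   # salary comes back null for a non-HR user
```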
I also considered using it for Server-to-DB data fetches, where my backend Web Service would ask an underlying GraphQL-aware driver for some data. I did not find it particularly more expressive than using SQL or an ORM.
One good thing about GraphQL is that it sort of formalises the schema and the relationship between entities. You could perform validations on incoming query parameters and mutations. It also helps clients understand the underlying schema and would serve as excellent documentation. This might be useful, but a REST API also offers some level of intuitiveness, and ORMs (especially in strongly-typed languages) offer some level of parameter and query validation.
These are probably early days, but it'd be nice to see some real-world use cases for GraphQL that lie somewhere in the middle between simple todo-list-type apps and the unique needs of Facebook.
The automagical nature of this software seems great, but any relatively complex application would require not only CRUD manipulation but also side effects to go along with it.
With Express, I suppose you could have a middleware fire off beforehand to parse the incoming query, figure out what it is, and take any extra action as necessary, such as denying the request or triggering some side effect of the query. However, this would be an open-by-default policy for queries where you have a Postgres schema but lack the Express JavaScript to parse the incoming request.
I can't remember where I stumbled upon that, but it implements an object database over PostgreSQL and offers a GraphQL superset for querying. Don't know if it's still in active development, but it looks interesting.
Now, if you combined this with a good RBAC security model, particularly if you baked that model into the GraphQL -> SQL conversion layer so it only sends SQL queries that operate on an allowed subset... that'd be very cool.
The security mechanisms in Firebase might serve as an inspiration :)
Lots of confusion here about what GraphQL is actually trying to accomplish. In no way were the authors of GraphQL attempting to replace or enhance SQL with the GraphQL language; they are two completely different things. GraphQL sets out to solve annoyances that people run into while building and maintaining large RESTful services.
Are there any negatives that might not be obvious from just reading the docs?