Lee Sedol is playing brilliantly! #AlphaGo thought it was doing well, but got confused on move 87. We are in trouble now...

Mistake was on move 79, but #AlphaGo only came to that realisation on around move 87

When I say 'thought' and 'realisation' I just mean the output of #AlphaGo value net. It was around 70% at move 79 and then dived on move 87

Lee Sedol wins game 4!!! Congratulations! He was too good for us today and pressured #AlphaGo into a mistake that it couldn't recover from
Of course, a plausible alternate explanation is that AlphaGo felt like it needed to make risky moves to catch up.
Demis Hassabis said of Lee Sedol: "Incredible fighting spirit after 3 defeats"
I can definitely relate to what Lee Sedol might be feeling. Very happy for both sides: for the people who designed algorithms that can beat top pros, and for the human strength displayed by Lee Sedol.
Congrats to all!
Of course, what's obvious to a human might not be so at all to a computer. And this is the interesting point that I hope the DeepMind researchers will shed some light on for all of us after they dig out what was going on inside AlphaGo at the time. We'd also love to learn why AlphaGo seemed to go off the rails after this initial stumble and made a string of indecipherable moves thereafter.
Congrats to Lee and the DeepMind team! It was an exciting and I hope informative match to both sides.
As a final note: I started following the match thinking I was watching a competition of intelligence (loosely defined) between man and machine. What I ended up witnessing was incredible human drama: Lee bearing immense pressure, being hit hard repeatedly while the world watched, sinking to the lowest of lows, and soaring back up to win one game for the human race. Just incredible ups and downs in the course of a week. Many of my friends were crying as the computer resigned.
Toward the end AlphaGo was making moves that even I (as a double-digit kyu player) could recognize as really bad. However, one of the commentators made the observation that each time it did, the moves forced a highly predictable move by Lee Sedol in response. From the point of view of a Go player, they were nonsensical because they only removed points from the board and didn't advance AlphaGo's position at all. From the point of view of a programmer, on the other hand, considering that predicting how your opponent will move has got to be one of the most challenging aspects of a Go algorithm, making a move that easily narrows and deepens the search tree makes complete sense.
Another interesting thing I noticed while catching the endgame is that AlphaGo actually used up almost all of its time. In professional Go, once each player uses their original (2 hour?) time block, they have 1 minute left for each move. Lee Sedol had gone into "overtime" in some of the earlier games, and here as well, but previously AlphaGo still had time left from its original 2 hours. In this game, it came quite close to using overtime before resigning, which it does when its calculated win percentage falls below a certain threshold.
I tried to estimate it mathematically, using a uniform distribution across possible win rates and then updating the probability of different win rates with Bayes' rule. You can do that with Laplace's law of succession. I got a 20% chance that Sedol would win this game.
However, a uniform prior doesn't seem right. Eliezer Yudkowsky often says that AI is likely to be either much better than humans or much worse than humans; the probability of it falling at exactly the same skill level is pretty low. That argument seems right, but I wasn't sure how to model it formally. Still, it seemed right, and so 90% "felt" right. Clearly I was overconfident.
So for the next game, if we use Laplace's law again, we get a 33% chance that Sedol will win. That's not factoring in other information, like Sedol now being familiar with AlphaGo's strategies and improving his own strategies against it. So there is some chance he is now evenly matched with AlphaGo!
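For the curious, the whole calculation fits in a few lines of Python (a minimal sketch: Laplace's rule of succession is just the posterior predictive probability under a uniform prior):

```python
# Laplace's rule of succession: after observing s successes in n trials,
# with a uniform prior over the unknown win rate, the posterior
# probability that the next trial is a success is (s + 1) / (n + 2).

def laplace_next_win(wins: int, games: int) -> float:
    return (wins + 1) / (games + 2)

# Before game 4, Sedol had 0 wins in 3 games:
print(laplace_next_win(0, 3))  # 0.2  -> the 20% figure above

# Before game 5, he has 1 win in 4 games:
print(laplace_next_win(1, 4))  # 0.333... -> the 33% figure
```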
I look forward to many future AI-human games. Hopefully humans will be able to learn from them, and perhaps even learn their weaknesses and how to exploit them.
Depending on how deterministic it is, you could perhaps even play the same sequence of moves and win again. That would really embarrass the Google team. I hear they froze AlphaGo's weights to prevent it from developing new bugs after testing.
On one hand, we have racks of servers (1920 CPUs and 280 GPUs)  using megawatts (gigawatts?) of power, and on the other hand we have a person eating food and using about 100W of power (when physically at rest), of which about 20W is used by the brain.
Edit: here's another great one on MCTS: https://gogameguru.com/alphago-4/#comment-13479
Lee Sedol won because he played extremely well. But when AlphaGo was already losing it made some very bad moves. One of them was so bad that it's the kind of mistake you would only expect from someone who's starting to learn how to play Go.
It was amazing to see how Lee Sedol found the right moves to make the invasion work.
This makes me think that if the time for the match were three hours instead of two, a professional player might have enough time to read the board deeply enough to find the right moves.
The author thinks that Lee Sedol was able "to force an all or nothing battle where AlphaGo's accurate negotiating skills were largely irrelevant."
"Once White 78 was on the board, Black's territory at the top collapsed in value."
"This was when things got weird. From 87 to 101 AlphaGo made a series of very bad moves."
"Weve talked about AlphaGos bad moves in the discussion of previous games, but this was not the same."
"In previous games, AlphaGo played bad (slack) moves when it was already ahead. Human observers criticized these moves because there seemed to be no reason to play slackly, but AlphaGo had already calculated that these moves would lead to a safe win."
Which, I add, is something that human players also do: simplify the game and get home quickly with a win. We usually don't give up as much as AlphaGo does (pride?), but still, it's not that different.
"The bad moves AlphaGo played in game four were not at all like that. They were simply bad, and they ruined AlphaGos chances of recovering."
"Theyre the kind of moves played by someone who forgets that their opponent also gets to respond with a move. Moves that trample over possibilities and damage ones own position achieving less than nothing."
And those moves unfortunately resemble what beginners play when they stubbornly cling to the hope of winning, because they don't realize the game is lost, or because they haven't played enough games yet to stop expecting the opponent to make impossible mistakes. At pro level those mistakes are more than impossible.
Somebody asked an interesting question during the press conference about the effect of those kind of mistakes in the real world. You can hear it at https://youtu.be/yCALyQRN3hw?t=5h56m15s It's a couple of minutes because of the translation overhead.
I wonder if Lee Sedol can find a way to replicate that in Game 5.
At the end, Lee asked to play black in the last match, and the DeepMind guys agreed. He feels that AlphaGo is stronger as white, so he views it as more worthwhile to play as black and beat AlphaGo.
Conference over, see you all tomorrow.
> This was when things got weird. From 87 to 101 AlphaGo made a series of very bad moves.
It seems to me that these bad moves were a direct result of AlphaGo's minimax-style tree search.
According to @demishassabis' tweet, it had the "realisation" that it had misestimated the board situation at move 87. After that, it played a series of bad moves, but it seems to me that those moves were made precisely because it couldn't come up with any better strategy: the minimax principle underlying the tree search assumes that your opponent responds as well as they possibly can, so the moves were optimal in that sense.
But if you are an underdog, it doesn't suffice to play the "best" moves, because the best moves might be conservative. With that playing style, the only way you can make a comeback is to wait for your opponent to "make a mistake", that is, to stray from the series of best moves you are able to find, and then capitalize on that.
I don't think AlphaGo has the concept of betting on the opportunity of the opponent making mistakes. It always just tries to find the "best play in game" with its neural networks and tree search in terms of maximising the probability of winning. If it doesn't find any moves that would raise the probability, it picks one that will lower it as little as possible. That's why it picks uninteresting sente moves without any strategy. It just postpones the inevitable.
If you're expecting the opponent to play the best move you can think of, expecting mistakes is simply not part of the scheme. In this situation, it would actually be profitable to exchange some "best-of-class" moves for moves that aren't as excellent, but that are confusing, hard to read, and make the game longer and more convoluted. Note that this totally DOESN'T work if the opponent is better at reading than you, on average; it will make the situation worse. But I think that AlphaGo is better at reading than Lee Sedol, so it would work here. The point is to "stir" the game up, so you can unlock yourself from your suboptimal position and let your better-on-average reading skills work for you.
It seems to me that the way skilful humans play involves another evaluation function in addition to the "value" of a move: how confusing, "disturbing" or "stirring up" a move is, considering the opponent's skill. Basically, that's the thing you'd need to skilfully assess your chances of performing an OVERPLAY. And an overplay may be the only way to recover if you are in a losing situation.
AlphaGo made a mistake and realized it was behind, and crumbled because all moves are "mistakes" (they all lead to loss), so any of them is as good as any other.
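To make that concrete, here is a toy sketch (not AlphaGo's actual code; the win probabilities are made up) of a player that maximises estimated win probability and nothing else. Once every candidate evaluates near zero, the argmax carries almost no information, so a nonsense move can win on noise:

```python
def estimated_win_prob(move: str) -> float:
    """Stand-in for the value network; hypothetical numbers for a lost game."""
    return {"solid": 0.011, "overplay": 0.010, "nonsense": 0.012}[move]

def pick_move(moves):
    # Pure win-probability maximisation: no notion of "keep the game
    # complicated" or "hope the opponent errs", just the argmax.
    return max(moves, key=estimated_win_prob)

print(pick_move(["solid", "overplay", "nonsense"]))  # -> "nonsense"
```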
I'm very surprised and glad to see humans still have something against AlphaGo, but ultimately, these kinds of errors might disappear if AlphaGo trains 6 more months. It made a tactical mistake, not a theoretical one.
78 could come to symbolize humanity.
What a special moment.
But after watching the summary video of AlphaGo's win... I'm fascinated.
I'm sure there are thousands of resources that can teach me the rules, but HN; can you point me to a resource you recommend to get up to speed?
Also seems in line with the way Demis was "rooting" for the human this time: they already won, so now they focus on PR.
AlphaGo plays some unusual moves that go clearly against what any classically trained Go player would play. Moves that simply don't quite fit into the current theories of Go playing, and the world's top players are struggling to explain what the purpose/strategy behind them is.
I've been giving it some thought. When I was learning to play Go as a teenager in China, I followed a fairly standard, classical learning path. First I learned the rules, then progressively learned the more abstract theories and tactics. Many of these theories, as I see them now, draw analogies from the physical world, and are used as tools to hide the underlying complexity (chunking) and enable the players to think at a higher level.
For example, we're taught to consider connected stones as one unit, and to give this one unit attributes like dead, alive, strong, weak, projecting influence in the surrounding areas. In other words, much like a standalone army unit.
These abstractions all make a lot of sense, feel natural, and certainly help game play -- no player can consider the dozens (sometimes over 100) stones all as individuals and come up with a coherent game plan. Chunking is such a natural and useful way of thinking.
But watching AlphaGo, I am not sure that's how it thinks of the game. Maybe it simply doesn't do chunking at all, or maybe it does chunking its own way, not influenced by the physical world as we humans invariably do. AlphaGo's moves are sometimes strange, and couldn't be explained by the way humans chunk the game.
It's both exciting and eerie. It's like another intelligent species opening up a new way of looking at the world (at least for this very specific domain). And much to our surprise, it's a new way that's more powerful than ours.
Go, unlike Chess, has a deep mythos attached to it. Throughout the history of many Asian countries it has been seen as the ultimate abstract strategy game, one that deeply relies on players' intuition, personality, and worldview. The best players are not described as "smart"; they are described as "wise". I think there is even an ancient story about an entire diplomatic exchange being brokered over a single Go game.
Throughout history, Go has become more than just a board game; it has become a medium the sagacious use to reflect their world views, discuss their philosophy, and communicate their beliefs.
So instead of a logic game, it's almost seen and treated as an art form. And now an AI without emotion, philosophy or personality just comes in and brushes all of that aside and turns Go into a simple game of mathematics. It's a little hard to accept for some people.
Now imagine the winning author of the next Hugo Award turns out to be an AI, how unsettling would that be.
There's 10^50 atoms in the planet Earth. That's a lot.
Let's put a chess board in each of them. We'll count each possible permutation of each of the chess boards as a separate position. That's a lot, right? There's 10^50 atoms, and 10^40 positions in each chess board so that gives us 10^90 total positions.
That's a lot of positions, but we're not quite there yet.
What we do now is we shrink this planet Earth full of chess board atoms down to the size of an atom itself, and make a whole universe out of these atoms.
So each atom in the universe is a planet Earth, and each atom in this planet Earth is a separate chess board. There's 10^80 atoms in the universe, and 10^90 positions in each of these atoms.
That makes 10^170 positions in total, which is the same as a single Go board.
Chess positions: 10^40 (https://en.wikipedia.org/wiki/Shannon_number)
Go positions: 10^170 (https://en.wikipedia.org/wiki/Go_and_mathematics)
Atoms in the universe: 10^80 (https://en.wikipedia.org/wiki/Observable_universe#Matter_con...)
Atoms in the world: 10^50 (http://education.jlab.org/qa/mathatom_05.html)
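The exponent arithmetic checks out; here it is as a throwaway Python check using the order-of-magnitude figures above:

```python
earth_atoms     = 10**50   # atoms in the Earth
chess_positions = 10**40   # positions per chess board (Shannon number)
universe_atoms  = 10**80   # atoms in the observable universe

positions_per_earth = earth_atoms * chess_positions  # 10^90
total = universe_atoms * positions_per_earth         # 10^170

assert total == 10**170  # the number of positions on a single Go board
```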
I should also say that it's somewhat clear that Sedol made one suboptimal move, and AlphaGo capitalized on it. Interestingly, the English commentator made the same mistake as he was predicting lines of play. This involved play in the center of the board, in a very complicated position. Prior to this set of moves, the game was almost a tie. Afterwards, it was very heavily in AlphaGo's favor.
You try and share this story with a non-technical person and they will likely say "Well, duh... it's a computer".
Especially for Lee, the whole world is looking at him. An "ordinary" human like me won't be able to make the right decisions under this pressure.
A great respect to Lee and the Developers of AlphaGo. Good Game!
The situation with Go is different. (I wrote the Go program Honninbo Warrior in the 1970s, so I am a Go player and used to be a Go programmer.) Still, I bet AlphaGo, and future versions, will strongly impact human play.
Maybe it was my imagination, but it sometimes seemed like Lee Sedol was happy + interested even late in the two games when he knew he was losing.
EDIT: clarified to what I originally meant: "end of midgame"
In particular, I'm wondering if a computer scientist with access to the AlphaGo source code and all the weights of the network could trick AlphaGo in order to win games automatically (cf. the papers that show a neural net can be tricked into classifying a plane as any other class).
If a human with knowledge of the source code and the weights can do this, it is scary. Imagine a similar algorithm runs your car. An attacker who knows the source code and the weights may trick the algorithm into sending your car into a wall!
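For reference, the attack in those papers boils down to a few lines; a minimal sketch of the fast gradient sign method, assuming PyTorch and some trained `model` (whether anything analogous transfers to a policy/value net plus tree search is exactly the open question):

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, true_label, eps=0.01):
    # The attacker needs the weights: the gradient is taken through
    # the full model with respect to the input.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), true_label)
    loss.backward()
    # Step every input element in the direction that increases the loss;
    # a perturbation this small is typically imperceptible to a human.
    return (x + eps * x.grad.sign()).detach()
```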
Great game btw, a pleasure to watch.
Nonetheless, AlphaGo takes a minute and a half to play its next move. Can anyone explain what on earth is going on during those 90 seconds?
You can hold out for a few thousand years, but eventually the uncontrollable and amoral technological imperative will catch on and crush you.
It's kind of poetic and sad. It feels like technology will render everything un-sacred eventually.
good to know they'll play all 5 games no matter what the result is though
People seem to think Lee knew he lost and was just playing to learn more. Hope he learned enough to take the overlord down in the next three games
I wonder if Lee Sedol will have an interest in studying deep learning after this =)
Tonight's game was beautiful. Last night's was a fighting game way too high level for me to really grasp (I have no idea how to play like that, all those straight and thin groups would make me nervous). I'm expecting Sedol to win Friday since I imagine he's going to have a great study session today, but I'm no longer confident he'll win the last two.. Still rooting for him though. :) (I also want to see AlphaGo play Ke Jie (ed: sounds like from the other submission on Ke's thoughts that may happen if Sedol is soundly defeated), and for kicks play Fan Hui again and see whether it now crushes weaker pros or is strangely biased to adopt a style just slightly stronger than who it's facing.)
1. As the game of go progresses, the number of reasonable moves decreases, so that as the game progresses, players on average play closer and closer to optimally. By the end of the game, even weak amateurs can calculate the optimal move. Logically, I would guess that stronger players are able to play optimally earlier than weak ones. Lee Sedol is known for his strong middle and endgame, often falling behind early on and making it up late in the game. He is so strong at this that he has driven an entire generation of go players to developing very strong endgame. But AlphaGo, running Monte Carlo simulations, almost certainly can brute force the game earlier than Lee Sedol can. Lee Sedol is playing AlphaGo on its own turf. A player known for their opening prowess, such as Kobayashi Koichi in his heyday, might have had an advantage that Lee Sedol doesn't. (Note: I'm not strong enough to analyze Lee Sedol or Kobayashi Koichi's play styles; I'm repeating what I've heard from professionals.)
2. I hoped that when an AI beat a pro at go, it would be with a more adaptive algorithm, one not specifically designed to play go. If my understanding of AlphaGo is correct, it's basically just Monte Carlo: the advances made were primarily in improving the scoring function to be more accurate earlier, and the tree pruning function, both of which are go-specific. It's not really a new way of thinking about go (at least, since Monte Carlo was first applied to go). It's just an old way optimized. The AI can't, for example, explain its moves, or apply what it learned from learning go to another game. It's certainly a milestone in Go AI, and I don't want to downplay what an achievement this is for the AlphaGo developers, but I also don't think this is the progress toward a more generalized AI that I hoped would be the first to beat a professional.
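For readers who haven't seen it, the generic MCTS loop looks roughly like this (a bare-bones skeleton, not AlphaGo's code; the game interface functions are hypothetical). The go-specific advances the parent mentions live in the two marked steps:

```python
import math, random

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.visits, self.wins = [], 0, 0

def ucb(node, c=1.4):
    # Explore/exploit trade-off; unvisited children are tried first.
    if node.visits == 0:
        return float("inf")
    return node.wins / node.visits + c * math.sqrt(
        math.log(node.parent.visits) / node.visits)

def mcts(root, n_sims, legal_moves, play, rollout_score):
    for _ in range(n_sims):
        # 1. Selection: descend by UCB until we reach a leaf.
        node = root
        while node.children:
            node = max(node.children, key=ucb)
        # 2. Expansion: one child per legal move. This is the step
        #    AlphaGo's policy net prunes and orders.
        node.children = [Node(play(node.state, m), node)
                         for m in legal_moves(node.state)]
        leaf = random.choice(node.children) if node.children else node
        # 3. Evaluation: a random rollout here; AlphaGo mixes rollouts
        #    with a learned value net (the improved "scoring function").
        score = rollout_score(leaf.state)
        # 4. Backpropagation (single-perspective simplification).
        while leaf:
            leaf.visits += 1
            leaf.wins += score
            leaf = leaf.parent
    return max(root.children, key=lambda n: n.visits)
```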
Especially when AlphaGo capitalized on just one suboptimal move of Lee Sedol.
There is something unnerving about a computer that can answer in 0.01 seconds and still have the move be better than any human would come up with in an hour. At that point a robot playing simultaneous bullet chess would wipe the floor with a row of grandmasters, beating them all without exception.
This is a milestone in modern informatics.
Then suddenly a computer comes along and takes that title from you. But it takes it in such a way that you are never in your life able to re-take it because of how the AI works.
A game will likely just be the first field. My girlfriend works in translation and interpretation, which is another area already in the crosshairs of neural networks. AIs will step by step become more efficient than people, and that is terrifying.
So if the first comment in this thread (about how it's a completely non-human approach) is true, it's really interesting that humans can enable computers to come up with non-human ways of solving complex problems.
Seems like a big part of this story, if I'm not being completely dumb here.
Slack channel for discussion if anyone's interested. We're using it for commentary while the games go on. Was created by AGA people.
But AlphaGo showed us what AI is really capable of doing in an eerie sort of way and I think interest in AI will soon become mainstream which is a good thing for the development of AI.
Now it's at least easier to comprehend the context of all those doomsday warnings about AI destroying humanity which I never took seriously.
I wonder which games (intellectual sports) where computers have yet to defeat humans would be interesting to learn?
I'm looking at the DeepMind channel on YouTube: https://www.youtube.com/channel/UCP7jMXSY2xbc3KCAE0MHQ-A
What happens if the Go master tries to deceive the opponent? As in purposefully playing a counter-intuitive position, or even "trying to lose"? Will the AI's response be confused as it is expecting rational moves from its opponent?
This has a powerful consequence: we have not seen AlphaGo pushed to the limit; it is narrowing the margin as if it were playing a teaching game.
Lee Sedol, I think, came to this conclusion, and the only human strategy left is to take a lead big enough to maintain for the rest of the game. And that might be the last strategy left to show whether the computer is already unbeatable, because it will be pushed to its limits to win, and it might still overcome humans.
"The idea of using ConvNet for Go playing goes back a long time. Back in 1994, Nicol Schraudolph and his collaborators published a paper at NIPS that combined ConvNets with reinforcement learning to play Go. But the techniques weren't as well understood as they are now, and the computers of the time limited the size and complexity of the ConvNet that could be trained. More recently Chris Maddison, a PhD student at the University of Toronto, published a paper with researchers at Google and DeepMind at ICLR 2015 showing that a large ConvNet trained with a database of recorded games could do a pretty good job at predicting moves. The work published at ICML from Amos Storkey's group at University of Edinburgh also shows similar results. Many researchers started to believe that perhaps deep learning and ConvNets could really make an impact on computer Go.
Clearly, the quality of the tactics could be improved by combining a ConvNet with the kind of tree search methods that had made the success of the best current Go bots. Over the last 5 years, computer Go made a lot of progress through Monte Carlo Tree Search. MCTS is a kind of randomized version of the tree search methods that are used in computer chess programs. MCTS was first proposed by a team of French researchers from INRIA. It was soon picked up by many of the best computer Go teams and quickly became the standard method around which the top Go bots were built. But building an MCTS-based Go bots requires quite a bit of input from expert Go players. That's where deep learning comes in.
A good next step is to combine ConvNets and MCTS with reinforcement learning, as pioneered by Nicol Schraudolph's work. The advantage of using reinforcement learning is that the machine can train itself by playing many games against copies of itself. This idea goes back to Gerry Tesauro's NeuroGammon, a computer backgammon player that combined neural nets and reinforcement learning that beat the backgammon world champion in the early 1990s. We know that several teams across the world are actively working on such systems. Ours is still in development.
This is an exciting time to be working on AI."
In the future, it will be interesting to see AlphaGo playing against itself!
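The self-play idea from the quote reduces to a loop like this (a hedged sketch; every name here is a hypothetical stand-in, not DeepMind's API):

```python
def self_play_training(policy, n_games, play_game, update):
    for _ in range(n_games):
        # Pit the current policy against a copy of itself; `play_game`
        # returns the moves played and which side won.
        moves, winner = play_game(policy, policy)
        # Reinforcement signal: nudge the policy toward the moves of the
        # winning side and away from the losing side's (e.g. REINFORCE
        # with a +/-1 reward on every move of the game).
        update(policy, moves, reward=+1 if winner == 0 else -1)
    return policy
```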
This is exactly the mechanism that gets people in trouble going to China for manufacturing. They say "I want you to build widgets" and they get a good price quote, and say "Wow, this is awesome!" because they have in their mind that "making things in China is cheap", but in reality it's that if you cut a lot of corners you can make things really cheap, and since the contract doesn't say you can't cut corners, it is all "perfectly" legal. But the manufacturer knows what the buyer doesn't, and exploits that information asymmetry to make money at the buyer's expense without the buyer having any true recourse.
The hotel in question could have said in the RFQ, "System will be impervious to network traffic snooping and at no time will systems or a guest supplied computer be able to access the controls in another room."
Had they said that, the price quotes would have gone up, and had the system the author speaks of been delivered, the hotel could recover the costs of installing it from the vendor. But the hotel didn't even know they needed to ask for that, since they no doubt assumed, "nobody would make something that shoddy, would they?"
I learned about this when I saw one of the rules in a NetApp hardware contract that said "Manufacturer will install all components shown on the schematic on the final units in their designated locations." That seemed really odd. I learned that before that clause had been part of the standard contract, there had been a manufacturer who decided unilaterally that half of the noise suppression capacitors in the schematic were "unneeded." Units from that manufacturer started failing in odd ways in the lab.
The implementation felt like they'd asked a VB6 dabbler to implement it in Java. Then stuck it in the cheapest 600 MHz tablet they could find.
The UI was purely a button grid with distorted graphics, and dodgy typography. Button presses took about 1/2 a second to respond, and every 5th press caused the app to crash (adding a good 30 s to the experience).
My room had 4 tablets* in, and all of them behaved exactly the same way.
* the idea of a tablet to control the room is neat if it could be moved around, like a remote control. But for security (and because they used Ethernet) they were all fixed down, making them far less useful than plain switches.
The original Modbus was designed to communicate with factory devices controlled by logic controllers over serial and eventually over a custom token ring network. Modbus got moved to TCP at some point when I stopped paying attention. Modicon rejected TCP when I was there because the OSI model 7 layer network stack was going to be the next big thing.
Which takes us to this: "Any sufficiently advanced technology controlled by a miscreant is indistinguishable from a possessed object in a Stephen King Novel."
I was in my friend's Honda Pilot the other day, which has the new trendy big screen interface to replace the radio. I'm sure it is insecure junk, but more importantly it is a nightmare for humans.
I have a BS in CS, have developed some enterprise apps, run major complex tech programs successfully, and could program my dad's VCR in the early 80s. And... It took me nearly 10 minutes to figure out how to turn off the radio on the weird touchscreen.
To turn the radio on requires 4 clicks, and the key button is on the corner of the screen, where it is least responsive to touch. I would probably be safer driving with my knees and texting with two hands than controlling that radio.
OEMs moving to XXX over TCP protocols which have zero security by default and documenting this in the datasheets.
VAR installers switching to the newer products because CAT5 cable is cheaper and easier to pull than what they used to use.
The previous solution was just as insecure but harder to hack because you needed more specialised equipment.
I'm not sure how we are going to fix this without getting the OEM industry and the industry bodies behind XXX over TCP to understand that they need to bake a security model in.
If anyone is interested: scan the default IP interface port 3671 across, say, the German telecom ISP IP range (there is a CSV available on the web) with an efficient penetration test tool like masscan, challenge it with 0x0205, and look for 0x0206 in the response.
Thousands of homes, factories and commercial buildings welcome you with real-time datagrams from all their switches/appliances/presences/sensors/cams/... Bonus point: writable!
The problem is that when a software engineer goes to the front desk of a hotel and complains about the security of the brand new Android-Powered Hi-Tech system that they just put in, the person working the desk thinks, "Haha wow! That nerd was a real Sheldon Cooper, like on the television!" and they don't care at all. If you live in a bubble where programming and computer work is black magic, well then of course it is completely inevitable that someone so nerdy and so smart would be able to hack everything on the planet. So they don't really think there's anything to be done.
When it's a group of annoying little 15 year olds that sneak out in the middle of the night to wake up all of your guests, it's a lot bigger of a deal.
Can anyone recommend a good reference / tutorial for learning basic network-fu in unix ?
Even then, and with the limited 'damage' that could be done, each and every single room got its own VLAN. That was certainly a little ugly to manage at times, especially in a 1200 room hotel, but yes.
There used to be party lines in villages where the whole village could listen in to anyone's phone call.
Never mind the operator could also have a sticky beak.
Now if they can change your sound system to play Kanye West... that truly is a problem worth worrying about.
Now get off my lawn!
* It maintains its position across power cycles. It can even be adjusted when the car is off, so you can turn the volume knob down before you start the car and avoid blasting loud rock into your grandmother's ears.
* It does not require you to look at a touch screen to find the volume buttons. Tactile feedback is enough. You can operate it while maintaining the other 99% of your attention on the road.
* It physically stops at the lowest and highest possible volumes. Again, no need to look at some display.
Even better would be a physical slider instead of a knob. That would let you feel out the exact position of the volume without looking. The downside would be the limited space on a car stereo dashboard. But please, a touch screen is the worst and most dangerous interface while driving.
The same goes for radio presets. In a car with physical buttons for the presets, I can switch between my favorite stations without having to look. Try doing that with a touchscreen. How is this progress?
Maybe it's just a symptom of an industry that's often more about selling status symbols than selling functional products.
Is this solely to look "fancy"? If so, then at least get the tech right otherwise you look incompetent.
The 'Internet of Things' or whatever you want to call it (controllable peripherals, ubiquitous connections, stuff like that) is a pretty cool concept. I want to be easily able to do things like ask 'when will my laundry be finished?', or have my central heating come on when I start heading home. Not because it's massively beneficial, but because it removes some minor annoyances.
The technology is there, and has been for a while. But the proliferation of mindless, unforgivable security flaws, pervasive surveillance, proprietary cloud-based networks, shitty software and generally bad UX: it's really maddening. It really makes it difficult to want to use any of these devices.
I'd love some kind of proper, non-half-baked-and-riddled-with-holes solution for home automation, but I reckon I'd probably have to build it myself.
- Brillo - Embedded Android - https://developers.google.com/brillo/
- Weave - Communications - https://developers.google.com/weave/
But since hacking is cool, we like this stuff.
Weird thing also is that using WiFi years ago basically meant giving away your data, when SSL websites were so rare. And we didn't even care about it...
But do we really need our lights to do all kinds of funky things and be controlled from around the globe?
Don't we really just need our lights when we're in the room? And don't we just need them to be on/off, or at most dimmable?
Help me here.
1) Leave the phone in the home, you'll never be able to get in!
2) Wireshark the WiFi
3) Hijack the signal
I'm sure the dark side is waiting for us all to adopt IoT in our homes. I prefer my mechanical locks, thank you.
Although I'm a bit disappointed mjg59 didn't play Blinkenlights with the rooms on his floor.
Hahahahahahah! "Asking for a friend."
But really, folks are talking about the nuisance of waking people up in the middle of the night and that's true. However, controlling channels could be a more significant nuisance.
Seriously, I fail to see the ROI of such an endeavor.
Should be fairly simple to set up remote blackmail-material collection. :(
It's not just this light switch: Android refrigerators, Android ovens, Android washing machines are all using a wildly inappropriate operating system for single-purpose devices. The problem is likely that it's a lot easier to develop for Android than for a proper embedded OS: it's faster, the commodity hardware is easy to procure, licensing fees are minimal to none, and it's easier to hire developers.
The first company to bring to market a more IoT-appropriate, yet accessible combination of operating system and SoC reference designs stands to become a massive player when IoT goes mass-market.
It smacks of deliberate incompetence to sell hardware.
IoT on top of this just smacks, again, of deliberate incompetence, to either sell hardware or raise the attack vector profile (the NSA loves you!)
Why on earth is there still no reasonable competition to either Android or iOS?
Complexity is costly in many, many ways. There is zero justification for adding it to anything unless the payoff is some multiple of the complexity cost being added. I just don't see it here.
For any new tech, I always ask "what super power will this give me?" For much of IoT I can't answer that question. There are a few nice-to-haves but nothing compelling, no must-haves or genuine wows. Then you add in all the unbelievably creepy security and privacy implications and any lukewarm interest goes away. I can't shake the obviously crazy idea that some of this stuff is being pushed because certain people (advertisers, intelligence agencies) want as many sensors out there watching us as possible. Imagine every light switch, thermostat, etc. with an Internet connection and then think about the meta-data correlation capabilities with mobile sensor and location data and other Internet traffic.
We're really talking about a total surveillance society where literally every single thing you do is stored in a database somewhere. Anyone able to correlate your phone's approximate location and/or your web browsing history with, say, light switch data really will know every single time you use the bathroom and for exactly how long.
Do you stop moving and kneel every day at the correct time? Then you're praying to Mecca-- you're a Muslim. Do you leave the lights on late? That might say something about your personality profile. Do you work with the lights off? That says something else. Is there ambient sound but no light and are a male and a female present? They might be having sex. Two men in the bedroom? Gay sex! And that's just the easy low-hanging fruit I can imagine. Throw some theory-agnostic deep learning at it and I can imagine unbelievably spooky stuff that makes this look tame:
But mostly I think the driver is tech industry wishful thinking. Everyone is looking for the next catapult capable of tossing unicorns to billion dollar valuations in 1-2 years.
Mobile has IMHO been a bit of a disappointment. It's been big but not quite as big as everyone predicted. It's failed to displace desktop or achieve "convergence," and the limitations of the UI and the walled garden model have kept "serious" apps off mobile platforms for the most part. The collapse of app stores as a commercial software sales platform with prices spiraling down to $0 and clutter making new apps un-discoverable has further destroyed any incentive to push the boundaries of the platform beyond a "portable dumb terminal."
It's also been an architectural disappointment. It was supposed to be a clean slate where we could escape some of the cruft and bloat of desktop, but we're doing iOS and Android around here and the development experience on both is as bad or worse than Windows, Linux/Qt/GTK, and the web. It's not the promised land by any stretch. We took a lot of bad ideas with us from desktop and then added walled gardens and more resource constraints. Woohoo!
So now everyone's hoping IoT will be the next unicorn flinger. I'm skeptical so far. The Blackberry and the iPhone had immediate killer apps: maps, portable chat/email, portable books, music, and movies, etc. Those are real benefits that are worth the cost and the downsides. They're "super powers." Where's the super power in an internet connected light switch?
They still want to know how you proceed round the store, because that helps them optimise shelf layout, identify hard-to-find items, and so on. So yes, they might use the standard in-store CCTV to observe your journeys, and when they figure that you and people like you always have difficulty finding the eggs (seriously - why is it always so hard to find the eggs?), they'll move the eggs somewhere more prominent, so they can sell more eggs and you can buy what you came to buy.
But that's as far as it goes. They don't follow you out the store, let alone into your bedroom. They don't match anything with third-party data, let alone your mobile phone number. The store just wants to know where to put the eggs.
Unfortunately, your bouncers have simply been told to "hurt them if you have to, I've really had enough of it". So last time they came in, they smashed the CCTV cameras. The store-owner remonstrated with them a bit but the whole debate around bouncers has become so polarised that there was really no point arguing.
And if this metaphor seems a little obscure, this is why it is irresponsible, populist and ultimately self-defeating for uBlock and chums to block self-hosted Piwik and other such internal analytics tools. Because some of us are trying to do the right thing and your bouncers are still beating us up.
Take for example how the FBI wants to have automatic access to the data in all iPhones through a backdoor. Would it be considered OK if they asked lock makers to make their locks accept a master key, so they would be able to enter anybody's house and monitor people they suspect of being terrorists?
Of course that would cause an uproar, but the general public being so uneducated with technology, I guess they don't see how the two are related.
In the world of ads, I'm constantly reminded that I don't have the perfect body and that my blender does not look as good as the latest model - I really don't want that, because my blender works fine and looks ok.
So yeah, I block ads and I don't really see why I should feel bad about that, the non-tracking feature is a nice bonus.
So the web will go back to sites that either require payment to enter or are run by people who post stuff out of enthusiasm. Sounds like a nice place to me.
If a store has policy of "If you come into our store, we'll have employees follow you home" and you don't like that policy, then don't go to that store. That simple. It doesn't make sense to go into the store and have your goons beat up their employees. That might mean that you can't go to the stores you want to go to, but that's how it goes. It seems as clear online as it does in the physical world.
(tldr without the analogy: The overwhelming majority of people don't care about being tracked online because there are no obvious ill effects. The problem with ad blockers is that it makes more sense to just avoid sites that show ads, but most people don't want to do this because it would exclude their favorite sites.)
For instance, I can draw a little cat in my agenda to remind myself to call a particular friend that day. The police will tell me: "What? You have not written that in plain English? You must tell me what it means, and if you don't you will go to prison." (In the UK one can go to jail for refusing to decrypt one's own data.)
I go buy the Telegraph at my local newsstand and the guy will tell me: "can I see your papers please?" "But I just want to buy a newspaper" "yes but I must report to the police every day who reads what, by the way I must also know which pages you intend to read" (the UK is passing a law that would force all ISP to record what websites their customers view)
So Metiix Blockade was born out of this frustration... Now I have "bouncers" protecting my whole network for every one of my devices.
I hate when a web page decides what ads and trackers it wants to pull down from the Internet. With Blockade, I have taken back control of that process and I get to dictate when and where I want to provide my information.
I love feeling like I have the real internet back. No more of these ads and trackers taking over every place I go.
They made an (anecdotal) video by promising a free cup of coffee in exchange for the contact list on your phone:
https://www.youtube.com/watch?v=AYXM56YJWSo (Dutch, unfortunately)
I've been operating browser separation (Google in Chrome, social in Chrome incognito, and everything else in a locked-down, privacy-mode-only Firefox - all with uBlock) for a while, and also use anonymising VPNs for anything I really don't trust, and my own VPN with streisand and Dnsmasq (with a hosts file very similar to https://github.com/StevenBlack/hosts/ ).
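For anyone wanting to replicate the Dnsmasq part: the blocking boils down to a couple of lines of config (a sketch; the blocklist path is illustrative, the option names are real dnsmasq options):

```
# /etc/dnsmasq.conf
# Serve names from an extra hosts-format file; the StevenBlack list maps
# ad/tracker domains to 0.0.0.0, so they dead-end for every client.
addn-hosts=/etc/blocked-hosts

# Don't forward queries for plain names (no dots) to upstream resolvers.
domain-needed
```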
On my mobile every link I click in any app I open in Dolphin Zero (still on that DNS blocking VPN - which blocks all trackers in apps too), and I only keep apps I actually use and trust the publishers of on my device.
It feels like a chore (manually copying links from one browser to another depending on trust level), I wonder whether it's worth it sometimes... but then I occasionally get to see someone else's experience of the web and it's so incredibly and perniciously been invaded by advertisers that I am glad I do all of this.
It's become so bad that I even had to change my uBlock origin rules for my online bank ( https://banking.smile.co.uk/SmileWeb/start.do ) to block even first-party scripts... because they use Adobe, Omniture and Tealium tools to measure stuff and for A/B testing of their online banking features.
I now block absolutely everything and tell others to do so too, but unfortunately there is collateral damage.
The very sites I care about may not require advertising revenue, but do value tracking data that helps them spot errors, debug things, find out what screen resolutions they should cater for. Their analytics, client-side debugging, this is all now rendered useless to them.
PS: If you happen to work on Firefox for Android, please enable browser.privatebrowsing.autostart to be configured via about:config. I would love to default enable private browsing in a UA capable of running uBlock on my mobile.
Oh, you want location data? Here it is: this morning I've been all over the planet. Want to know all the websites I'm visiting? Sure, here's a million of them.
Just the fact that they keep trying to sell you the thermometer after you already don't care kind of points out that they're being had, and I'm all for helping that happen.
Some will disagree, but I think the comparison was spot on.
I already have adblock plus on my computer.
They didn't need credit cards or scores because they could identify your store credit account by your face, and your creditworthiness by your family's reputation.
If you were buying something out of the ordinary, you better believe your parents/spouse/church/friends/entire town would hear about it from the shopkeeper, who knew them all as well as he knew you.
A juicy conversation on a party line telephone shared with neighbors, interesting metadata on the postal mail also handled by people who know you and your business, a sighting in public with someone not your spouse, a visitor at an odd time of night, a strange car in your driveway - all these things could quickly become a public affair.
Technology is not bringing us a particularly new invasion, but it is helping at least that side of the "tight-knit communities" of old scale to modern population size and density. I think this is a horrific development, and it's certainly quantitatively unprecedented, but not qualitatively.
I used to help lead the paid search group at a top search agency and had a real bird's-eye view of where things were moving in that role.
Everything is moving towards audiences. While keywords and search queries are signals that highlight intent, ultimately the audience piece is what the advertiser cares about--that's just one component of it. This is why FB, Google and everyone else under the sun wants companies to upload their CRM data, and then they use that for retargeting (1st party, or 1P data), or building lookalikes.
Then you have Adobe and other companies trying to get companies to sell this data on a marketplace as 2nd party (2P) audience data for retargeting.
There are also companies like LiveRamp and others that try to get companies with login data to provide cookie matches against hashed email addresses to keep cookies fresh and prevent them from just being deleted once and forever. I've been approached by these companies, and always turned them down because it just felt dirty.
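Mechanically, a "cookie match against hashed emails" is very simple, which is part of why it spread; a toy sketch (all data and names made up):

```python
import hashlib

def hash_email(email: str) -> str:
    # Both parties normalise then hash, so they can join their records
    # on a shared key without ever exchanging raw addresses.
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

site_logins = {hash_email("alice@example.com"): "fresh-cookie-123"}
ad_profiles = {hash_email("ALICE@example.com "): "ad-profile-987"}

# The hash is the join key: the old ad profile gets re-linked to a fresh
# cookie even after the user deletes the original one.
for h, cookie in site_logins.items():
    if h in ad_profiles:
        print(cookie, "<->", ad_profiles[h])
```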
That said, this thread seems to draw the usual crowd of everyone who hates anything related to advertising. I'm not going to try to change your opinions because I know that is not going to happen. However the reason all of this data gets shared is because it allows better targeting which leads to more relevant ads, which leads to more purchases.
Think about that for a second.
People are purchasing more when the content is more relevant to them. Nobody is holding a gun to their head making them take out their wallets and hit "Purchase." They are saying "this product/service is relevant to me and I want to buy it."
In that manner, advertising is helping people who want to purchase said thing. The issue comes in with the fact that because targeting isn't perfect (and I doubt anyone wants the level of tracking needed to make it so), and because a lot of advertising is building awareness (not simply retargeting and reminding you to buy something you initially displayed interest in), it becomes intrusive in a manner people dislike.
Unfortunately, because of the data available, there's still plenty of people who say "hmmm, I didn't know about this, but it seems interesting, I'll check it out" and then they purchase. So from an advertiser's standpoint looking at a spreadsheet of data they see "this audience segment had a conversion rate of X and an ROI of Y" and they keep doing it if it is profitable because that is what they are optimizing for.
I actually enjoyed Jacques' piece, and I do think that there is some very questionable stuff going on in the ad space. The example of a random app tracking and selling data totally unrelated to said app is a great example. Companies are finding that they can monetize their data without visibly degrading the user experience by showing ads, and still get paid on a CPM rate for it, so expect to see more of that.
At the end of the day, I say all of this to highlight the fact that is often left out of pieces like this, which is that things are the way they are now because it works. Advertisers wouldn't be doing it if it didn't work, which means consumers are voting with their wallets in large enough numbers to keep fueling this behavior. In Jacques' restaurant example, he was put off by the restaurant special promoted on his phone. I'd probably behave the same way because I've developed an aversion to the more invasive aspects of my industry and I'm overly sensitive to it now. But Joe Consumer? They see a relevant deal that will save them money and say "hmm, I like what they are offering, and it is a fair price, I guess that just made my decision easier" and they go eat at the restaurant. So the restaurant sees that, for all the Jacques who see the ad and keep walking, the pittance they pay gets enough Joes in the door to make it profitable, and they keep doing it.
The positive feedback loop created by more targeting leading to higher profits means that it is working and we'll see more of it until the feedback loop is broken. Ad blockers are one avenue towards attempting to break it, and legislation is another. The question is whether pulling on those two levers will be enough to reduce the efficacy of the feedback loop to the point where advertisers stop doing this.
And a final note to those who might respond to my post. Please note that I'm not trying to paint an overly rosy picture of what advertising does or in any way trying to defend some overreaching aspects of it. I think people should own their data and be entitled to controlling how it is used. That is not the reality of the world we live in though, and so I'm simply making observations about how it impacts the various parties involved beyond just the protagonist of Jacques' story. I think there are more "clean" ways of doing advertising, that rely on a strong creative message, etc. Or viral ads that get shared because they are creating great content. But at the end of the day the media person's job is to take that ad/content and get it in front of the audience they are targeting.
People mention there's no choice anymore. Wrong! It's still there, just like it was 10 or 15 years ago. Stop sharing your personal information online and the whole tracking thing doesn't matter anymore.
This analogy seems completely flawed imho. Nobody can get inside my home, or force my door or any of that nonsense, unless I specifically allow them when they ask!
I fail to understand how all these trackers can read my browsing history without me installing <popular plugin> and allowing it access to my browser? Or how are they going to read my contact list from my Android phone, or the one from my Thunderbird? Through thin air?
Nobody took the choice from us, we just happened to open wide our front and back doors, and then complain that random people come in and look through our stuff.
Would also be nice to see these reactions on the issue list so you can get a feel for the issues at a glance without digging deep into each one.
> Have feedback on this post? Let know on Twitter.
Not everyone uses Twitter. It would be awesome to give feedback using the one account I'm guaranteed to have: a GitHub account. Otherwise I have to ask my question on HN...
Or please don't. Part of the problem with the +1s is that they add noise. How are reactions going to cut down on the noise? Telling people to go ahead a +1 an issue (increase noise) is the opposite of what the "Dear Github" maintainers want.
Many projects do not use +1 or any other voting scheme to elicit priority from the general public. +1 comments and reactions provide little value. I have seen GitHub issues where people +1 already-closed issues because they do not bother reading.
The following reactions are available:
1. +1
2. -1
3. smile
4. thinking_face
5. heart
6. tada
It'd be cool if they added a way to search through your list of reactions. This would allow you to effectively comment on an issue in an open-source project, while simultaneously bookmarking it, so that you can go back and commit a fix when you have a free moment.
- Don't allow a user to rate his own posts.
- Don't allow a user to issue contradicting votes like +1 and -1 at the same time.
- Use image emoji like everywhere else on the site for compatibility.
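A sketch of how those rules might be enforced server-side (a hypothetical data model, not GitHub's implementation):

```python
CONTRADICTORY = {("+1", "-1"), ("-1", "+1")}

def add_reaction(reactions, user, post_author, emoji):
    # Rule 1: no rating your own posts.
    if user == post_author:
        raise ValueError("users cannot rate their own posts")
    existing = reactions.setdefault(user, set())
    # Rule 2: no contradicting votes from the same user.
    if any((emoji, prior) in CONTRADICTORY for prior in existing):
        raise ValueError("contradicting votes are not allowed")
    existing.add(emoji)
```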
*Or at least the first time I've seen them used as an important feature.
I don't really get how I should "love" an issue, or "this issue makes me happy". Or the relevance of a "thinking face". The UI would be simpler with only 1 or 2 icons.
At least there's no "this issue makes me sad/angry" buttons.
I wonder if they will allow for repository owners to select which reactions they will allow? I think that would help with the limited selection but still allow owners to select what they consider useful to them.
This way you get better feedback.
That said, it'd be interesting to see a breakdown by age and background in terms of supporting or not supporting this addition.
One thing I don't like is that you're able to add multiple reactions to the same item, it sends mixed signals.
cleans up the existing issue threads.
This is something that's in your mouth a lot and constantly exposed to saliva.
The Dimension 1200es mentioned doesn't appear to be specific to medical applications. The product page lists the only compatible thermoplastic being ABSplus-P430. The MSDS for that basically says the stuff is dangerous in molten form, and beyond that there's very little data. The same company makes "Dental and Bio-Compatible" materials for use with their other products, and these appear to have considerably more safety data.
> The aligner steps have been printed, in addition to a riser that I added in order to make sure the vacuum forming plastic (sourced from ebay) ...
As another commenter pointed out, the vacuum forming plastic is probably the primary concern because the 3D printer was just used to create the molds. The specific type of vacuum plastic isn't mentioned.
Regardless, very neat project.
The animation definitely seems the most difficult (and subjective), but also the most cool! Body hacking via computed geometry!
Invisalign (align technology) uses almost the same workflow. Market cap $5.89B.
If you could move the workflow over to something based on WebGL / three.js - you could make this accessible to dentists in developing countries. Could be an awesome open source project.
I think "allowing" it to be used in the US would open yourself up to too much liability though :(
I'm going to send this to my dentist (who's cool enough to appreciate it).
Anyway, the future is promising, and the issues could be solved by taking into account all the factors.
Anti-vaxxers are idiots and it is obvious that vaccines don't cause autism (original study was a fraud). The health benefit of vaccines is as undeniable as the lack of correlation to autism.
That said, dental amalgam is a chunk of mercury in your mouth. The FDA says it is safe for people over 6 years old, but I personally will stay away from it for any future dental work.
My grandfather used to make dentures, and that casting in the 4th photo looks exactly like the impressions he would make. They also used these hinges so they could mate the upper to the lower, and adjust any collisions that occurred while opening and closing the mouth.
But on a serious note: I had braces, and after they were removed a wire was placed behind my teeth to keep them in place. It didn't stick to one of my ceramic teeth, which I had from an accident in my youth. The wire was removed, and after some months my front two teeth were as far apart as ever. OK, the overbite didn't return, but things will move back at least to some degree over time.
As mentioned before, I myself would never just put any plastic material in my mouth with all the bad things known about plasticisers, bpa/bps, etc.
Had two teeth done for under $500 10 years ago.
It's a stop gap until braces are an option financially.
I guess this would work better with those with gaps or very mildly crowded teeth.
Often crowded teeth result in pulling teeth to make room.
First, teeth and their movement are more complicated than they might first seem. You have to think about the entire masticatory apparatus, for example:
There's more root than crown; how does the root move in relation to the tooth? Root resorption is a common problem in orthodontic treatment.
Is there / will there be enough bone surrounding the tooth to support the intended movement?
How will the patient's occlusion (how the teeth fit together) be affected? Part of the Invisalign process is to take a bite registration that shows the upper and lower teeth in relation to each other. This is important, and ignoring it can potentially lead to other complications:
- stress fractures
- supraeruption of opposite tooth
- TMJ pain
Does the patient display any parafunctional habits that will affect the new tooth positions? For example, do they grind, clench, or have abnormal chewing patterns?
Many Invisalign techniques require the placement of anchors, holds, and various other structures attached to the teeth themselves. They allow for more complex movement than the insert itself would be able to provide.
Adjustments are often required mid-treatment. Not everybody's anatomy and biology is exactly the same, so you have to adjust accordingly.
Now, does every general dentist take this into account 100% of the time? No, but they're at least trained to recognize these situations and compensate for them.
That said, many simple patients don't require any more thought than the OP put in. It's a good thing he looked in a textbook and realized that there's a limit to how much you should try to move a tooth at each step before you're likely to run into problems. And if you do run into problems, do you think a professional is going to come anywhere near your case?
A few issues I have with his technique:
Unless he poured his stone model immediately after taking the impression, it's likely there was a decent loss in accuracy. Alginate is very dimensionally accurate, but only for about 30 minutes. The material that most dentists use, PVS, is dimensionally stable for much, much longer (not to mention digital impressions).
Vertical resolution of the 3D print does matter: you might be moving teeth in only two dimensions, but you're applying the aligner over three dimensions.
Again, I think it's awesome that someone gave this a shot, and did a fairly good job as well. I'm all for driving the cost of these types of treatments down, as well as promoting a more hacky/open approach to various treatments. Just know there's more than meets the eye.
* I decided to go back to tech; there's too little collaboration in dentistry for me to make a career out of it.
Not sure what I would do if we didn't have a dental school.
When I go there I am always surprised to find people who actually have insurance who still go there despite all the hassle.
> Perhaps the most chilling quote of the Soviet era came from Lavrentiy Beria, Stalin's head of the secret police, who bragged, "Show me the man, and I will find you the crime." Surely, that never could be the case in America; we're committed to the rule of law and have the fairest justice system in the world.
> This should make everyone fearful. Silverglate declares that federal prosecutors don't care about guilt or innocence. Instead, many subscribe to a "win at all costs" mentality, and there is little to stop them.
> The very expansiveness of federal law turns nearly everyone into lawbreakers. Like the poor Soviet citizen who, on average, broke about three laws a day, a typical American will unwittingly break federal law several times daily. Many go to prison for things that historically never have been seen as criminal.
> John Baker, a retired Louisiana State University law professor, made a similar comment to the Wall Street Journal: "There is no one in the United States over the age of 18 who cannot be indicted for some federal crime. That is not an exaggeration."
Do you even know what the 134 laws passed by the current Congress are? I know I don't, and you only have to fall afoul of one.
The insinuation is that the no-fly list should be expanded to catch domestic criminals. You know, the no-fly list that you can't be removed from and don't have to have committed a crime to get on. The no-fly list that is unconstitutional. Yeah, that one.
Another quote:
> First they came for the Socialists, and I did not speak out, because I was not a Socialist.
> Then they came for the Trade Unionists, and I did not speak out, because I was not a Trade Unionist.
> Then they came for the Jews, and I did not speak out, because I was not a Jew.
> Then they came for me, and there was no one left to speak for me.
I guess I'm on that list now.
Are you really trying to tell me that surveillance of ~the rest of the world~ is somehow less bad?
I suppose we'll just have to start assuming that anything said or written or looked up online will eventually be accessible to anyone who's interested.
This country is so f#cked now.
What are the remaining options?
1. Those who accept the new reality will be forced to behave like sheep.
2. Those who speak up will be silenced even before they will have organized some kind of serious movement (see: OWS + parallel construction).
Don't use electronic messaging to tell people you're holding cash; bring a cheque at least. The cops are going to find out, and your cash is going to be confiscated.
Are there crowdfunding sites that help fund civil rights cases?
Edit: In other words, I'm beginning to think the only way to chip away at this sort of massive government overreach is to crowdfund the hell out of a ton of legitimate civil rights cases. Eventually maybe our government(s) (since I imagine US is not the only one to be concerned about) will get the hint.
However, if the complete 'graph' is just handed over then even the most discreet of networks could be uprooted, all chains and links identified, geolocated too. This could be done. All it would take is a Donald Trump to take it up another level. He could get a lot of votes by promising to use NSA data to eradicate drugs from America once and for all, with no drug-dealer left behind...
Aside from the 'where next' aspects, as it is, I found this article to be quite shocking. So much for the 'land of the free'.
We are literally one economic crisis or major terrorist attack away from some form of significantly more authoritarian if not outright totalitarian government. Whether it would be "left" or "right" is sort of up in the air, and might depend on which side is able to produce a more compelling demagogue at the right time. In any case if history is any guide it doesn't matter much. Totalitarianism is totalitarianism.
If that comes to pass, we're going to find out what "turn-key totalitarian state" means. The infrastructure is in place. The only barriers are legal and social/cultural.
Now various agencies can trawl back through decades of data collection and see what people can still be prosecuted for, extorted with, etc.
These actions were happening anyway. Aren't we better off now that they're acknowledged and can be challenged?
The question will become relevant soon enough.
"All lives matter" gets you harassed and possibly fired.
As a Go player, I'm really excited to see what kind of play will come from that!
I definitely did not expect that.
Major credit to Lee Sedol for toughing that out and playing as long as he did. It was dramatic to watch as he played a bunch of his moves with only 1 or 2 seconds left on the clock.
(or something like that)
At this point it seems likely that Sedol is actually far outclassed by a superhuman player. The suspicion is that since AlphaGo plays purely for probability of long-term victory rather than playing for points, the fight against Sedol generates boards that can falsely appear to a human to be balanced even as Sedol's probability of victory diminishes. The 8p and 9p pros who analyzed games 1 and 2 and thought the flow of a seemingly Sedol-favoring game 'eventually' shifted to AlphaGo later, may simply have failed to read the board's true state. The reality may be a slow, steady diminishment of Sedol's win probability as the game goes on and Sedol makes subtly imperfect moves that humans think result in even-looking boards...
The case of AlphaGo is a helpful concrete illustration of these concepts [from AI alignment theory]...
Edge instantiation. Extremely optimized strategies often look to us like 'weird' edges of the possibility space, and may throw away what we think of as 'typical' features of a solution. In many different kinds of optimization problem, the maximizing solution will lie at a vertex of the possibility space (a corner, an edge-case). In the case of AlphaGo, an extremely optimized strategy seems to have thrown away the 'typical' production of a visible point lead that characterizes human play...
In 1978, chess IM David Levy won a six-game match 4.5-1.5 - he was better than the machine, but the machine gave him a good game (the one game he lost was when he tried to take it on in a tactical battle, where the machine proved stronger). It took until 1996/7 for computers to match and surpass the human world champion.
I'd say the difference was that for chess, the algorithm was known (minimax + alpha-beta search) and it was computing power that was lacking - we had to wait for Moore's law to do its work. For go, the algorithm (MCTS + good neural nets + reinforcement learning) was lacking, but the computing power was already available.
But maybe this is all just human prejudice... i.e. what this really goes to show is that, in the final analysis, all the board games we humans have invented and played are "trivial", i.e. they are all just like tic-tac-toe, only with a varying degree of complexity.
This is our Deep Blue moment, folks. History is made.
The one solace was that Lee Sedol got his ko =) However, AlphaGo was up to the task and handled it well.
I'm nowhere near a strong player, but it seems like AlphaGo is far ahead of Lee Sedol.
It would be fascinating to see how early AlphaGo assigned a very high probability to its winning. It would also be interesting to see whether there were particular moves that changed this assignment a lot. For instance, are there moves that Lee Sedol made for which AlphaGo's win probability was very different immediately before and after?
Our model of representing Go fails to express the game/strategies AlphaGo is showing; we are communicating on the board in different languages. No wonder everyone looking at the games is stumped by the machine's "moves".
Our brains lack the capacity to implement such algorithms (to understand such languages), but we can still create them. In the future we might see engine A play against engine B and enjoy the matches.
No one is surprised by a machine doing a better job with integer programming/operational research/numerical solutions etc.
"Demis Hassabis, Google DeepMind's CEO, has expressed the willingness to pick Ke as AlphaGo's next target."
(It also includes the videos of the first 2 matches)
In 2 years? In 1 year? In 3 months?
If you provide terrain (elevation etc.) information, AlphaGo can be used to corner opponents into an area surrounded by mountains where AlphaGo is sitting on the mountains. We all know what happens after that.
Don't want to kill the party but I am completely surprised with the lack of chatter in this direction.
Anyway, I think AlphaGo is a great training companion. I think Lee felt he was learning.
Finally, I also feel that while experience is crucial, the older generation gets flushed out by the younger generation every decade. I wonder if age really plays a role in championships - not that AlphaGo couldn't be considered a 1000-year-old "human", given it has played thousands of games already.
If you ever enjoyed Tekkit (minecraft) or the more automated part of Dwarf Fortress, you will like this game a lot.
My favorite part is that basically any action or building (up to the far late game) usually pushes you to build it by hand once or twice, and then automate the process forever.
It is extremely satisfying to build your first solar panel array while fighting for every resource, and then a few hours later have your 5000 strong robot army assemble a blueprint of that same array in 5 seconds while you watch.
The game also has a really great mechanic where you are constantly unbalanced for what resource you need to build up, but only because you decided to expand and create, so that your plans always digress into other plans and other problems to solve.
Also, the multiplayer is great and works really well, though it has a weird setting where you can set your own latency, and it seems like the host and clients are in lockstep, not allowed to skew (so if you have a laggy client it's a problem).
I have one warning to you which I am not aware of seeing elsewhere for it: something about the color palette gives me severe eye strain, to the point of "physically painful to play", in a way I have never experienced before or since.
There exist several other games on Steam these days with the same core build-a-(semi-)autonomous-factory mechanic. My favorite of those that I've played so far is Big Pharma. It's substantially less advanced in terms of factory mechanics than Factorio [+], but the strategy is very, very deep, much deeper than you'd expect from looking at it. (For spoilers on that score, see my Steam review, which is the topmost one on the page. Capsule non-spoiler summary: best $20 I spent last year.)
[+] A fairly key skill for Factorio, which is present in Big Pharma but not relevant except at the highest levels of play, is timing the production of multiple subcomponents (which might happen in different quantities, at different rates, at variable distance from where they are consumed) such that one's production line never starves, blocks, or overproduces. It's Toyota Factory Simulator 2016 in this respect. When you get it wrong you get visual feedback (congestion on your production line as e.g. coal stacks up because your furnaces aren't burning it because they're blocked on insufficient supplies of ore because you have insufficient electrical capacity because...). When you get it right, it feels like you're playing a symphony of borglike capitalist efficiency.
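To make the ratio-balancing concrete, here's a back-of-the-envelope sketch; the recipe names and rates are made up for illustration and are not actual game data:

    // Back-of-the-envelope ratio balancing for a production line.
    // Recipe names and rates are illustrative, not actual Factorio values.
    interface Recipe {
      craftSeconds: number;   // time per craft
      outputPerCraft: number; // items produced per craft
    }

    // How many machines running `producer` keep `demandPerSecond`
    // items/second flowing without starving the downstream line?
    function machinesNeeded(producer: Recipe, demandPerSecond: number): number {
      const perMachine = producer.outputPerCraft / producer.craftSeconds;
      return Math.ceil(demandPerSecond / perMachine);
    }

    const gearWheel: Recipe = { craftSeconds: 0.5, outputPerCraft: 1 };
    console.log(machinesNeeded(gearWheel, 4)); // a 4 gears/second belt needs 2 machines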
This is one of those games where it starts innocently enough, a few hundred hours of gameplay accrue, and the next thing you know you're wondering where the last two months went.
Part of the reason for this is related to the decline of the SimCity franchise over the years. Cities: Skylines was pretty, but it didn't really hit the same notes in terms of gameplay. The last real SimCity game was 2004, so it's been well over a decade of waiting for the next truly addictive builder/simulation fix to arrive. Based on the number of people I know who have sunk triple-digit hours into Factorio, it's the closest we've come to a real SimCity game since.
That said, it seems like a fantastic game and I look forward to playing it some day.
When playing with others, it's comparable to programming with others. It's about communication, people will try to optimize stuff that already works, people will argue about how to build stuff or when to refactor stuff. Sometimes they will build something in a weird way and argue why this is the best way etc.
It can be a nightmare with the wrong people or a lot of fun with intelligent people who can control their emotions.
Also, the community is amazing and there are tons of mods.
You might also like our Minecraft Modpack, Resonant Rise. You can grab it on the ATLauncher (just search for ATLauncher). It's designed around complex and interesting engineering challenges.
There's a similarly REALLY cool Minecraft mod on the scene for Minecraft 1.8 called "Psi" that you can try (available here: http://psi.vazkii.us/). It lets you use a visual dataflow language and trigonometry to create "magic spells" that are very technical in nature. It's a very fun exercise, and it's neat to write a program (with almost NO flow control!) that does things like dig a tunnel or build a bridge or throw zombies skyward, all by magic.
It just works, and it's smooth as silk. It seems like they've concentrated heavily on the game engine and now are focussing on content, but there is plenty of content already to play with.
PROTIP: Try multiplayer. Unexpectedly, my girlfriend loved it and we had a lot of fun there.
Also, they are hiring: http://www.factorio.com/jobs
By the way, thanks for supporting this on Linux! Wouldn't play it otherwise.
This has been the most popular game at my LAN parties (http://kentonshouse.com) for the last year and a half. Here's a review I wrote in December 2014 that still applies (original at https://plus.google.com/+KentonVarda/posts/YHayo6sj42n):
My new favorite game is Factorio (http://factorio.com). It's like a cross between Minecraft, SimCity, and Civilization, and the result is massively better than any of them. The game is currently in "alpha", but I'm not sure why; it's far more polished and less buggy than many finished professional games I've played.
Overhead view. Like Minecraft, you start out punching trees for wood to craft a pickaxe with which you can then mine some ore to craft other things. But soon, you are building an automatic mining drill, then a conveyor belt to bring the ore to a smelting furnace, then robot arms to insert the ore into the furnace and take the smelted bars out, then more conveyor belts to bring those to other places where they can be used. Eventually you can build power plants, labs to research new technologies, walls and turrets to defend against attackers, oil refineries, robot delivery drones, trains, and more.
The game is incredibly addictive (especially for programmers?). But what really impresses me is how the game illustrates the complexity of the real world. Factorio is a lesson in how logistics trump tactics and strategy ("strategy is for amateurs, logistics are for professionals"), and in how to build a complex system for changing requirements. The lessons are broadly applicable to the real world.
It's fairly easy to analogize Factorio to city planning. In your first game, you will quickly discover that the city you built for the early game is all wrong for the late game -- and then you realize: every real-life big city is a horrible mess and this is exactly why.
I also find myself comparing Factorio to software, especially distributed systems and networks. I find myself constantly using phrases like "buffer", "flow control", "back pressure", "throughput", "refactor", "under-utilized", etc.
One transition I find particularly interesting: around the middle of the game, you research the ability to build "logistics drones", which are basically like Amazon's quadcopter delivery drones. They can transport materials from point to point around your base -- you set up "request" points and "supply" points, and the drones pick up whatever items land in the supply points and bring them directly to whichever requester is requesting that item.
Up until this point, you mostly use conveyor belts for this task. When you first get logistics drones, you think "These are WAY more expensive than conveyor belts and have much lower throughput. Why would I ever want them?" But you quickly realize that the advantage of drones is that they are rapidly reconfigurable. Once your base is entirely drone-based, you can switch factories to build different items on a whim -- no need to re-route any conveyor belts. This gets more and more important in the late game as the number of different types of things you are building -- all with different input ingredients -- increases, and maintaining a spaghetti of conveyors becomes infeasible. This is tricky to grasp until you do it.
For a while, of course, you'll have part of your base running on drones while another part is still based on conveyors. It's like using Google Flights in your browser to search for airline tickets, while on the back end it is integrating with 60's-era mainframe-based flight scheduling software.
I can't help but imagine that conveyor belts and logistics drones represent two different programming languages (or, maybe, programming language paradigms). Choosing your programming language based on how easy it is to do something simple is totally wrong. The true measure of a good language is how it handles massive complexity and -- more importantly -- reconfiguration over time.
Another thought: In 10-20 years, when we have everything delivered to our houses via drones and self-driving taxis populating every major street, will we be able to just get rid of small residential side-roads? You won't need to drive a car up to your house anymore: it's easy enough to walk a couple blocks to the nearest major street and hop in a cab, or better yet to a train station. You don't need to carry cargo since it's delivered by drones. Delivery trucks: also replaced by drones. Will we suddenly be able to reclaim a ton of inner-city space? What will we do with it?
In any case, thanks to +Michael Powell and +Brian Swetland for introducing me to this game!
PS. Factorio is multiplayer! We've been having a lot of fun with it at LAN parties, and I just completed a coop game with +Jade Q Wang, who is also addicted. We tend to forget to do things like eat or sleep when we're playing.
I think a very good extension would be to provide new technology for sustainable energy generation, and even further, the ability to harness the planet as a super living thing (as Asimov imagined in his Foundation series).
This would provide an even more constrained and challenging environment: maintaining the planet's equilibrium on one hand and progress on the other.
Maybe those aren't really valid concerns at the beginning of a game. But afterwards it'll be important, as massive industrialization reaches ecological relevance.
Or hit me up: email@example.com (Founder/CEO)
This looks right up my alley, downloading it now.
I'm wondering if playing a game like this can help train a habit of automation. Thoughts?
I've made two mods that suit the engineering mindset well: static difficulty stops the time-based increase of enemy difficulty: https://forums.factorio.com/viewtopic.php?f=87&t=6433
endless resource makes deposits endless (with diminishing returns) so all your railroads don't suddenly vanish: https://forums.factorio.com/viewtopic.php?f=94&t=3130
check them out :)
I... may have consumed copious amounts of hotel wifi playing it over VPN one night. ;)
Static types allow for much, much better tooling, particularly autocomplete and the ability to check whether your code is valid on some basic levels. I consider avoiding it to be a big waste of time. I've had a good experience getting the definitions files going for the libraries I use.
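For anyone who hasn't seen one, a minimal sketch of a hand-rolled definition file for an untyped library (the library name and API here are hypothetical):

    // types.d.ts - a minimal TypeScript definition file so the compiler
    // and autocomplete know the shape of an untyped library.
    // `legacy-lib` and its API are hypothetical.
    declare module 'legacy-lib' {
      export function parse(input: string): { [key: string]: string };
      export const VERSION: string;
    }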
He proposes a fairly simple stack (and for the sake of argument he assumes your needs are beyond the 'static html and a touch of jQuery' stage). He spends time explaining them and makes a fairly good attempt to avoid the overly-new or overly-complex.
We've had all the obvious reactions:
1. This isn't my stack of choice
2. React is flawed
3. Don't use frameworks at all
4. I hate dynamic typing
6. It will have changed by next week
All of these are valid discussions to have but they get wheeled out every time and - maybe with the exception of point 1 - they are only indirectly related to the topic of this post.
So every js discussion becomes a meta-discussion. Same thing with Google posts ('oh they'll close that down next week'), ORMs ('they suck'), Python ('Python 3 was a mistake') etc.
HN comments need an on-topic vs off-topic filter. Or a "yes we already know that" filter...
(The irony of the above statements when this is also a meta-post is duly noted)
My own feeling is that everyone should avoid jumping on complex frameworks until they are really needed. jQuery, Pjax or intercooler.js can take you a long way and save a lot of headaches. But when you do need a proper MVC-like framework then this article is a valuable guide of the sort that people have been asking for for months.
This just doesn't make any sense. None of this is nice. It's all ugly and complicated. There's no beauty to these tools. There are no fundamental tools either. Everything is evolving too quickly. You've a choice of 25 frameworks, libraries, tools that change every day and break between versions. The complexity is growing and the ecosystem is getting extremely fragmented.
Instead of just writing your application, you're in despair trying to choose the right tools, panicking about not being able to understand them, and then you spend weeks learning them. You end up writing an application that glitches, and you've no idea how to fix it because you depend on a dozen different tools. The worst part is that you don't even need them. You've been tricked by peer pressure into using them.
When you've a complex, fragmented ecosystem, and developers are stressed because they can't understand and learn tools quickly enough, the only logical conclusion is that it will collapse: only a few technologies that get mass adoption will survive, and everything else will be forgotten.
I also think the author is a bit too strongly in favour of `mocha`; I don't think `ava` should have been so easily dismissed, and I've recently run across a pretty nice framework called `painless`. And even if you do use `mocha`, I find `expect` to be a better assertion library than `chai`. I think a better answer here might be "use whatever works for you, so long as it's not Jest". (The shoutout to `enzyme` was on point though; great library if you need to test React components.)
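For flavor, a minimal mocha test using `expect` instead of `chai` (a sketch only; `add` is a hypothetical function under test, and `describe`/`it` are mocha's globals):

    // Run with: mocha (describe/it are provided as globals by mocha).
    import expect from 'expect';

    // Hypothetical function under test.
    function add(a: number, b: number): number {
      return a + b;
    }

    describe('add', () => {
      it('sums two numbers', () => {
        expect(add(2, 3)).toEqual(5);
      });
    });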
Seems to be way faster and easier to learn than any of those other frameworks/libs.
For example, "How is Mithril different from React?" Source: https://lhorie.github.io/mithril/comparison.html
"Another difference is that Mithril, being an MVC framework, rather than a templating engine, provides an auto-redrawing system that is aware of network asynchrony and that can render views efficiently without cluttering application code with redraw calls, and without letting the developer unintentionally bleed out of the MVC pattern.
Note also that, despite having a bigger scope, Mithril has a smaller file size than React."
If you work for a software company and your company has patents then keep in mind that by using React you are giving Facebook a free license to your entire patent portfolio.
More info on weak vs strong retaliation clauses: http://www.rosenlaw.com/lj9.htm
1) This is the author's favorite setup in 2016. With all due respect, what is the lasting value of this information?
2) There is no "best" architecture. It depends on what problem one is solving. The author does not specify that, making their conclusions likely completely wrong in most cases. Yet, the language they use is in absolute terms.
4) It curiously evangelizes the "latest and greatest" frameworks, thus incurring novelty bias -- new frameworks have fewer apparent problems because many side effects only become apparent years later, once the codebase is mature.
PS: Ramda eats lodash and its imperative API for lunch. It's for power users: everything curried, higher levels of abstraction. Pick it up and learn it; it'll make you a better programmer.
Next stop: ClojureScript. Om Next is a library where you can get a feel for a Falcor + Relay stack in like 70 lines of code, all without the specific tech bloat. David Nolen is a UI prophet; just follow him.
Use typescript and just write it to do whatever you want it to do. You don't need React, or any of this stuff. Chai is good for testing. But as far as deploying a production application, there should be literally no dependencies. You don't need them.
For us contractors, we have to answer to clients we had 2 years ago about why their app is in Backbone.
I mean, damn; we have to build software here and we aren't all Facebook. You might get warm and fuzzies from constantly starting over and feeling like you've chosen the right framework, but it's immature.
Oh, we got it right this time! React is a paradigm shift! We've quickly forgotten we were saying this with Angular bindings. Oh your model based stuff is crap, this has TWO WAY BINDING, it's a paradigm shift!
Now, I'm using Angular. I could recite the Backbone source code; we had a few small libraries and we built huge apps and they worked (and they were built with Grunt and it worked fine, but hey, move it all to Gulp! Now! Paradigm shift!). In this case I was expecting it. I waited six months and webpack came along.
We're going to go ahead and build our app in Angular 1.x with TypeScript and webpack and test it with Jasmine.
This article is NOT correct. This hasn't been 'decided'; there is no clear winner. You can't simply list the features of something as "amazing" and "where it's at". You are arguing finality here, and your main data point is "coolness factor". It's not correct, it's not objective, and it isn't high-quality, long-term, well-thought-out software development practice.
Build your app first. Then when you are unhappy, see if these tools make your life easier. But play with them first, don't make upfront commitments.
This argument against CoffeeScript isn't very objective. One of CoffeeScript's best features is the minimalistic and expressive syntax.
"CoffeeScript is #1 for consistency, with an IQR spread of only 23 LOC/commit compared to even #4 Clojure at 51 LOC/commit. By the time weve gotten to #8 Groovy, weve dropped to an IQR of 68 LOC/commit. In other words, CoffeeScript is incredibly consistent across domains and developers in its expressiveness."
Using the author's train of thought I could state: "Avoid Bluebird. Most of its better features are now in ES6 promises, a standard."
Yes promises are in the ES6 standard, but that's not the best feature of bluebird. There were and are many promise based libraries, but bluebird was built for unmatched performance. One will use it if performance matters.
Even today it's faster than the native implementations of promises.
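A sketch of the kind of feature that keeps people on Bluebird: bounded-concurrency mapping, which native promises don't offer directly (`fetchUrl` and the URLs are hypothetical):

    // Bluebird's map with a concurrency cap: at most 2 requests in flight.
    // Native Promise.all would fire all of them at once.
    import * as Bluebird from 'bluebird';

    // Hypothetical fetch helper.
    declare function fetchUrl(url: string): Promise<string>;

    const urls = ['/a', '/b', '/c', '/d'];

    Bluebird.map(urls, (url) => fetchUrl(url), { concurrency: 2 })
      .then((bodies) => console.log(`fetched ${bodies.length} pages`));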
> Tooling (such as CoffeeLint) is very weak.
Maybe because, as it turns out, in CoffeeScript you don't need a lot of tooling. Why would that be a bad thing?
> Electron is the foundation of the great Atom editor and can be used to make your own applications.
Atom is written in CoffeeScript.
It's a huge pain to configure and understand all this tooling, but man is it nice once it's all working. It definitely gives me hope for the web as an application platform.
Re: Flow, it's good but nowhere near as developed as TypeScript, though it's getting better. 0.22 of Flow (released just a week ago) brought massive speed improvements that bring it decently on par with any other linter. I found I could finally re-enable it in Sublime Text after this release. It catches all kinds of things linters won't, and the up-front cost is relatively low. On the other hand, TS has the benefit of years of type files available for almost any library; don't underestimate how great that is.
Using React has been great as well. It's what I wish Backbone views had been from the beginning.
We certainly aren't wanting for choice. Between the dozens of Flux implementations, React alternatives like Mithril, interesting languages like ClojureScript (with Om if you want to keep using React) or Elm, multiple typing systems, and even WASM on the horizon - web development is an exciting field. It's also overwhelming, and I say that as someone who keeps his head in it over 12 hours a day.
Re: CSS, the story still feels incomplete. I want a way to write a library that will effortlessly import its own styles, so consumers can simply import the component and go to town. Most solutions are severely limited, slow, or both, mostly because they rely on inline styles. Nothing's wrong with inline styles, until you want to do :hover, :active, pseudo elements or the like.
See the React material-ui project for a very real example of how this can go wrong - note how every component has a dozen or more extension points for styles. I built a project with this recently and it was intensely frustrating that I couldn't simply add some CSS to the app to fix styles across the board - I needed to keep a giant file of extensions and be sure to apply them every time I used a component. And, of course, component authors can't possibly anticipate every possible use, so some of my selectors were ugly "div > div > ul > li:first-child" monstrosities.
CSJS (https://github.com/rtsao/csjs) is one of the few solutions I like. I would be very happy to see it, or something like it, go mainstream.
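As a point of comparison, here's the CSS Modules flavor of that idea, a sketch assuming webpack's css-loader with modules enabled (the component and file names are hypothetical):

    // Button.css (locally scoped by css-loader):
    //   .button { padding: 8px; }
    //   .button:hover { background: #eee; }
    import * as React from 'react';
    // Class names get rewritten to be unique per module.
    // (TS needs a `declare module '*.css'` shim for this import.)
    import styles from './Button.css';

    export const Button = (props: { label: string }) => (
      <button className={styles.button}>{props.label}</button>
    );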
Go to terminal
ember new app
start writing code.
> I like to keep it simple and just use fetch. It's promise-based, it's built into Firefox and Chrome, and it Just Works (tm). For other browsers, you'll need to include a polyfill.
I laughed. Is this satire? There's no reason to have X, therefore you need Y and Z to replace X. It just works.
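For reference, the pattern being mocked looks roughly like this (a sketch; the endpoint is hypothetical, and the polyfill import is a no-op where fetch already exists):

    // whatwg-fetch polyfills window.fetch in browsers that lack it.
    import 'whatwg-fetch';

    fetch('/api/items')
      .then((res) => {
        if (!res.ok) throw new Error(`HTTP ${res.status}`);
        return res.json();
      })
      .then((items) => console.log(items))
      .catch((err) => console.error(err));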
My only observation of note I guess is the sheer marvel that all of this is nearly entirely supported by the Open Source community, and the amount of engineering effort being poured into the JS ecosystem at this point in time is nothing short of astounding. I can't wait to see where it goes in the next few years.
One difference though is that I've read Relay and GraphQL will eventually win out over Redux. Thoughts?
Anywhere in your code where you are using identifiers, there should be a method to check for typos. That holds for CSS classes and QNames too.
You can check here with this tutorial, how easy to write a complex application with Ember.js: http://yoember.com
So if the graph's x axis is time and y axis is the amount of stuff you learn, then a flat learning curve means that you gain very little added knowledge as time goes by. And a steep learning curve means that you learn a lot in a small amount of time.
It always confused me why it's backwards.
I don't get the whole React love. I always learned not to mix view and business logic. React does this?
The interesting part is maybe that most apps look like this:
It does seem a bit stupid to load the browser just for the canvas element though, so if someone knows a better solution, please post!
- Core: React
- State container: Redux
- Language: ES6 with Babel
- Linting: ESLint
- Dependency management: NPM, CommonJS, and ES6 modules
- Build tool: Webpack
- Testing: Mocha and Chai
- Utilities: Lodash
- Fetching: Use whatwg-fetch or isomorphic-fetch rather than jQuery
- Styling: Sass, PostCSS, Autoprefixer, CSS Modules
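A minimal sketch of how two of these pieces snap together (Redux as the state container, ES6 modules via Babel or TypeScript); illustrative only, since a real app would add react-redux bindings:

    import { createStore } from 'redux';

    interface CounterAction { type: 'INCREMENT' | 'DECREMENT'; }

    // A reducer: (previous state, action) -> next state. Pure, no mutation.
    function counter(state = 0, action: CounterAction): number {
      switch (action.type) {
        case 'INCREMENT': return state + 1;
        case 'DECREMENT': return state - 1;
        default: return state;
      }
    }

    const store = createStore(counter);
    store.subscribe(() => console.log('state:', store.getState()));
    store.dispatch({ type: 'INCREMENT' }); // logs "state: 1"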
The whole function-composition style has a clean feeling, and LS removes a lot of cruft.
Then a friend of mine told me about Clojure/ClojureScript. I never understood hoisting.
Is this what it means to look for a job in the startup scene in 2016? It seems overwhelmingly likely that you'll be dropped into some unholy, frankenstein-esque work of "art".
While JS has a few pain points that should be addressed by tooling, I'd draw the line at React and jQuery and a bespoke, well-kept utility library.
> The machine was rather difficult to operate. For years radios had been operated by means of pressing buttons and turning dials; then as the technology became more sophisticated the controls were made touch-sensitive: you merely had to brush the panels with your fingers; now all you had to do was wave your hand in the general direction of the components and hope. It saved a lot of muscular expenditure of course, but meant that you had to sit infuriatingly still if you wanted to keep listening to the same program.
It's now company policy that built-in microphones have to be disabled, and only external ones are allowed to be used when necessary.
One side effect I've noticed is that they seem to have tried to account for it, which has made the Echo less responsive to actual requests; a few times I've stood in front of it yelling 'ALEXA' trying to get it to stop and it does not respond.
It's repeatable, too. One time it happened right as I was parking, on an episode of This American Life. (Or Serial. Or Planet Money. Yeah, yeah, I listen to a lot of NPR shows.) So I kept rewinding back over that part, and it kept triggering Siri.
Had to stop it and change the wake word back to "Alexa".
The show went through the opening sequence, then announced "Previously on Battlestar Galactica" at which point the xbox rewound back to the beginning of the show.
"Siri us xm..."
With the iPhone plugged in to charge while driving to work, hilarity ensues as it cuts out the audio to speak about whatever it thinks was asked.
Steve Blank's Innovation Corps initiative is now working with the NIH, NSF, and Defense to help companies develop product-market fit once basic funding has been achieved as well.
Disclaimer: My company GrantIQ.com helps companies find federal funding opportunities & helps large companies identify breakthrough technologies being developed through these programs.
I think making a hard tech product for consumers is still very hard, and those who succeed should be commended. People wanting to dip their toes in the water might consider things like building a factory management portal like Splunk that aggregates sensor data, or something that involves sensing and reporting rather than actuation. Actuation can be dangerous and therefore legally risky.
And that isn't an insult to Cruise. The problem space is still difficult, and they created more value tackling that than the vast majority of startups do on much easier problems. I happen to think that the opportunities for making the cutting edge more accessible vastly outweigh the opportunities for cutting new edges, and nobody should be discouraged from doing it because it isn't as glamorous as the research on the ground floor.
This article is not exactly reassuring, because the noted interests - AI, biotech, and energy - are all extremely crowded fields, more than likely competing with large institutions and research organizations, and, let's be honest, there are probably a TON of regulatory hurdles to consider if you plan to do business in the US.
For context, one of my primary inventions is a mobility/utility device that would have residential, commercial, and industrial applications. By design it can be applied to a variety of uses. I've done basic patent research and the pathway looks extremely good; it's quite tempting for me to just go ahead and file ASAP. It's hard tech. It's not glamorous, but it's a huge market opportunity prima facie.
I've got a co-worker pal who finally got his beverage and branding invention patented, and now he's marketing it to local schools and catalogs, and if early indications are correct, he may have a genuinely lucrative future with it. In my view, he's a more likely investor target than anybody out on the West Coast or in a VC room. I simply base this perspective on personal experience and on how this post sounds promising yet concludes with a short list of massive goals. Might as well wrap it up with "We just want to invest in a better mousetrap", considering the scope. Cottage industry isn't runaway freight-train profit creation, I get that, but I'd also counter that if one wants to get into the next frothy bubble of biotech, then good luck with that.
In relative contrast, it is trivial and risk-free to make a lean MVP for a generic Uber-for-X and send in a YC application.
A technology providing that could help make our politics more democratic. It could also lower the need for communication hierarchies like in big companies.
I'm doing research in this area and have built several prototypes. I strongly believe we can achieve the goal, but doing it with only one teammate researcher makes the process very slow. We are two computer scientists with master's degrees and need a bigger team.
Contact me if you are interested.
Cruise is Kyle Vogt's third acquired company. It's still ambiguous as to whether your first startup should be "hard tech" or not. I think in the case of self-driving cars it makes sense, because automobiles are a trillion-dollar market and the anxiety to stay competitive with the rapidly changing landscape is intense.
Plenty of room in this market yet. What is the biggest competitive advantage and who has it? Data & Tesla. Tesla is the only company with the appropriate data to make self-driving a reality faster than anyone else (tell me when Google starts collecting data in Norway). Data collection equipment is relatively inexpensive. Maybe figure out how to get people to attach sonar or lidar or tap into their car's computer to access that data. What else is a problem, even for Tesla? How about predicting pedestrian behavior and making eye contact -- or identifying if a pedestrian is disabled and sending a signal to those pedestrians that it is OK to walk.
At the end of the day the market being targeted has huge weight to the likelihood of a successful outcome.
Incidentally, if you want to get into YC: https://twitter.com/hunterwalk/status/708304772055441408
It is therefore extremely encouraging and important to hear this message clearly and directly from Sam, that this is indeed a lucrative area worth pursuing and that there are investors who have the risk appetite to continue to fund such ventures.
Wow. Passive aggressive much?
I'm not critical of Silicon Valley. Silicon Valley includes many companies working on hard problems, or investing money into long-term moonshot programs.
The criticism is levied towards, for example, the social media giants that pull in top engineers to work on social media problems exclusively. The criticism is also directed at Silicon Valley VCs, who lure smart young people to work on semi-trivial problems because it's the quickest path to profit.
It's not necessarily fair criticism. VCs have an incentive, first and foremost, to fund successful businesses. If their surest path to success in today's economy involves building semi-trivial apps, that's what they'll pursue. The same can be said for the finance industry, where the most successful players are employing our nation's top mathematicians and scientists to extract money from public markets using high-frequency trading. We can't expect them to self-regulate. But how can we incentivize smart people to work on something less lucrative?
Unlock water from the oceans, and a lot of people can be helped. I'm ready to start this company, just need a few dozen engineers/chemists and a boatload of funding.
It seems like companies who build products that have clear and significant business value with excellent market timing are successful. Perhaps you can extrapolate to saying hard tech is now a hot field but this is the first successful exit that comes to mind recently, rather than private investors putting more money on the table.
The combination of a few trends (Predominately smart phones, social networks & data analytics) enabled a whole new wave of technical possibilities. It was relatively quick to monetize this via App development, so that was the first wave to take advantage. Hard Tech can take longer (though from Cruise's point of view, not always that much longer) so these companies are only coming to fruition now.
When I look at AlphaGo's success, I think we are a new dawn of amazing things from Hard Tech.
Amazing to believe 2 years later they got bought for a billion dollars.
If the invention is good but the team seems incapable that would be a reason for non-investment.
Also - "hard tech" seems like the wrong thing to be aiming for. There are lots of hard problems to solve with simple tech and disrupting markets. Are these not valuable targets?
So I guess that's
Ginkgo Bioworks $54.12M (engineers new organisms) and
Helion Energy $12.11M (fusion) ?
Blunt. True, but blunt.
Hats off to Sam.
Hard tech has always been "back" (when did it go away, Elon Musk?), but it is quite the pivot from the YC of yesteryear that capitalized on things like better UX for AJAX calendars, web hosting, and drag-and-drop file storage.
The really hard tech though really evokes that initial visceral reaction of "too risky" for investors, especially if there's no real indicators in the form of traction or an MVP ("well just wait a sec there, professor, the problem is hard, so we haven't solved it yet").
As an investor, I certainly wouldn't feel comfortable shoveling stacks over to some guys who told me they were going to build true AI with a decade-long outlook. Yikes!
It's also a tougher proposition for founders. You're basically betting 5-10 years of your life on a problem that you don't even know you can solve with no revenue/exit strategy in sight. Meanwhile, that guaranteed salary at Google sure is looking more and more appealing.
I would almost recommend graduate school for these types of people looking to leave their mark on the world by solving a really hard problem, where any real contribution only inches the world closer to a solution.
The struggle is real for all actors here.
For founders, the trick is finding that sweet spot where a problem sounds hard on paper (such as self-driving cars or VR headsets, wooo!), but actually is feasible using current technology (e.g. stick some lenses in a piece of cardboard), but due to timing or market forces or whatever, nobody is currently paying attention to it yet.
Then at least, you can execute just like an AJAX calendar app would and obtain the same outcome (to vastly understate the challenges involved!).
And would most engineers consider Cruise really hard tech? I don't know (I mean that honestly, not passive-aggressively).
Don't pretend that Silicon Valley is not superficial in many regards. Yes, some of those silly, fluff apps have gone on to make money and that's your metric so you defend it.
But for many people, Silicon Valley embodies a different principle, like the one found at Xerox PARC, of people trying to make the future better and not just drive a nicer car.
Sama sounds like Donald Trump with the "I told you so". Unlike real estate, science can't be bullied to success. This makes me increasingly bearish on YC's future.
Sam channels Baghdad Bob for a minute there - "there are no American tanks in Baghdad. Especially not the one that just rolled by on-camera"
Lost all interest in saltman's point when I read this line.
It's a valid criticism, regardless if the people who are making it are "builders" or not.
It was a turn off, because it makes the author seem bitter about (what I think is) a valid criticism.
Why can or should criticism only come from people that are building things themselves? What about the press (as only one example), or people who write books? What about people teaching in colleges? What about people leaving comments on HN or any other forum?
Criticism only valid if coming from someone building something themselves? Don't agree with that. 
Which, to me, judging by the choice of words, is what is meant by this statement.
If that were the case, companies should never solicit any feedback from their customers about their product or their business model.
I thought Y Combinator self-posts were not "votable".
This is hardly hard tech, since the same thing is being done in a garage with off-the-shelf parts: http://www.bloomberg.com/features/2015-george-hotz-self-driv...
I have to take issue here. I don't need to be a "builder" to see that what a given startup is building is either worthless or an attempt at solving a non-problem.
I'm not a helicopter pilot, but if I see a helicopter stuck in a tree, I don't need to be one to know that the guy screwed up.
Jeez, can we please move on and do something that actually benefits this world with our talents?
Big-ass problem: Found one, where the first good solution is a must-have for nearly everyone in the world with Internet access.
"Hard tech" solution: A bunch of applied math, with advanced prerequisites, some original, for a unique, really good, fun to use, interactive and addictive, by far the best in the world solution -- Did that.
"Hard"? Silicon Valley has more hen's teeth than entrepreneurs who could understand the theorems and proofs of my math even if I explained it to them. Why? They didn't take the prerequisite pure/applied math courses in grad school. Neither did more than a tiny fraction of computer science profs.
Code: 80,000 lines of typing, running, in alpha test -- did that.
"We hope to hear from you."
You did, and you ignored it.
Using my HN UID, look up my submission in your records. If you are interested now, then let me know.
I used to have an rfid chip implanted in my hand. I had all these ideas of what I was going to do with it - log on to my computer by putting an rfid reader in my keyboard, build a magic 'touch the wall and music starts playing' thing etc. All of them turned out to be very boring and useless in practice. Logging into my machine - meh, turns out that it takes longer for monitors to come back on than to type the password. Building (useful) user interfaces based on rfid is very hard and doesn't add anything over a regular proximity sensor.
The only idea I have left is that in my current house, I have a place where I could put an rfid reader so that if I got a chip implanted in my gluteus maximus (my butt, essentially) I could activate an automatic door opener by bumping into it when I have my hands full. I just can't motivate myself to set this up only to discover once again that it's a dud in real life.
Edit: said death meant birth.
My main experience with RFID is cloning tags onto T5557 chips, and I don't think I've ever come across a writable tag in the wild. It doesn't seem to make economic sense to spend an extra penny or two on every towel to put in a tag you are never going to change.
That tag is about the right level of security for towel inventory. The big win in this is managing outsourced laundry costs. Knowing how many items went to the laundry, and how many came back, rather than just counting linen carts, matters a lot. ABS Laundry Solutions overview (with ominous music) 
I guess I'll wait for Apple to reveal the new MB Pro line before making a decision, but it seems that for the first time in 10 years my next laptop will not be a Lenovo/IBM.
I bought one of the first or 2nd gen XPS 13s and loved it. However the experience of buying from Dell was awful and customer service was so intractable as to be useless too.
This really annoyed me years ago when I spent a small fortune on a beautiful TV that had "COMPANY" in white letters on the otherwise perfect dark bezel.
Anyway, the extra complexity that comes with it doesn't make me comfortable...
Any experiences so far?
Also slight physical tremors can cause complete system crashes. I would stay away from it.
This applied to both the Broadcom and Intel wifi. Any chance these models are better in this regard?
I got the X1 Yoga one month after it came out, installed an alpha version of Ubuntu 16.04 on it and everything just works, including the touch screen.
It is unfortunate that Dell chose to use small arrow keys and at the same time overload the arrow keys with the 'Home-End-PgUp-PgDown'. Hard to believe this layout was chosen for their Latitude and Precision lines too.
Tux Penguin sticker solved my problem on my XPS 13, but would be nice to see it coming out of the box.
I did look for it in the review but maybe I missed it.
It's the laptop that flipped me to Mac. Won't go back.
That chiclet keyboard and phone-sized pad nonsense is very limiting.
Lots of it is obviously totally unenforceable and wouldn't stand in court, but they put it in there anyway just because they can get away with it.
Does no one do reasonable EULAs/ToS?
I have no experience with preinstalled Linux, but, similar to Android, I would be afraid of preinstalled crapware. Just remove Windows and do a clean install.
I have a Dell XPS/Precision 11 and a 13. The problem is that Windows 7 has difficulty booting from UEFI, and you will get stuck because AHCI is not supported by these Dells' BIOS.
It looks very much as if we're going to lose all of that to vertical data silos that will ship you half-an-app that you can't use without the associated service. We'll never really know what we lost.
It's sad that we don't seem to be able to have the one without losing the other, theoretically it should be possible to do that but for some reason the trend is definitely in the direction of a permanent eradication of the 'simple' web where pages rather than programs were the norm.
Feel free to call me a digital Luddite, I just don't think this is what we had in mind when we heralded the birth of the www.
If anyone would like to get involved with helping us prepare, please see https://internals.rust-lang.org/t/need-help-with-emscripten-...
EDIT: See also asajeffrey's wasm repo for Rust-native WebAssembly support that will hopefully land in Servo someday: https://github.com/asajeffrey/wasm
Since the last time WebAssembly hit HN, we've made a lot of progress designing the binary encoding for WebAssembly.
(Disclaimer: I'm on the V8 team.)
Links: http://webassembly.github.io/ and https://github.com/WebAssembly/design/blob/master/BinaryEnco...
- Better sharing of code between different applications (desktop, mobile apps, server, web etc.)
- People can finally choose their own favorite language for web-development.
- Closer to the way it will be executed which will improve performance.
- Code compiled from different languages can work / link together.
Then for the UI part there are those common languages / vocabularies we can use to communicate with us humans: HTML, SVG, CSS etc.
I only hope this will improve the "running same code on client or server to render user-interface" situation as well.
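For the curious, loading and calling a WebAssembly module from JS/TS looks roughly like this (a sketch; `add.wasm` is a hypothetical module exporting an add function):

    // Fetch the module bytes and instantiate them.
    async function loadAdd(): Promise<(a: number, b: number) => number> {
      const bytes = await (await fetch('add.wasm')).arrayBuffer();
      const { instance } = await WebAssembly.instantiate(bytes);
      return (instance.exports as any).add;
    }

    loadAdd().then((add) => console.log(add(2, 3))); // -> 5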
Right now things are a mess in Web Worker land, and have been for quite some time.
http://docs.racket-lang.org/pollen/ and http://practical.typography.com
On the top of the page, there is a horizontal menu containing "App Dev Cloud Data Center Mobile ..."
When I position my cursor above this menu and then use the scroll wheel to begin scrolling down the page, once this menu becomes aligned with my cursor, the page immediately stops scrolling and the scroll wheel functionality is hijacked and used to scroll this menu horizontally instead.
It took a few seconds to realize what was happening. At first I thought the browser was lagging - why else would scrolling ever abruptly stop like that?
I closed the page without reading a single word.
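The culprit in cases like this is usually a wheel handler that calls preventDefault unconditionally. A sketch of a gentler variant (the selector is hypothetical):

    const menu = document.querySelector('.top-menu') as HTMLElement;

    menu.addEventListener('wheel', (e: WheelEvent) => {
      const canScroll =
        (e.deltaY > 0 && menu.scrollLeft < menu.scrollWidth - menu.clientWidth) ||
        (e.deltaY < 0 && menu.scrollLeft > 0);
      if (!canScroll) return;  // at the edges, let the page scroll normally
      e.preventDefault();      // the unconditional version of this is the bug
      menu.scrollLeft += e.deltaY;
    }, { passive: false });    // preventDefault requires a non-passive listener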
On the other hand we have lock-in ecosystems, closed silos, that are detrimental to the commons.
The only consolation I have is that if WebAssembly provides a bytecode instead of machine code then we still have the ability to perform reverse engineering.
In the end, we all have to do the hard work of informing every single person why Apple/FB/MS/Google are harmful to us and why we should boycott their programs/services.
1) 'Simple' web pages will stick with jQuery, React, Angular, etc. type code. Where you can still click view source and see what's going on. Where libs are pulled from CDNs etc.
2) 'Complex' saas web apps, where you need native functionality. This will be a huge bonus. I'm in this space. I would love to see my own application as a native app. The UI wins alone make it worth it!
It is just a technology to make things delivered through the web faster. And it is open. And no less secure than JS. So I think it's great.
Good technology does exactly what the creator wants. And if people don't like some of the things that get created with it, that is not a problem of the technology itself.
So people can do good things or bad things with it. But on the web, we have the freedom to choose where we go.
And if we don't like ads, for example, we should be aware that website creators still want money for their work, so maybe we should focus on and support a different funding model. I like the pay-what-you-want or donation model the most; Wikipedia shows that this is possible on a large scale...
Is this an old picture?
Helpful research keywords: Itanium RISC Alpha WAP Power EPIC Java ARM Pentium4 X.25
So in the future, when you visit a website they'll be able to, e.g., open windows, pop up unblockable modals, run WebGL, serve bytecode-loaded spam/ads, etc. The end user's option will be to block everything, or live with it.
I do not like this bold new world we're entering.
Add Lua to the browser, add Perl 6 to the browser, etc. There are plenty of decade old W3C specifications that never made it to the browser properly, like XSLT 2.0, XQuery 1.0, XForms, never mind the latest versions of the specs.
Heard about it on a podcast recently, haven't had a chance to try.
Thanks for nothing.
It started as a volunteer project, and some projections put savings at around 10% of the total budget after it becomes mandatory in April.
Probably world-changing, considering that even semi-technical folks can cook up tools to dig into things like this.
I know this tool was by a developer, but scrapinghub has web UI to make scrapers.
For developers and managers out there, do you prefer to build your own in-house scrapers or use Scrapy or tools like Mozenda instead? What about import.io and kimono?
I'm asking because a lot of developers seem to be adamant against using web scraping tools they didn't develop themselves. Which seems counterproductive, because you are going into technical debt for an already-solved problem.
So developers, what is the perfect web scraping tool you envision?
And it's always a fine balance between people who want to scrape Linkedin to spam people, others looking to do good with the data they scrape, and website owners who get aggressive and threatening when they realize they are getting scraped.
It seems like web scraping is a really shitty business to be in and nobody really wants to pay for it.
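For what it's worth, the "already solved problem" part is real: a minimal bespoke scraper is only a handful of lines (a sketch using node-fetch and cheerio; the URL and selector are hypothetical):

    import fetch from 'node-fetch';
    import * as cheerio from 'cheerio';

    // Fetch a page and pull out headline text with a CSS selector.
    async function scrapeHeadlines(url: string): Promise<string[]> {
      const html = await (await fetch(url)).text();
      const $ = cheerio.load(html);
      return $('h2.headline')                    // hypothetical selector
        .map((_, el) => $(el).text().trim())
        .get();
    }

    scrapeHeadlines('https://example.com/news').then(console.log);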
Lobbyists have to follow registration procedures, and their official interactions and contributions are posted to an official database that can be downloaded as bulk XML:
Could they lie? Sure, but in the basic analysis that I've done, they generally don't feel the need to... or rather, things that I would have thought lobbyists/causes would hide, they don't. Perhaps the consequences of getting caught (e.g. in an investigation that discovers a coverup) far outweigh the annoyance of filing the proper paperwork... having it recorded in an XML database that few people take the time to parse is probably enough obscurity for most situations.
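To give a flavor of how low the technical bar is once you bother to parse it, a sketch of walking such a bulk XML dump with Python's standard library (the file name and element/attribute names here are hypothetical, not the real schema):

    import xml.etree.ElementTree as ET

    # Hypothetical shape: <filings><filing client="..." amount="..."/>...</filings>
    tree = ET.parse("lobbying_disclosures.xml")
    for filing in tree.getroot().iter("filing"):
        print(filing.get("client"), filing.get("amount"))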
There's also the White House visitor database, which does have some outright omissions, but still contains valuable information if you know how to filter the columns:
But it's also a case (as it is with most data) where having some political knowledge is almost as important as being good at data-wrangling. For example, it's trivial to discover that Rahm Emanuel had few visitors despite his key role, so you'd have to be able to notice that and then take the extra step to find out his workaround:
And then there are the many bespoke systems and logs you can find if you do a little research. The FDA, for example, has a calendar of FDA officials' contacts with outside people...again, it might not contain everything but it's difficult enough to parse that being able to mine it (and having some domain knowledge) will still yield interesting insights: http://www.fda.gov/NewsEvents/MeetingsConferencesWorkshops/P...
There's also OIRA, which I haven't ever looked at but seems to have the same potential of finding underreported links if you have the patience to parse and text mine it: https://www.whitehouse.gov/omb/oira_0910_meetings/
And of course, there's just the good ol' FEC contributions database, which at least shows you individuals (and who they work for): https://github.com/datahoarder/fec_individual_donors
This is not to undermine what's described in the OP... but just to show how lucky you are if you're in the U.S. when it comes to dealing with official records. They don't contain everything, perhaps, but there's definitely enough out there (never mind what you can obtain through FOIA by being the first person to ask for things) to explore influence and politics without as many technical hurdles.
Web scraping is a really powerful tool for increasing transparency on the internet especially with how transient online data is.
My own project, Transparent, has similar goals.
We're happy to unban accounts when people give us reason to believe they will post only civil and substantive comments in the future. You're welcome to email firstname.lastname@example.org if that's the case.
I don't expect the public school to teach my kids any kind of career skills or path. Think they'll teach them Python or Swift? Even if they did, it'd be boring as heck and wouldn't happen until they're 16 years old. Conversely, I fully expect to start "outsourcing" my web dev work to my children when they are 13. My wife thinks it won't work. I think having them make $60 an hour while their friends make nothing will be a large deciding factor.... And by the time they're 16, I am expecting them to be taking classes at the local community college which is just a few miles from here.
"In numeracy and problem solving in technology-rich environments, the United States performed below the PIAAC international average. In numeracy, the U.S. average score was 12 points lower than the PIAAC international average score (257 versus 269, see figure 1-B), and in problem solving in technology-rich environments, the U.S. average score was 9 points lower than the international average (274 versus 283,see figure 1-C). Compared with the international average distributions for these skills, the United States had
" a smaller percentage at the top (10 versus 12 percent at Level 4/5 in numeracy, and 5 versus 8 percent at Level 3 in problem solving in technology-rich environments, see figures 2-B and 2-C), and
" a larger percentage at the bottom (28 versus 19 percent 6 in numeracy, and 64 versus 55 percent in problem solving in technology-rich environments at Level 1 and below)."
In trying to be as objective and self-aware as possible, it is clear to us that the homeschoolers/unschoolers we built a community with are vastly better prepared for adulthood, regardless of level of general intelligence. There is a reason the acceptance rate at Stanford is ~27% for homeschoolers vs. 5% for those who went to school. 
Regardless of the pursuit, it seems like our friends who went to school the whole time are stuck in this weird immature purgatory where they can't make decisions or stick to things.
For the most part, the unschoolers/homeschoolers are of similar demographics, from all different "walks of life", and yet they invariably avoid this issue of accepting adulthood.
Our thesis was always that school spoon feeds you, and you have to learn the pain of learning independently to be successful and learn a real growth mindset. But who knows. It is a complicated issue.
The study looked at basic technology tasks: things like using email, buying and returning items online, using a drop-down menu, naming a file on a computer or sending a text message.
Something to consider about your users.
I haven't met a single ambitious person in the US who dreams of teaching in a high school.
It's kind of sad that secondary ed isn't high-prestige in today's American culture, but it's not.
It used to be that a high school diploma meant something, and guaranteed a certain level of literacy and basic familiarity with mathematics and other useful basic skills. Now there are virtually no jobs one can get as only a high school graduate where the core requirement is literacy or basic math skills. The credential of a college education, or even just having spent some time in college, is the new high school diploma. Except whereas the public funds high school, it does not completely fund college.
And even though total government outlays to colleges have actually increased, admissions have increased even more, and spending on administrators has as well, so per-student costs not covered by taxpayers have gone through the roof.
> She offers a sample math problem from the test: You go to the store and there's a sale. Buy one, get the second half off. So if you buy two, how much do you pay?
> "High school-credentialed adults, they can't do this task on average," says Carr.
High school graduates can't do what is basically a middle school level math problem. It's no wonder employers don't want to hire them. As a society we spend hundreds of thousands of tax dollars on K-12 students as they pass through the school pipeline but when they graduate they aren't educated and they have very little to show for all that time, effort, and expenditure.
This is unquestionably a national tragedy that will haunt our country for decades to come. We've got a "lost generation" already with millennials who were vastly underemployed for several years after the big financial crash (which would be expected to have a lifetime impact on career and wealth development). And we're seeing that there's a new lost generation of young adults who have been ill served by the educational establishment.
Edit: even worse, HS education does a poor job of preparing students for college, leading to the double whammy of debt + dropping out of college without earning a degree.
You can either be getting a world-class education in the best high schools, or apparently a terrible one. I'm glad I went to a good high school. I never knew how good my education was until my first year of college, in my first writing class. Yeesh.
How many illegal immigrants do these countries have in their school systems? How many minorities who don't speak the main language at home? How many kids in Japan/Finland don't eat properly because their families can't afford food? Of course the American students' results are going to be worse on average.
> Americans who went to college and graduate school did well. They scored above their peers with similar degrees in other developed countries.
The problem is that the bottom half does badly. I think this is largely due to poverty, culture and low expectations rather than poorly resourced schools. Maybe it's also that it's so easy to get a job in America that the working classes really don't need to study hard.
Here's the thing, my son spends more time doing homework and studying for tests in elementary school than I ever did throughout my entire public education.
I have worked internationally. From my own anecdotal experience, I didn't see a difference in intelligence or ability to do the job between cultures.
This may sound grossly elitist, but democracy can be super scary sometimes.
"creativity is essential to successful civilizations" http://www.amazon.com/dp/1907794883
However, as an American citizen who went to public schools as a child/teenager and am now finishing up at a public state university, I'm inclined to say that the education system here is a complete wash.
It works for some people who fit the one-size mold that the system here seems to target, but there are a large number of children/teenagers/college-aged students for whom it does not.
That isn't to say one set of students is any more gifted than the other. Just that the approach to education is deeply flawed. It works for the type of student it is set up for, and has little to no appreciation for other types, let alone acknowledgment that they even exist. The approach needs a serious revolution in order for our country to have a successful academic system.
source: I am from California and the state university I currently attend and am almost graduated from is in CA as well.
Poor kids don't have the support nor the means to excel. They typically have to worry about other things, like earning money or working the farm or just trying to keep the family together any way they can. While other kids have computers, you didn't even have a desk to do your homework. Other kids get dinner; you go hungry. When your mom was sick it meant staying home to take care of her.
The potential that is lost to poverty is no different than the potential lost to Wars. Millions of people who could have changed the world never get a chance.
Way to measure stuff! How about using instant messaging, swiping and dragging stuff, taking pictures with a phone and sending them, and doing tasks with Siri/HG instead? We're talking about HS-level students, right? They seem to be measuring stuff for older folks :P
Maybe because the education model in the US (and elsewhere) relies heavily upon Ludditism.
Students' intelligence is measured on a single linear numeric scale based on paper-and-pencil examinations, and they are strictly limited to using computer technology (calculators) that was invented 40 years ago, even though they possess in their pockets computers that are millions (billions?) of times more powerful.
"Never memorize something that you can look up."
He addresses the problem of education and the ways it is being done. One really important remark he made was that one of the biggest differences between children of well-off families and those less fortunate is the availability of books: kids of comfortable families have access to more books from the time they are very young, contrary to their peers. The more important remark is the follow-up: libraries tend to offset the impact of economic differences.
It's an interesting talk.
It doesn't even seem answerable. The closest I can come to answering that question is "75%".
Kinda explains why South Korea ranks #1 for the most innovative economy. There's no hazing of nerds, although bullying and suicide due to over-studying are a definite problem... it just follows the trend that American kids are seriously being left behind by other countries.
"For young adults with a high school diploma or less, things did not look so good. These Americans performed significantly worse than those in other countries with the same education level."
Doesn't that just mean that smart kids in America are more likely to get a higher education than in many other countries?
See the details on race/social rank and performance of high schoolers that went on to college vs the ones that stopped after hs to fully understand the data.
The US has layers: you have a first world country, a second world country and a third world country intermixed, skewing all these data sets. Well-off whites perform as well in the US as everywhere else. Asians too. No need for homeschooling or other panic modes.
If you're poor, you're fucked. Just like anywhere else.
Solution: Khan Academy. Let them issue diplomas.
However, I have to say that when I was in an MBA course in the US, I didn't understand why the teacher needed to teach the students how to solve a system of linear equations in two unknowns. In China that is an elementary school question, below grade 5. I wonder how they graduated from high school and went on to study business.
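(For reference, the kind of problem meant: given x + y = 5 and x - y = 1, adding the two equations gives 2x = 6, so x = 3 and y = 2.)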
"Buy one, get the second half off."
Is that meant as: the second unit you buy, you get half off? I don't know if this is a common way to phrase a discount like that in the US.
To put it politely, the US has much greater demographic diversity, with its associated implications for IQ and consequently, all associated g factors.
Of course the bulk of US graduates are going to be unskilled... that's just how the bell curve works, and the US has the resources to support lots of unskilled laborers.
It's the top of the bell curve that matters. How do the smartest engineers and scientists in the US compare to their counterparts in other countries?
Eventually we'll have a basic income so if people don't want to learn, they won't have to.
Here's an example of the sort of task you had to complete (http://nces.ed.gov/pubs2016/2016039.pdf):
> Level 3: Meeting rooms (Item ID: U02) Difficulty score: 346. This task involves managing requests to reserve a meeting room on a particular date using a reservation system. Upon discovering that one of the reservation requests cannot be accommodated, the test-taker has to send an e-mail message declining the request. Successfully completing the task involves taking into account multiple constraints (e.g., the number of rooms available and existing reservations). Impasses exist, as the initial constraints generate a conflict (one of the demands for a room reservation cannot be satisfied). The impasse has to be resolved by initiating a new sub-goal, i.e., issuing a standard message to decline one of the requests. Two applications are present in the environment: an e-mail interface with a number of e-mails stored in an inbox containing the room reservation requests, and a web-based reservation tool that allows the user to assign rooms to meetings at certain times. The item requires the test-taker to [u]se information from a novel web application and several e-mail messages, establish and apply criteria to solve a scheduling problem where an impasse must be resolved, and communicate the outcome. The task involves multiple applications, a large number of steps, a built-in impasse, and the discovery and use of ad hoc commands in a novel environment. The test-taker has to establish a plan and monitor its implementation in order to minimize the number of conflicts. In addition, the test-taker has to transfer information from one application (e-mail) to another (the room-reservation tool).
Basically, users are given an intentionally badly designed user interface on which they receive no training, and a task that is impossible to accomplish within the obvious constraints of the interface, and are asked to accomplish a goal. It simulates the experience of being a low-paid customer service rep in the third world using crappy software, to see whether you can handle it or not. If one has common sense, intelligence, and a sense of valuing their own time, they will recognize this task as useless BS and refuse to cooperate further in the test.
Americans as a society have clearly traded away good-quality education for the job security of teachers.
Stossel's Stupid in America : https://www.youtube.com/watch?v=Bx4pN-aiofw
Maybe we are barking up the wrong tree; maybe school isn't as important for a country as we take for granted.
TFA only mentions Gaussian blur, but a Gaussian blur is just a moving average, with "closer" pixels being valued higher, plus a smooth falloff. When you replace each value with an average of its neighborhood, you "soften" the transitions.
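A sketch of that idea in Python/NumPy, with a made-up one-row "image"; the kernel is exactly the weighted moving average with smooth falloff described above:

    import numpy as np

    def gaussian_kernel(radius, sigma):
        # Weights fall off smoothly with distance; normalize so that the
        # "average" preserves overall brightness.
        x = np.arange(-radius, radius + 1)
        k = np.exp(-(x ** 2) / (2 * sigma ** 2))
        return k / k.sum()

    row = np.array([0.0, 0.0, 0.0, 255.0, 255.0, 255.0])  # a hard edge
    blurred = np.convolve(row, gaussian_kernel(2, 1.0), mode="same")
    # The sharp 0 -> 255 transition is now spread ("softened") over several pixels.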
Like open this image in your browser or in your favorite image viewer and scale it down to 50%: http://www.4p8.com/eric.brasseur/gamma-1.0-or-2.2.png
Don't forget scikit-image and scipy.ndimage too.
I once worked on a UI where the users wanted to capture a screenshot of the current page.
Because color toner is more expensive they also wanted the option to print grey scale. I'm pretty terrible at working in 2D space but a quick Google search let me know that the conversion to grey scale involved averaging the RGB values for each pixel.
Unfortunately, the coloring of the UI was more dark than light, so the resulting greyscale image was still black-toner intensive. So we provided an additional option to invert black and white.
To make it work a second transform was applied to each pixel that reversed the pixel value from upper to lower bound (or vice versa depending on how you look at it).
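A sketch of the two transforms as described (assuming 8-bit RGB channels; this is the idea, not the actual production code):

    def grey_inverted(pixels):
        """Average RGB to greyscale, then invert so output trends toward white."""
        out = []
        for r, g, b in pixels:
            grey = (r + g + b) // 3  # naive average; ignores perceptual weighting
            inv = 255 - grey         # reflect the value across the 0..255 range
            out.append((inv, inv, inv))
        return out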
The result was an output that trended toward white instead of black. The output looked surprisingly good and saved on toner so the users could print many screen captures without worrying about wasting resources.
For the business, it resulted in a cost and resource savings. For users, picking the resulting output provided better results that were easier to understand. From a development perspective, the implementation wasn't difficult at all to add. So, win-win-win.
What surprised me was how easy these transforms were to apply. It's a bit CPU intensive on high resolution images but it's not terribly difficult to come up with good results.
It would be awesome to see some more examples of algorithms used for image processing. So much material covers generic algorithms and data structures that come with the typical CS degree.
It would be much more interesting to see algorithms that can be used in practice. For example, how to scale images, implement blur, color correction, calculate HSL, etc...
Libraries are great but these concepts are simple enough that they don't require 'high science'.
The article mentions a curiosity related to how edge detection works. I'd assume that you select a color and exclude anything that falls outside a pre-determined or calculated threshold. For instance, take a color and do frequency analysis of colors above-below that value by a certain amount. Make multiple passes testing upper and lower bounds.
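To make that guess concrete, a sketch of the thresholding pass (my reading of the idea above; note that gradient-based edge detectors such as Sobel actually work differently):

    def color_mask(pixels, target, threshold):
        # True where every channel is within `threshold` of the target color.
        return [
            all(abs(c - t) <= threshold for c, t in zip(px, target))
            for px in pixels
        ]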
A full color image @ 24 bit (8R 8G 8B) will take a max of 24 passes and will likely have logarithmic runtime cost if implemented using a divide-and-conquer algorithm.
Things like blur and lossy compression sound a hell of a lot more interesting because they have to factor in adjacency.
I think creativity is like a well, and when you do creative work it's like drawing that water out. If you use too much water one day, the well runs dry. You have to wait for the groundwater to fill it up again.
Not only did I begin viewing creativity as a limited resource I create and have access to over time, but I noticed that some activities, like reading about science, listening to music, and walking around town actually increase the rate at which the well fills up.
So now I have made it a daily habit of doing things that inspire me, and I also draw from the well daily like the author said - but I'm more careful not to creatively 'overdo it' and leave myself unable to be creative the next day.
Viewing it this way has helped a lot, for all the same benefits the author listed. I'm in a rhythm where I don't feel I need a break on the weekend; I just still have energy.
Yet for the potentially even more complex range that is different people, it amazes me that so much of the advice is didactic - we all need 8 hours sleep, 8 glasses of water, and 8 hours of work with breaks is optimal.
The closest I get to advice is 'learn your body and what works for you'. Thanks to the OP for sharing what works for him.
The problem I am running into now is what do I do with my spare time? All my hobbies are computer based (video games and Raspberry Pi projects) but I am trying to minimize my screen time in my off hours. This will get better in the spring and summer as the weather gets better but during winter on the Oregon Coast going outside is hit or miss.
And I hear you about not being able to go to bed until I solve a problem I am stuck on, that drives me crazy.
This article reminded me of my previous workplace (about 7 years ago) where my manager discouraged engineers from working for more than 6 hours a day. He recommended 4 hours of work per day and requested us not to exceed 6 hours of work per day. He believed working fewer hours a day would lead to higher code quality, fewer mistakes and more robust software.
He never went around the floor ensuring that engineers did not exceed 6 hours of work a day, and some engineers did exceed it; however, in my opinion, his team was the most motivated team on the floor.
For some projects it's perfectly fine, but some tasks can only be done if you focus on them for a large amount of time, working obsessively until you reach a milestone.
The greatest work I have ever done was always done when I retreated like a monk for several weeks, cutting myself off from the whole world and working almost non-stop on the task until I made a significant breakthrough.
Then I go back to the living and share the fruits of my work, and of course take a well-deserved rest for several days.
The trap most people fall into is confusing being active with working.
But I agree that working less than 8h/day could really be more productive. I also liked the "less stuck while coding" point, as "...it is sometimes hard to go to bed without solving some unknown issue, and you don't want to stop coding in the middle of it..." so maybe forcing yourself to stop could be a solution.
Anyway, I would really like to work 4 or 5 hours a day while keeping holidays and weekends free from work, and I think this can only be achieved if you can pay for your living with products of your own, such as your apps, and not by freelancing (I am a freelancer and I know it!).
But I enjoyed the idea behind the article and I will try to achieve it one day.
A few points are raised in the post:
1. If you only work 3 hours, you're less tempted to go on twitter/facebook/hacker news.
True - but that's really a question of discipline, work environment and how excited you are about what you're working on. It's perfectly possible to perform for 10 hours straight without distractions; just make sure to take an occasional break for physical health.
2. Better prioritization.
Treating your time as a scarce resource helps focus on the core features. But your time is a scarce resource even if you work 12 hours a day. Programmers are in shortage. They cost a lot. And the time you're spending on building your own apps could have been spent freelancing on someone else's apps. Always stick a dollar figure on your working hours, even if you're working on your own projects. You should always prioritize your tasks, and always consider paying for something that might save you development time (better computer, better IDE, SaaS solutions, etc.).
3. Taking a long break can help you solve a problem you're stuck on.
Personally, I find that taking a short walk, rubber duck debugging or just changing to a different task for a while does the same. If I'm stuck on something, I don't need to stop working on it until tomorrow. I just need an hour or two away from it.
What works for me is having a baseline of 3 or 4 hours of daily work, and not imposing any hard limits when I want or need to do extra hours. This works out great, because I have no excuses not to do the boring routine work as it's just a few hours, but I also have the liberty of doing obsessive 10h sessions when I'm trying to solve a tough problem or when I'm working on something fun.
I usually work 7 days a week, but invariably a couple days a week I only work an hour, checking email and replying to people.
The work I do is of better quality, I'm happier, and I easily could work at this pace until the day I die.
Sadly, it isn't always possible.
What I hate about the corporate workplace is that it doesn't accept any kind of rhythm but treats you like a machine that performs exactly the same at all times. Nature is built around seasons and so are humans. They are not machines.
I would much prefer to have a time sheet where I can do my 40 hours whenever I feel like it.
I actually had similar routine while at school, but it was 6 hours a day total. 3 hours in the evening, usually just before I went to sleep, might be 19-22, or 21-24 and 3 hours in the morning when I woke up and continued for ~3 hours and then left for lectures.
I started doing this because I realized that I am no longer capable of pulling all-nighters. And it worked surprisingly well :-)
'everyday' is an adjective
I would guess that, if the OP had a competitor, then the OP would be easily forced out of the market if that competitor worked 4 hours a day :)
Edit: I usually do three blocks of three hours each and one two hour block each week. I find three hours perfect to tackle a problem, and a good chunk to be able to reflect upon afterwards.
EDIT: I got downvoted. Come up with whatever standard of productivity you want (ANY standard that you want) and adduce a single human who in 16 years times 90 minutes per day accomplished more than I can find a counter-example of someone doing in the same field in 1 year. 1 year of 24 hours a day strictly dominates 16 years of 90 minutes per day, and you cannot find a single counterexample in any field from any era of humanity. Go ahead and try.
Oh, and by the way, in 1905 Einstein published 1 paper on the photoelectric effect, for which he won his only Nobel Prize, 1 paper on Brownian motion which convinced the only leading anti-atomic theorist of the existence of atoms, 1 paper on a little subject that "reconciles Maxwell's equations for electricity and magnetism with the laws of mechanics by introducing major changes to mechanics close to the speed of light. This later became known as Einstein's special theory of relativity", and 1 paper on mass-energy equivalence, which might have remained obscure if he hadn't worked it into a catchy little ditty referring to an "mc". You might have heard of it? E = mc^2? Well, a hundred and ten years later all the physicists are still laying their beats on top.
Your turn. Point to someone who did as much in 16 years by working just 90 minutes per day.
Closer to our own field, Instagram was sold for $1 billion about a year after its founding date, to Facebook. Point out anyone who built $1 billion in value over 16 years working just 90 minutes per day.
Another good example of this is Trenki's software renderer for the GPX2, which implements a shader architecture if memory serves. I haven't looked at it for many years, but I remember it being a useful resource when learning this stuff.
Other useful resources are, of course, Michael Abrash's Graphics Programming Black Book (despite its age, it is still a great read filled with useful information) and, for a really deep dive into the graphics pipeline, ryg's (of Farbrausch fame) A Trip Through the Graphics Pipeline.
The articles describing the operation are much more interesting than the code itself.
It's just an inefficient triangle rasterizer. All it does is loop over the pixels in a rectangle covering a triangle, and for each pixel inside it calls a "shader" function. All the beef is in these 40 lines.
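For the curious, the core loop amounts to something like this sketch (the edge-function/barycentric formulation and the names are mine, not the repo's):

    def edge(a, b, p):
        # Signed area test: positive if p lies to the left of the edge a->b.
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

    def rasterize(v0, v1, v2, shader):
        # Loop over the bounding rectangle; call the "shader" for inside pixels.
        # (Assumes a non-degenerate triangle with counter-clockwise winding.)
        xs = [v[0] for v in (v0, v1, v2)]
        ys = [v[1] for v in (v0, v1, v2)]
        area = edge(v0, v1, v2)  # twice the triangle's signed area
        for y in range(min(ys), max(ys) + 1):
            for x in range(min(xs), max(xs) + 1):
                w0 = edge(v1, v2, (x, y))
                w1 = edge(v2, v0, (x, y))
                w2 = edge(v0, v1, (x, y))
                if w0 >= 0 and w1 >= 0 and w2 >= 0:  # pixel inside the triangle
                    shader(x, y, w0 / area, w1 / area, w2 / area)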
I don't know how they've done the texturing in all those pretty pictures (it's in the "shaders", not included here), but they don't calculate the partial derivatives required for correct, mipmapped texturing. Simple non-mipmapped perspective correct texture mapping can be computed in the shaders, with the usual caveats.
OpenGL is much more than a rasterizer, there's texturing, depth-stencil operations, blending, compute shaders and efficient memory management.
edit: Someone in reddit pointed out that this is a translation of a Russian language course. The original Russian version looks to be a bit longer than the English translation (but I don't read Russian, so I can't tell if it is better): https://habrahabr.ru/post/249467/
edit: Now I see it doesn't implement the OpenGL API though, the goals are obviously different.
Thank you for posting this online.
Edit: Also this: https://graphics.stanford.edu/courses/cs248-08/scan/scan1.ht...
What's going on there with the template<> template<> ?
In particular, I'm very curious how tile-based deferred rendering would interact (positively!) with a Vulkan software rendering implementation by keeping all tile buffers for a render pass in the on-chip cache of a modern Intel CPU. It seems like Vulkan provides a better API for a software renderer than OpenGL for that reason, and I'd like to see that confirmed one way or the other.
1. Universal basic income: Everyone gets $10K per year.
2. Negative income tax: Everyone gets $15K per year, phased out linearly across an income of $60K (i.e. if you earn $0 you get $15K, if you earn $20K you get $10K, if you earn $40K you get $5K, and if you earn $60K then the negative income tax is fully phased out).
Why is 1 preferable to 2? Is it just that it's less susceptible to tax fraud? Note that the amount that an unemployed person gets in UBI is less, because the same amount of money is being distributed to more people, even millionaires.
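To make the comparison concrete, the two schedules side by side (same figures as the example above):

    def ubi(income):
        return 10_000  # flat $10K grant, regardless of income

    def nit(income):
        # $15K grant, phased out linearly to zero at $60K of earned income
        return max(0.0, 15_000 * (1 - income / 60_000))

    for inc in (0, 20_000, 40_000, 60_000):
        print(inc, ubi(inc), nit(inc))  # nit: 15000, 10000, 5000, 0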
The biggest problem I see is giving people cash. Why not instead give each adult a voucher for housing (just barely adequate for cheap housing) and each person $5/day food stamps (or something similar), regardless of income.
Everyone gets a lower stress level this way. The risk of being homeless or hungry go to near zero. It doesn't matter if you're unemployable, or starting a business, or between jobs. You know you're going to be okay.
While giving people cash seems to have the same effect, it doesn't. At least here in the US, money problems are (mostly) a money-management problem. Many people use any cash they get to pay for whatever seems the most pressing at the moment - whether it's rent or a big screen TV. People buy the TV when they have money and rent isn't due yet, then have a little unexpected expense and can't pay rent. The stress level hurts them, hurts their families, causes increased expenses (e.g. payday loan).
My parents were like this, in six-figure-income years and in dead-broke years. It hurt us quite a bit. And "loaning" them money NEVER helped - they'd pay the mortgage today and then buy the TV when the next payday came in. And there's LOTS of people like this, which is why there's a payday loan on every corner.
Now, we've taken care of my mom by covering her housing and utilities directly. And her stress level is down a lot. It works great. This would have also helped me a lot when I was a student. And it means more startups - since all startups would be "ramen profitable" by default. And in the US, we could fund this with about a 10% tax (probably less if we took some funds out of SS, disability, section 8, etc.).
But one very important point that people don't know is that giving basic income like that, even if abused, reduces crime. When you have no money and are desperate, you're more likely to start doing illegal things. Economically speaking, when you start thinking about the cost of more crime, you realize that it's a pretty good deal to give basic income.
Not everyone here agrees with what I just said - most people don't even know that - but I think that in itself is a great reason.
As an Ontario resident it'll be interesting to see if this goes into effect and what the long term results are.
Can we get another province to try a libertarian approach and we can compare notes in about 25 to 50 years?
Well, work isn't just about producing goods. We work to solve problems. And there are plenty of problems and challenges that machines won't solve for us ranging from cancer and dementia to clean energy to global warming. Not to mention some nice-to-haves for the long-term like space colonization, life extension or nanotechnology.
Saying that humans should not need to work is like saying this is it. We're done here. This is the world we want.
...but there's not political will to shrink the government complexity and capture this savings, which means mincome in Ontario is the fiscal policy equivalent of this xkcd comic:
*Annual interest rate of 1200%
Which would be a shame since it's a good idea and necessary due to technological advances.
From the Budget:
> One area of research that will inform the path to comprehensive reform will be the evaluation of a Basic Income pilot. The pilot project will test a growing view at home and abroad that a basic income could build on the success of minimum wage policies and increases in child benefits by providing more consistent and predictable support in the context of today's dynamic labour market. The pilot would also test whether a basic income would provide a more efficient way of delivering income support, strengthen the attachment to the labour force, and achieve savings in other areas, such as health care and housing supports. The government will work with communities, researchers and other stakeholders in 2016 to determine how best to design and implement a Basic Income pilot.
I find it quite interesting that they are presupposing a Universal Basic Income will strengthen attachment to the labor force instead of decreasing it. Human nature suggests otherwise.
One thing I worry about is that this could cause massive inflation and a recession (stagflation) as people drop out of low-wage work in droves. What percent of society will decide to live on solely the basic income if it's high enough to pay for basic expenses? Work is virtuous and builds character. Idle hands are the devil's playthings.
And what would it do to our democracy if a huge portion of the population is living on someone else's dime and not even trying to join the workforce. Isn't it fair to call them children? What insight could they possibly have in the democratic process except to vote their own immediate monetary interests? I believe in universal suffrage, which is why this is a conundrum for me.
Eventually, there won't be enough people paying back into the system and the whole thing will collapse. Before this happens, taxes will continue to be raised on people in lower income brackets.
The politicians love it though. It creates an instant voter base. Why would a person, receiving free money, vote for someone that will take it away?
Because of things like this, I wish we had laws in place that all voters had to at least 1) work some sort of job (it doesn't matter what it is) and 2) show proof that they paid income taxes.
I do think it's misguided benevolence, though. I hope that largely removing adversity and creating dependence aren't viewed as trivial changes here. People need to be challenged. We need to consider the more subtle ramifications of this. Humans have always labored. Work is in our blood.
* We are not against working on Python 3 - it just happens that there is a lot of interest these days in things like numerics, warmup improvements and C extensions that we want to focus on.
* We essentially exhausted the Py3k pot. I personally think it delivered what it promised, despite being short of the funding goals. It's crazy what level of expectations people have with crowdfunding - it's really difficult to find someone to deliver a big, multi-year project for 60k, even outside the States.
* We're closely watching py3k adoption - since we're always a few releases behind, we'll probably do a 3.5 after CPython 3.6 is out, but it all depends on good will of volunteers, who I have no control over.
* Money can easily change focus, but it would need to be a significant enough amount to actually commit to delivering a fast and compliant PyPy 3.5, not 5 or 10k
I hope this clears some things up; these opinions are my own and don't necessarily represent everybody in the PyPy project.
EDIT: there is just over 8k USD left in the py3k pot. At $60 USD/h (official SFC rate) it's 146h. That's not enough to even fix the inefficiencies in the current version. We hope to use it to get to version 3.3
I understand that the major sponsors of PyPy are interested in Python 2.7, but not updating it for 1.5 years makes it seem like they have abandoned it.
I get a free speedup for my non-numpy/scipy projects and that's flipping awesome. The 2.7 and 3.2 support is just fine for my needs. Your focus on the C API emulation seems totally appropriate to me.
If numpy and friends worked well on pypy, IMO there'd be little reason left to use CPython.
I mainly use Scikit Learn, theano, numpy and pandas; is PyPy able to work with the above, and likely to give any speedups at this stage?
It's about doing the things that are "worth doing". And about doing them yourself, instead of outsourcing them to someone else. Take responsibility for doing the things that are difficult but worth doing.
Things that people outsource:
Gym - people outsource their Gym attendance to "the experts", Personal Trainers.
Their health - to "the professionals", be those Doctors, vitamin salesmen, or chiropractors.
Music - to professional musicians.
Here, I'll try to edit/snip/boil it down into something easier to read. Money-quote is at the very end. (Original text at http://www.online-literature.com/chesterton/wrong-with-the-w... )
All the educational reformers did was to ask what was being done to boys and then go and do it to girls [...] "Would you go back to the elegant early Victorian female, with ringlets and smelling-bottle, doing a little in water colors, dabbling a little in Italian, playing a little on the harp, writing in vulgar albums and painting on senseless screens? Do you prefer that?" To which I answer, "Emphatically, yes." [...]
There was a time when you and I and all of us were all very close to God; so that even now the color of a pebble (or a paint), the smell of a flower (or a firework), comes to our hearts with a kind of authority and certainty; as if they were fragments of a muddled message, or features of a forgotten face.
To pour that fiery simplicity upon the whole of life is the only real aim of education; [...] To smatter the tongues of men and angels, to dabble in the dreadful sciences, to juggle with pillars and pyramids and toss up the planets like balls, this is that inner audacity and indifference which the human soul, like a conjurer catching oranges, must keep up forever.
This is that insanely frivolous thing we call sanity. And the elegant female, drooping her ringlets over her water-colors, knew it and acted on it. She was juggling with frantic and flaming suns. She was maintaining the bold equilibrium of inferiorities which is the most mysterious of superiorities and perhaps the most unattainable. She was maintaining the prime truth of woman, the universal mother: that if a thing is worth doing, it is worth doing badly.
1) "Some things you should care enough about to do badly." - Start as a hobby
2) If a thing is worth doing, it is worth doing badly. - You work on it some more but you are still mediocre at it
3) "If it is worth doing, it is worth overdoing." - You work at it, again and again and you have a ton of iterations
But you get tired and you question what you are working on; #4, #5 and #6 creep into your head.
4) "If a thing is not worth doing at all, it's not worth doing well."
5) "If doing something isn't worth the effort, doing it well won't fix that."
6) "There is nothing so useless as doing efficiently that which should not be done at all."
I see 4, 5 and 6 a lot in "features". Techs spend too much time on features that no one really cares about. It's the same for crappy movies... lots of talented people work on really crappy work, and 99% of the time it's not their own passion project. In today's work world, we are forced to do great work on really vapid stuff.
I see 1, 2 and 3 in really passionate people, and what the world gets are iterations, variety and meaningful work. The world is better for it - scientists, entrepreneurs and artists do this. Many variations and angles of an idea. A lot of times, the body of work becomes meaningful.
If something is worth doing, it may be worth doing even badly, rather than insisting it must be done well, resulting in paralyzing inaction.
If you have to worry about how well you do it, it's not worth it for you to do.
I think this applies equally well to the things you love doing - raising kids, making art - as it does to clearing blockers and doing-things-what-need-doing.
Note that the "have to" is a key part - most people will and/or should actually worry about how well they do - the difference is whether you're required to worry.
This definitely doesn't apply to all situations - it misses the entire field of "things you're good at" - but I found it decently insightful for my personal life.
P.S. I love this guy's wit. I came into his works when I researched Catholicism and because of some references from Neil Gaiman. But I would have never thought that this writer would ever be in HN.
Does such a website exist?
Chesterton is often referred to as the "prince of paradox."...
"Whenever possible Chesterton made his points with popular sayings, proverbs, allegoriesfirst carefully turning them inside out."
This relates to weight loss and the dependency on weight-loss drugs. I see ads all the time for new weight-loss supplements, or new workout machines, or fat-burning belts, stating that this will burn fat faster, with less effort. Society now thinks that in order to look fit you must take a fat-burning pill, or some crazy concoction, to actually lose weight. "We have left the things worth doing to others, on the poor excuse that others might be able to do them better." But in reality it just takes a healthy lifestyle to be fit, so if losing weight is worth doing, it is worth doing badly.
Are there any quotes that you know of where you feel similarly? Where you have transplanted the quote into your own context, rather than the original context, and felt better about it as a result?
"If it is worth doing, it is worth overdoing."
... and you know he would say that.
"Anything worth doing is worth doing poorly until you learn to do it well."
But the two quotes appear unrelated.
Example: I never baked a cake (ok that's not a large project ;p). I will bake a cake. I know it is most likely not going to turn out well. The result of my work might even be garbage that I have to dispose of. I will bake a bad cake. But the next cake will be better, I learned something from my mistakes.
Were doing it badly not worth doing at all, then the thing is not worth doing - only doing it well is.
Maybe it's that he decided to use his political clout to pick healthcare as his signature in American history rather than waging war against the NSA, but either way it saddens me to have campaigned for someone who has empowered a surveillance state instead of fighting against it.
Liberty literally means "freedom from arbitrary or despotic government or control", and freedom in the information age means the liberty to communicate and store information. Anything to compromise that makes us all more vulnerable to control in all parts of our lives, not just those stored in zeros and ones. I believe America can be "Land of the free, home of the brave", but not without digital liberty.
Mr. Obama, you are the one who upset the balance with secret, dragnet surveillance of nearly all communications. That's not the bargain the public has had with law enforcement for the last 300 years. Widespread, end-to-end encryption is simply the natural reaction to the arms race you started. We would have never come to this point if the government had kept surveillance within court-supervised bounds.
FTA: "then that I think does not strike the kind of balance that we have lived with for 200, 300 years."
200 years ago, it was not possible to cast a dragnet and catch everyone who was doing Something Bad(tm). The government had to get a warrant to open mail; today, the NSA can sift through billions of messages (metadata, they say) in a second.
Even considering US Mail: you could not keep track of who was sending whom mail, at scale. But today, every letter that is mailed has its front and back scanned (for reading the address); but more importantly, these images are saved for future use.
All of this is possible thanks to technology. And when the balance was tilting in their favor, the Establishment was quite happy. But when the balance tilts the other way, suddenly they're crying like a spoiled child whose toys have been taken away.
You can't just throw tantrums when things don't go your way. If the technology permits E2E encryption, they'll just have to live with it and find other ways to catch criminals.
But if companies were to implement the same sorts of impenetrable encryption, on every device, all the way down to the corporate desktop, in a way that not even the company executives themselves can read the email of their own employees, then lots of regulations the government applies to companies would be mooted.
Taken to the extreme, if all communication is digital, and 100% impregnable, and people maintain good OpSec, then it will be hard to impossible to pursue lawsuits or regulatory investigations into malfeasance, because there will be no paper trail.
The end result of going full tilt on crypto is cryptoanarchy. This was pretty much well argued in the 90s among the cypherpunks community. Most of the libertarians and Objectivists were salivating over how strong crypto protocols would end fiat currency, end taxation, end regulation, and so on.
So how far as a society are we willing to take this? Does it just extend to private data? Does it extend to transactions? To payments you make for things? To transfers of money? To business transactions? Will Democracy be able to audit nothing of the interactions of citizens or our institutions in the future?
You don't have to agree with Obama's position to see that cryptoanarchy and Democracy are on a collision course, and it makes sense to discuss the possibilities openly without just plugging your ears and taking an absolutist position that demonizes anyone who disagrees.
Privacy will die, not because it's undesirable or a bad idea, it'll die like copyright and DRM - because it's technically and economically easy to defeat, and people will be motivated to do so. What's more, those people will be hard to catch - after all, the drones will be communicating over very strongly encrypted channels.
[Please refute - I genuinely have nightmares about this future]
What this essentially means is a move to Android for criminals, terrorists, and anyone that wants privacy (since Android allows installation of apps that have not been approved by a gatekeeper bound by the laws of the countries it operates in, whereas iOS does not by default). Open source Android apps with strong encryption will be built in countries without such laws. All of this will likely be the downfall of a few lazy drug dealers that don't want to give up their iPhones, but since the cat is out of the bag and apps with end-to-end encryption already exist and will continue to be built, the governments making these moves will not actually catch any reasonably intelligent terrorists that install and use these apps. They will, however, gain exactly what they want: the ability to conduct surveillance on most people in the world whenever they want.
"I'm the President of the United States of America and if I say that there must be math that gives me what I want. If you don't invent it, you are disengaged."
"And what we realized was that we could potentially build a SWAT team, a world-class technology office inside of the government that was helping across agencies. Weve dubbed that the U.S. Digital Services."
Yes, that's a great analogy! Go with that!
"And this was a little embarrassing for me because I was the cool, early adaptor President."
Cooler and adepter!
It'd still pose an issue for when people are accessing their data; however, depending on your setup this will be very hard to prove from a third party's perspective, given ample security precautions (i.e. using an offshore VPN all the time, data never fully accessed locally).
For larger tech companies, I'd assume the setup of a new company structure offshore, with sensitive data handling "outsourced" offshore too (i.e. parts of the EU, other OECD countries).
It doesn't matter who says it.
"You don't need a gun"
Oddly enough both are classified as munitions.
We need a grassroots movement here. I know we have the EFF and Apple and a slew of others. But we all need to be writing about this to have our voices heard.
I am in between projects and writing about this extensively online. Would anyone like to work together to organize facts and promote discussion in a concerted manner? The goal would be a) to make a concise message that is understandable by a non-techie, b) back it up with facts and primary source, and c) seek out public figures who can share our message. I have a running summary of events here which I will put in a github repo 
The task of educating the public and our government on encryption may be even harder than you think. Everyone needs to understand the issues at stake in order to make up their own mind, and it could take years to educate the general public about encryption.
We can expect to continue seeing terrorists attacks in the news regardless of what laws Congress passes. This much we know, and this is, of course, out of our control. However, uninformed law enforcement will blame encryption and they will blame technologists for not allowing them to catch these attacks. Unless all law enforcement truly understands the technology, then they will always blame citizens for fighting for their right to privacy.
Of course, we know this is about security vs. security, not security vs. privacy. Privacy is a secondary focus for many. But law enforcement believes our primary focus is privacy.
My primary concern is that law enforcement does not know how to keep us safe in a world where criminals can sometimes communicate with smartphones across the world in a way that cannot be monitored with a warrant. Regardless of whether Cyrus Vance, James Comey, Loretta Lynch or Obama truly understand this or if they are putting up a smoke screen, the fact is that law enforcement across the country trust them the most. Non-technologists will be more moved to understand the security and economic implications of forcing backdoors upon Americans and US phone manufacturers. For the most part, they are not going to see eye to eye with us on privacy concerns. Lindsey Graham has already changed his view. We can share facts with others and let them make up their own minds.
Some damage is already done. The fact that Vance and Comey have been fighting this for so long is going to make it difficult for them to go back and convince officers of the law that technologists were right, and they were wrong. Many officers will continue to feel snubbed by the tech community.
If we're to advance to the next level of our mutually trusting society, we must all understand encryption technology and its implications. To the extent that we do not all understand encryption, and the ease of which it can be used regardless of government mandates, we will continue infighting and not progress together.
The idea that technologists feel the issue is black and white or absolutist is absolutely incorrect :-). Math is black and white, but our public safety and security is not. It is a complex equation that must be balanced, and we have that focus just as President Obama does. The difference between us and the DOJ is we understand a few more pieces to the equation. I'm open to the idea there are pieces that technologists do not know about, and I encourage the administration to share these details with us. Until all the details are on the table, we won't be able to come up with a solution together. Let's focus on discussing and sharing the variables and their weights. Given information, people can make up their own minds.
If backdoor laws are passed, it's not the end of the world, but our industry will suffer while non-technologists struggle to understand why terrorist attacks continue to occur. It'll be another 4-8 years until we can dig ourselves out of that hole. Let's keep the great country we have and bring facts to the table for open discussion.
[with great sadness]: how low have we fallen if we're seriously discussing this instead of grabbing pitchforks.
"And we agree on that, because we recognize that just like all of our other rights ... that there are going to be some constraints we impose so we are safe, secure and can live in a civilized society."
I'm not suggesting that this means POTUS and the government have the right answers at the moment. Despite that, we can't ignore the important role law enforcement plays in society, the requirements in support of their role, and the complexities surrounding the right to privacy. We need people advocating for the right balance, not just getting each other frustrated.
OP may have worries other than US law enforcement being from another country. This is one of the /many/ complexities in this space.
It's not like your communications are being monitored by random government employees at will for no reason. There's a specific safeguard here for personal privacy, and that's through a court order.
If the government is monitoring your communications, then there's a pretty damned good reason, as determined by a judge.
Sure, you might say that final safeguard is not enough or is susceptible to corruption, but once you do that, you cease to be able to function in a society.
Judicial review is the "trust zone" that citizens are expected to have on society. If you don't trust judicial review, then there's no hope left for you to function in a normal society filled with other people. If you don't have such a "trust zone" in government, then you are basically forced to build your own army to protect you, since you don't trust government.
Since having your personal army is stupid, your best option is to make sure judicial review cannot be corrupted.
My fellow undergrad tech support doofuses and I knew that Prof Brooks was a god and thus walked on eggshells when we were around him...which we quickly learned was totally unnecessary. He was incredibly friendly, gracious, and encouraging. A true Tar Heel.
Congrats to Prof Brooks.
Knowing your limits is one thing, but understanding why/how they are being manipulated by outside forces (e.g. overestimating ability) is another. And how to counter those forces is also included in these pages.
Thanks for the sanity and the well-managed-project advice, Fred!
It is excellent on what makes programming so great: http://henrikwarne.com/2012/06/02/why-i-love-coding/
"Here's Fred Brooks, this giant. I mean, he made IBM, adviser to presidents, all this stuff. And this lady is looking for directions, so he walks with her out to the street and down the street to show her where she needs to go," Bishop said.
It was full of great names. Roger Penrose, Donald Knuth, Gary Kasparov, Vint Cerf, Tony Hoare, etc.
Brooks was one of the speakers who seemed really interested in talking to delegates in coffee breaks and sharing stories. A lovely man, and his retirement is well deserved. He has shaped the industry more than any other attendee, even if others may have contributed more to the science, so to speak.
I like that.
[For the curious: http://i.imgur.com/CGv9PGc.jpg ]
Now on the other side, I take his advice in "There is No Silver Bullet" very seriously. Improving software engineering is a slog, not a shiny buzzword.
My favorite quote of his: "The most important single decision I ever made was to change the IBM 360 series from a 6-bit byte to an 8-bit byte, thereby enabling the use of lowercase letters. That change propagated everywhere."
I guess the one surprising thing for me was that he was still actively working. Even 15 years ago I thought of him as a grand dean from a past generation. Great to see him stay so vibrant for so long.
Brooks's overview: "I fell in love with computers at age 13, in 1944 when Aiken (architect) and IBM (engineers) unveiled the Harvard Mark I, the first American automatic computer. A half-generation behind the pioneers, I have known many of them. So this abbreviated history is personal in two senses: it is primarily about the people rather than the technology, and it disproportionally emphasizes the parts I know personally."
It was a great talk covering his whole career. A video of the same talk is here: http://www.heidelberg-laureate-forum.org/blog/video/lecture-...
So yeah, seeing this feels like Dungeon Keeper meets Minecraft. Which could be very interesting if executed properly, but it's not going to be easy.
Superhot is another recent game that also meets the criteria; time advances only with player movement. Dead simple core concept. I suppose Minecraft probably stands as the most famous example though (which is ironic considering this game's art style and game mechanics appear to have been heavily influenced by it).
I think this shows a lot more promise though
Edit: As pointed out below, don't buy Towns unless you know you're buying an abandoned beta-stage game. Good idea for a game, but, IIRC, the dev got kickstarted, took the money, and left.
I've been craving a new game in this genre for a while. I love the 'idle' game play where the world evolves. Point me to the crowdfunding page!
Have you considered approaching Sony's indie-developer program?
As far as I can tell, on this generation they are being incredibly supportive of indie developers ... and I would 100% pay for an indie game like this on PS4.
Never seen anything like this before.
For the particular project we ended up using Redis and storing the graph as an adjacency list in a machine with 128GB of RAM.
The reason I don't think there will ever be a "graph database" is that there are so many different ways you can store a graph, and so many things you might want to do with one. It's trivial to build a "graph database" in a few lines of any programming language - graph traversal is (hopefully) taught in any decent CS course.
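To illustrate both points, here is a minimal from-memory sketch (not the actual production code; it assumes redis-py, and the "adj:" key scheme and helper names are invented for illustration). Each node's out-neighbours live in a Redis set, and traversal is a few lines of client-side BFS:

    import redis
    from collections import deque

    r = redis.Redis()

    def add_edge(src, dst):
        # One Redis set per node holds its outgoing neighbours.
        r.sadd("adj:" + src, dst)

    def reachable(start, goal):
        # Plain client-side BFS over the adjacency sets.
        seen, queue = {start}, deque([start])
        while queue:
            node = queue.popleft()
            if node == goal:
                return True
            for raw in r.smembers("adj:" + node):
                nxt = raw.decode()
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return False

    add_edge("a", "b"); add_edge("b", "c")
    print(reachable("a", "c"))  # True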
Also - the latest versions of PostgreSQL have all the features to support graph storage. It's ironic how PostgreSQL is becoming a SQL database that is gradually taking over the "NoSQL" problem space.
Also, I made a hypergraphdb in Scheme, atom-centered instead of hyperedge-focused: https://github.com/amirouche/Culturia/blob/master/culturia/c....
Did you know that Gremlin is basically just SRFI-41, aka the stream API, with a few graph-centric helpers?
edit: it's srfi 41, http://srfi.schemers.org/srfi-41/srfi-41.html
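For anyone who hasn't used Gremlin, here is a rough Python analogue of that idea (purely illustrative; this is not Gremlin's or SRFI-41's actual API, and the out/unique helpers are made up): a traversal is just a lazy stream of vertices threaded through a few graph-centric helpers.

    graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}

    def out(nodes):
        # Graph-centric helper: lazily expand to out-neighbours.
        for n in nodes:
            yield from graph[n]

    def unique(nodes):
        # Generic stream helper: drop duplicates lazily.
        seen = set()
        for n in nodes:
            if n not in seen:
                seen.add(n)
                yield n

    # Roughly "g.V('a').out().out().dedup()" in Gremlin terms:
    print(list(unique(out(out(["a"])))))  # ['d']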
My feeling is that graph databases are not suitable/ready for, for lack of a better term, the kind of document-like entity-relationship graphs we typically use in webapps. Typical data models don't represent data as vertices and edges, but as entities with relationships ("foreign keys" in RDBMS nomenclature) embedded in the entities themselves.
This coincidentally applies to the relational model, in its most pure, formal, normal form, but the web development community has long established conventions of ORMing their way around this. The thing is, you shouldn't need an ORM with a graph database.
2-Instead, do graph DB engines try to break through bottlenecks for big data and analytics scenarios?
In fact, most (if not all) graph algorithms can be expressed using linear algebra (with a suitable choice of addition and multiplication operations). And matrix multiplication is itself relational: a select from two matrices, joined on "where i=j", followed by aggregation over identical result coordinates.
The selection of multiplication and addition operations can account for different "data stored in links and nodes".
So there is no such dichotomy "graph vs relational".
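To make that concrete, a small sketch (my own example, assuming numpy; the 4-node graph is invented): pick OR as the addition and AND as the multiplication, and one vector-matrix product computes one step of BFS.

    import numpy as np

    # Adjacency matrix over the Boolean (OR, AND) semiring:
    # A[i][j] = True means an edge from node i to node j.
    A = np.array([[0, 1, 1, 0],
                  [0, 0, 0, 1],
                  [0, 0, 0, 1],
                  [0, 0, 0, 0]], dtype=bool)

    frontier = np.array([1, 0, 0, 0], dtype=bool)  # BFS frontier: node 0

    # "Select where i=j, aggregate over identical result coordinates",
    # with AND as the multiplication and OR as the aggregation:
    next_frontier = (frontier[:, None] & A).any(axis=0)
    print(next_frontier)  # [False  True  True False] -> nodes 1 and 2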
> Between 1986 and 1995, fatal traffic accidents rose 17% the Monday following the switch to Daylight Saving Time.
Accidents rose 17% that Monday? Does that mean 17% more than on a comparable ordinary day (say, 117 fatal accidents versus a typical 100, which would suggest a DST effect), or just that the raw number for that Monday was 17% higher than it used to be? Because they worded it like the latter.
For all this page says, accidents were up 17% every day of the year over that decade.
EDIT: Wikipedia says total US traffic deaths were lower in 1995 than 1986, so I'll chalk this up as poor wording. https://en.wikipedia.org/wiki/List_of_motor_vehicle_deaths_i...
People are fine with the time system, especially since timezone boundaries are themselves somewhat arbitrary. DST is actually more comfortable to live with, since it allows for more daylight after work hours. It would be better to just switch to DST permanently and avoid the constant, frustrating changes.
What is always forgotten is latitude, and that we fail to learn from history and experience.
In Southern England or California, I doubt it's much more than an annoying relic of olden days. But I don't think anyone has real statistics on whether it is or not. Go north and it starts to matter: accident rates go up when you don't have DST.
The UK had an experiment of staying on summer time between 1968 and 1971, introducing British Standard Time. At the end of the period, the vote was to restore the old way, by a large cross-party majority.
I believe at the start of the experiment it was generally thought it would confirm the case for staying on summer time permanently and getting rid of the twice-yearly switch. Switching clocks twice a year is annoying, after all.
Sunrise is a bit complicated, and in winter it compresses the afternoon more than many people would like. Also hard to build the necessary clocks. So we simplify things and make a compromise between the two systems, and we get DST.
I don't want to get up in total darkness in the winter. And in summer I want to be able to use the long evenings with the sun still up, instead of getting up too early.
Except for one, the cited effects are all about the switch from winter time to summer time.
... and switch the entire planet to a single timezone (UTC).
... and require everyone use the same text encoding (UTF-8).
... and pick a single format for writing numbers (commas to group thousands and dots for decimal points).
Also, I am thrilled about news being at 11 again.
In programming, changing global state to achieve something is almost always bad practice, because the change is visible everywhere and sometimes has unpredictable effects. Instead of abusing global state, we invented object-oriented programming, which I consider a way to keep state local (inside objects).
So if someone wants to save daylight, that should be achieved locally, for example by changing school schedules.
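A toy sketch of the analogy (all the names here, set_dst and Schedule, are invented for illustration): DST is the global route, shifting one variable that every consumer of time sees; the school-schedule fix is the local route, changing only the object that asked for it.

    # Global-state version: shifting the clock shifts *every* consumer of time.
    UTC_OFFSET = 0

    def set_dst(on):
        global UTC_OFFSET
        UTC_OFFSET = 1 if on else 0

    # Local-state version: each schedule owns its own adjustment.
    class Schedule:
        def __init__(self, start_hour):
            self.start_hour = start_hour

        def shift(self, hours):
            self.start_hour += hours  # only this schedule changes

    school = Schedule(start_hour=8)
    school.shift(-1)          # school starts an hour earlier in summer
    print(school.start_hour)  # 7; the rest of the world keeps its time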
"Get rid of DST! If you get up an hour earlier than usual, you might die!"
In the USA, sports aren't just sports. They're more like sacraments, tentpole observances which help to shape the order of society.
>If we stayed on Standard Time throughout the year, sunrise here in the Chicago area would be between 4:15 and 4:30 am from the middle of May through the middle of July. And if you check the times for civil twilight, which is when it's bright enough to see without artificial light, you'll find that that starts half an hour earlier. This is insane and a complete waste of sunlight.
>If, by the way, you think the solution is to stay on DST throughout the year, I can only tell you that we tried that back in the '70s and it didn't turn out well. Sunrise here in Chicago was after 8:00 am, which put school children out on the street at bus stops before dawn in the dead of winter.
DST is the only sensible option in my opinion.
I was the third engineer at Eventbrite, and I spent years working many extra hours. After 4 years, it felt like I worked 10 years.
I quit and moved to Europe to try to leave the startup scene, but a month later, I found myself the CTO of a startup in London. The addiction continued. Eventually I found myself back in SF. I'm on my 3rd CTO role now. We're about to raise a series B.
More than I'd like to admit, I want to stop this madness and just enjoy life. Hang out with my family. Perhaps move to Denver or Austin to maintain some semblance of tech life, but get out of the madness. I've been looking at houses in Denver for over a year. And it depresses me.
I know that it can't happen. I know that I'll be working like this until my health prohibits me.
Take a look at this chart: http://blogs.wsj.com/economics/2015/12/14/a-brief-history-of...
Yes, things have been less volatile since 1930, but also note that there has been only inflation, whereas before, there was deflation to balance out the inflation. I know everyone says deflation is the devil. I'm not so sure about that.
Here's a much shorter response, with more pathos:
"Oh. And if your reading this while sitting in some darkened studio or edit suite agonizing over whether housewife A should pick up the soap powder with her left hand or her right, do yourself a favour. Power down. Lock up and go home and kiss your wife and kids."
-- Linds Redding, "A Short Lesson in Perspective"
Read the whole thing: http://www.lindsredding.com/2012/03/11/a-overdue-lesson-in-p...
Increased productivity doesn't help, because everybody is still running flat out to compete, at a higher level.
When I find myself pushed into situations where irritating people have crept into the mix, and no one seems to be willing or able to do anything about that, I look for an exit.
Boss' nephews. Obnoxious assholes who constantly talk about getting laid. Bitchy careerist ladies who constantly demand bullshit, and seem to foment panic with every breath they can muster. Narcissistic retards dumb as a bag of hammers, but smug to the core about everything they do, which usually turns out to be sitting on their asses all day, looking up trivia about sports. Weirdos who can't seem to bathe themselves, even though they're like 40 years old?
I work hard to separate myself from these people.
C. B. Chisholm
Another question would be: why do we keep working so hard for money when technology could already solve many of our needs?
Bob Porter: Looks like you've been missing a lot of work lately.
Peter Gibbons: I wouldn't say I've been missing it, Bob.
For better or worse, there are an infinite number of puzzles to solve.
Purpose, autonomy, work/life balance: pick 2
Working is the only way I know of for me to accumulate capital in order to become financially independent.
The more I do, the faster I accumulate capital, the more years of my life I'll spend able to do what I want to do.
It doesn't feel like a choice to me. The alternative is to live a mediocre life and always have one eye on my "responsibility" towards my capitalist overlords.
However, I think there are two more aspects I can contribute.
Firstly: some kind of social identity. I've worked in a fair number of fields and a fair number of jobs, so being attached to a job or thinking of myself as "a particular profession" seems quite alien. But a fair number of my colleagues saw/see themselves as possessing a particular identity, and work/professions defined that for them. We have a very powerful social indoctrination that you are your job: we have titles, little boxes for "profession" on forms, and many people have internalized the messages that "you are what you are employed as", that you need this external direction/identity to tell you what to do (I don't want to retire, what would I do with myself?!), and that their social identity is formed through their work/networks. It's a bit of a self-fulfilling prophecy, because as we move away from community-oriented networks, people's social networks do become defined by where they work. Even if you manage to get out of the rat race, you discover that your friends are still in it, so they don't have time to spend with you, and you can't identify with some of their everyday struggles if you aren't going through them too. I should also note that people who gained this identity through work took retrenchment and change the hardest psychologically, and it's easy upon reflection to understand why.
The second aspect, though, is this: generating the impression of work. I don't know whether it's base human psychology (I think there are good arguments from anthropology that it isn't) or a culture-bound phenomenon, but I believe two things: that most humans still have a fundamentally reptilian-brain/cargo-cult psychology that is pretty close to the Marxian labor theory of value, and that in modern large-scale professional life, metrics that can accurately tie a worker's or professional's inputs to outputs/profits aren't commonly available.
So there is a social/cultural aspect here: how do most people judge how much you're bringing to the workplace? If you're not working in a widget factory, most people fall back on a heavily weighted proxy: "how busy you look".
Would any CEO, politician, or professional in our culture ever justify taking their salary by saying everything was running smoothly, and their job was to sit there like a good taoist-esque ruler, just facing the horizon and not interrupting? The very idea is absurd, even though we must admit, I think rationally, that in some situations at the very least, that may be the most reasonable course of action. No, instead we justify such roles by "hours of work put in", because it seems to be a good cultural proxy, and I suspect because, even if it's a pretty bad one, it's at least a good cultural value to motivate the lower-downs into being good workers.
But it is, of course, on an intellectual level, obscene and ridiculous. And it results in the promotion practices and workplace culture that I've now experienced at a lot of firms and professional workplaces.
Fresh out of university (economics), I was under the belief that government was generally wasteful. And I worked there, and I saw that it was, and it is, and all was good :)
But I didn't know true waste until I worked for the larger private corporations. We'd hire 15 men, 10 consultants, 4 managers, and support staff to do in 2 years what I could probably do with a skilled team of 4 in my area in government in 12 months. Am I being a little bit hyperbolic? Maybe...
When I worked in government, the 2-3 staff would tell you something was bullshit, bitch, take a long lunch break, but get something done...maybe not everything, but something. They weren't salespeople.
In private-sector professional firms, people will just lie and say everything they do is productive and a success. It's a hustle, it's a sale. They'll come in to the office and, rather than eat with their families at home, they'll eat breakfast at work while still not doing anything. They'll go to conferences and say "how great we are!". They'll come up with as many jobs and tasks as they can, and the efficiency of what they do is totally irrelevant. They still play solitaire on their computers. They get tonnes of people to proofread documents n times with n meetings (before eventually switching back to the original version). They'll restructure before restructuring back. They'll fire. They'll hire. It doesn't matter, just do STUFF.
The philosophy is just spend all the money you have and get your staff to do STUFF, expand your empire as much as possible, make everyone work, be seen to work, take credit for everything good, disown everything bad.
To them, long hours weren't/aren't inefficiency or a sign of intellectual failing; they're a sign of how awesome you are. You come in early and you stay back late not because you're doing anything (indeed, amongst the honest ones, there is a haunting realisation that your job, or at least the hours you're putting into it, maybe isn't actually producing anything, or might even be creating more work...), but because it's a culturally and structurally reinforcing meme.
I'm not saying that all this culture is universal amongst us, or our workplaces, or our societies. But it's there, and I think it's having a pretty powerful impact on our relationship with work, labor, and status...
It was also covered in the documentary "Happy".
I was actually expecting the article to discuss this in more detail, or at least add a citation (he cited Keynes and Marx), but instead it went on with one long personal anecdotal comparison after another.
I was also hoping the author would discuss the growing trend of people working from home and how that relates, but... nope.
The article was too long for my liking. A fairly disappointing read.
Programming often feels like a series of little victories to me, and it's much harder to achieve that outside of work.
Let me tell you, it is a terrible way to live.
Working hard and smartly and with fun, which I occasionally got to do, was something different and immensely satisfying.
But if you are "working hard" because life sucks, get your life in order. The sooner the better, not just for you but, ultimately, for your career.
Anyone who says you can't. Or that you have to "pay some sort of dues"? Fuck them.
As I overheard in the cafe, the other day -- my paraphrase may not be as snappy as the original: There's one choice where the outcome is 100% certain: Not choosing. Making no choice, taking no action, no chance.
The young-ish fellow was advising another young fellow on whether to ask a girl out.
As someone who's ended up spending his life alone -- and, is that "not by choice", or, per the above, precisely by choice. Let me tell you, there is no more important choice.
Family, friends, lovers, work and interests that matter (however, and, big or small). There is no more important choice. "Work hard" on those.
And there is one reason he didn't mention: When you don't work so much, you need to figure out what to do with your time. You can't watch movies all day. Nobody can do that for a long time. So you need to think hard about other reasonable things to do. And thinking is painful and scary.
In essence, making people desperate makes for handy robots, but the lack of reward incentive creates demonstrably worse work product.
So those newly generated images are structurally very similar to the original sources. The neural net seems to be good at "reshuffling" the sources. That's probably how things like reflections on the water got there, even if they weren't present in the doodles.
For details, the research paper is linked on the GitHub page: http://arxiv.org/abs/1603.01768
For a video and higher-level overview see my article from yesterday: http://nucl.ai/blog/neural-doodles/
You could credibly put any words in the mouth of anyone.
What I've found so far is that it takes a while to get good results: something that looks like its own creation instead of an overlap of pictures. There's no exact way to do this. If you modify existing artwork it works well enough, since the source is already somewhat divorced from reality, but photos are difficult. When it works, it's amazing though.