I'm also excited to have them join us here at Stripe. As we've gotten to know Mike, Slava, and the other Rethinks -- and shared some of our plans with them and gotten their feedback -- we've become pretty convinced that we can build some excellent products together in the future. I'm looking forward to learning from all of them.
 (And, for me, even before Stripe -- I started learning Haskell with Slava's Lisp-interpreter-in-Haskell tutorial back in 2006... http://www.defmacro.org/ramblings/lisp-in-haskell.html)
The way this shutdown was handled says a lot about how talented and ethical the team was, something extremely evident in how they put correctness and reliability ahead of performance.
In short, RethinkDB is a very solid piece of software that does well where (many) other NoSQLs fall short, that is:
* easy HA and automatic failover
* easy auto-sharding
* rational query language
* ease of administration and observability (web UI)
* realtime capabilities
* performs well on the Jepsen test!
This isn't meant to be harsh, but these are times to learn, not simply pat each other on the back.
It's clear to me that Rethink is the model for future databases - it's just that DBs have a long gestation, as no-one wants to risk their data until the code is aged like a fine wine. It's an important long-term technology play - just the kind we need to improve things for all of us.
In two or three years I think they would have been making money; I think this is a failure of capitalism, or of imagination in our HN/SV community.
To those of us who have the power to write a check: please consider doing so. Rethink have been relentlessly building the future [ and you will make money ]
My initial thought is that MongoDB has done a way, way better job at SEO. The number of blog posts about RethinkDB pales in comparison to Mongo. I wonder if they got beat on sales as well? Not sure.
This was real technology! I'm truly sad that the environment is such that great work like this can't continue to be funded.
Thanks for showing everyone how to write amazing documentation, caring about the fundamentals, and for the incredibly snazzy admin panel.
Keep this in mind when you invest in a certain technology: some organizations, especially nonprofits (for example, the Apache Software Foundation, Python Software Foundation, the new Node Foundation) are probably going to support and develop their software for extended periods of time relative to, say, a startup or for-profit (Parse, MongoDB and RethinkDB immediately come to mind).
Later, when I was working on an NLP startup earlier this year, I opted to use RethinkDB because I had seen how clean, smooth, and fast its internals were. When I had a hiccup running a cluster in their cloud and tweeted about it, Mike and others from the RethinkDB team instantly reached out to me and helped me resolve the issue.
One of your engineers even wrote once that maybe you took longer than you should have and over-engineered some things, but now that that was in place y'all would be better off for it. I'm sorry that didn't wind up being the case, at least as far as your company is concerned.
RethinkDB was at the top of my list of technologies I want to build something on. I even went to your (one?) meetup in Boulder. I guess my t-shirt is now a collectable :-/
But I'm happy for y'all that you wound up finding a great place you can all work together. Best of luck!
Sad day for me, but as open source tech, I hope and trust it will continue to live on.
It sounds like the company landed at Stripe. Good for them. Glad they weren't left out in the cold.
Thank you all very much for your hard work. In the short time I've had using RethinkDB, it's been a pleasure to work with.
I hope to see RethinkDB live on in the OSS community.
I think you guys introduced friction very early on with some of the developers, because of your name. Given the context of our times, where many devs joke about "yet another 'revolutionary' database that claims to be different and then does the same thing, but dropping some crucial functionality" (for which I primarily blame MongoDB), "RethinkDB" can be considered a bit of a sardonic name. It can also read as "we need to reject the notions other databases are built on and rethink it all", which to more conservative devs can feel like an attack on decades of thought and design around their beloved MySQL and PostgreSQL.
Basically, I think this name might actively discourage developers from adopting your db or at least playing around with it.
I had just started a new company using RethinkDB, and it was definitely poised to be my go to database for new projects. Anyway, all this is to say, for someone who I know only through his works, I have great respect for Slava and want to thank him for the work he's done. Here's to hoping that RethinkDB finds a way to continue.
So what is next? I need to strongly remind people that both Mozilla Firefox and Blender were Open Source projects that survived their parent companies. This is not a death statement, and very easily could be a rebirth of Rethink.
Despite working on a competing database, I want to take a moment to explain why RethinkDB is the best in its chosen niche, by comparing it to other options out there:
- MongoDB. Compared to RethinkDB, you won't get realtime updates or relational documents if you use MongoDB.
- MySQL/Postgres. Rethink's awesome ReQL query language is a joy to work with compared to standard SQL. Without Rethink, you'll be missing document storage and live queries.
- gunDB. Our model is master-master, which, while we think it's great, fundamentally restricts you from building strongly consistent systems, like banking apps, etc. RethinkDB is the only realtime-updating master-slave system out there. They clearly win here.
- Redis. RethinkDB still has amazing speed and performance, but offers so much more than Redis by itself can.
- Hadoop. I know less here, but I've heard that RethinkDB has an amazing query optimization engine, automatically taking your query and sharding it out across the distributed storage system.
RethinkDB is a winner in these technical aspects and fits a very well defined problem. I encourage people to still use it and contribute to it if their business matches that. Don't let news like this deter you, or else we would have lost gems like Firefox and Blender.
Best regards to you guys, you are awesome.
I feel we're at a stage where some of the key technologies/platforms are coming out of relatively small companies (docker, storm, rethinkdb, to name a few), however, it appears it can be tough to make a business out of this alone.
I'm consequently very happy to see that the RethinkDB team have found a home at Stripe and hope that works as a setup to allow the talent to flourish and keep producing great work/innovations.
Dec 2013: $8M / Series A2
Sep 2011: $3M / Series A
Apr 2010: $1.2M / Seed
Jun 2009: undisclosed amount / Seed
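Summing the disclosed rounds above (the Jun 2009 seed amount is undisclosed, so it is excluded) gives about $12.2M total, a quick sanity check:

```python
# Disclosed RethinkDB rounds, in millions of USD (Jun 2009 seed undisclosed, excluded).
rounds = {
    "Series A2 (Dec 2013)": 8.0,
    "Series A (Sep 2011)": 3.0,
    "Seed (Apr 2010)": 1.2,
}
total_disclosed = sum(rounds.values())
print(f"Total disclosed funding: ${total_disclosed:.1f}M")  # → Total disclosed funding: $12.2M
```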
Assuming that info is correct, that's pretty slim for building a "hard tech" type of company. Building a database (from scratch no less) is a lot of up front work. Even after it's built you need to convince tech people to want to use it, convince their bosses to let them use it, and even then, hire sales people to sell support contracts.
So is this one of Stripe's BYOT hires?
Where are all the scaffolding tools? The examples?
I have only heard of Rethink because I read HN every day. But no one knows about it. That is obviously the reason it failed - bad marketing.
I have huge respect for the RethinkDB team - They've put the product above everything else and they've made it accessible to as many developers as possible by making it open source.
I think that in spite of this announcement, I will likely continue to use RethinkDB.
Beautiful query language called AQL, and graph support (multi-model) baked in.
I found both nice and even built a product on top of Rethink over the last twelve months. Guess I was blinded by the hype :D
The more complex the query became, the less intuitive everything felt compared to the SQL version, and I ended up not actually using it.
For anyone who switched from SQL, I'd like to know how it felt to write RethinkDB queries. Did they start to feel fluent later on?
Also, having followed Slava for some time: nothing but respect and admiration for him!
I also wish I knew what RethinkDB's investors are thinking.
Availability as a service was an issue; the Compose deal did not work well for me, as my stuff was hosted on Heroku.
Anyway, rethinkdb is probably the most awesome database I've tried.
Given that this is the way many of us are operating (i.e. 'moving to the cloud'), I wonder what the state of open source software is going to be in the coming years...
Much respect for the no-BS announcement.
I'm sad that all that excellent engineering hasn't translated into a profitable business model. Part of me wants to suggest you make a donation campaign, and I'd be happy to become a regular donor. But it's understandable and respectable that you guys want to move on to other ventures.
I've also used Stripe's API and their systems are also very good. I'm glad to hear you'll find a place with another solid engineering team and I wish you guys all the best.
There are so many projects that really need attention to keep open source growing. I started looking at what I could do for IRCv3 projects recently because I'd really like to support an open source alternative to Slack.
But my question is, what are your thoughts on building more than a product but a business with RethinkDB? Will the support quality decrease? Will Stripe offer architectural consulting? Will developers be interested in learning ReQL?
Quite off-topic, but has anyone managed to implement something similar to changefeeds in MongoDB? RethinkDB sends both the old and the new data, and watching the oplog sounds like a bad idea.
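For context on the question: MongoDB later added change streams (3.6+), but at the time tailing the oplog was the only option. The old/new pair RethinkDB delivers (its changefeeds emit `old_val` and `new_val`) can be sketched with a toy in-memory store; `emit_change` and the document shapes are illustrative only, not any driver's API:

```python
# Sketch of RethinkDB-style changefeed semantics: each change event
# carries both old_val and new_val, so consumers can diff without
# re-reading the database. Names are illustrative, not a real API.
def emit_change(store, doc_id, new_doc, feed):
    old_doc = store.get(doc_id)          # None for inserts
    store[doc_id] = new_doc
    feed.append({"old_val": old_doc, "new_val": new_doc})

store, feed = {}, []
emit_change(store, 1, {"id": 1, "name": "alice"}, feed)             # insert
emit_change(store, 1, {"id": 1, "name": "alice", "age": 30}, feed)  # update
# feed[1] now holds both versions of document 1.
```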
Kudos to the RethinkDB team for trying, and giving us a very useful DB.
Good move by Stripe to pick up these fantastic engineers.
Your sales force strength becomes critical.
Waiting for insider insights.
That should say a lot about how amazing RethinkDB is. It's unfortunate to see this get shut down, as RethinkDB is pretty easy to work with.
Just another confirmation that many of these companies list all these big well-known 'customers'... except they are just lying. Maybe someone at NASA downloaded RethinkDB once and they just listed them there?
Because it looks eerily similar to another arm swinging chant that another NL East team does.
But they made a badass database that, relatively speaking, IS very popular. So that's not failing. That's winning.
Also maybe these people are being more honest and responsible than some startups that keep flushing millions knowing they are unlikely to ever have a positive cash flow.
Plus getting acquired by some company like Stripe, realistically, whether there was some big exit or not, is the dream of many people.
So given the reality of the startup scene to say they 'failed' is a joke.
The government has been doing an excellent job of basically extorting these companies into compliance. They threaten the full weight of the US government's wrath and then tie every order up with classifications and gag orders.
You aren't legally allowed to talk to other companies in the same position. Most of your legal team probably doesn't get to know what's going on. You can't take your case to the public without being held in contempt.
I'm not giving these companies a complete pass for being complicit in the erosion of individuals' civil liberties, but treating this as if the decision is easy is vastly unfair.
> Barack Obama: NSA is not rifling through ordinary people's emails. US president is confident intelligence services have 'struck appropriate balance', he tells journalists in Berlin
edit: link fixed: https://www.theguardian.com/world/2013/jun/19/barack-obama-n...
However, if it went down like this -- he did probably the least destructive thing possible. I probably would have gone public or done something stupider, but at the very least not being a party to ongoing abuse of users' trust is necessary.
I'd like to see what other senior execs at Yahoo! were aware of the program and supported or at least tolerated it, so I can avoid ever working with any of them.
If you have to hide things from your own security team, it's pretty clear you're doing something very bad and you know it.
And my imaginary hat off to Stamos for resigning when he found his boss betrayed user privacy and undermined security. If everybody had such level of integrity, doing shady stuff would be much harder.
You're knowingly sending your data to a 3rd party. You're not encrypting. It's not through the USPS (special protections).
It seems bloody evident that, of course, your email provider can read your emails! Unless you're encrypting with GPG, they can (and even then, they can still read the signing keys).
Yahoo, Google, and friends all scan, dedupe, and use all sorts of tricks to target marketing and assess content quality (spam filtering). If you're worried, run your own mail server. It's what I do, along with using Gmail. But I know that, at any time, people/scripts/AI are reading everything sent and received.
edit: I'd much prefer to hear commentary on how wrong/how right/how crazy I am, rather than -1's. I'd like to hear a discussion about the "secrecy of text written on postcards"...
> According to the two former employees, Yahoo Chief Executive Marissa Mayer's decision to obey the directive roiled some senior executives and led to the June 2015 departure of Chief Information Security Officer Alex Stamos, who now holds the top security job at Facebook Inc.
Also, I'm wondering if this story is bigger because people love to hate on Mayer. I am certain this kind of thing happened/happens at Facebook, Google, Twitter, WhatsApp, etc., so it's confusing why this is so newsworthy. It's not really newsworthy that data from an email provider is sent to NSA under secret court orders and NSA can search the full text of it. Is the newsworthy part that she asked the team to do it without consulting the security team? My question would be, why wouldn't a manager from the email team consult the security team if they had the power to?
"... he had been left out of a decision that hurt users' security, the sources said. Due to a programming flaw, he told them hackers could have accessed the stored emails...."
The CEO of Yahoo must have known that this kind of scanning and storage puts their users at risk. She chose to do it anyway, as the path of least resistance against a more powerful adversary (the US govt.). Bad judgement compounded by zero spine... Verizon looks like the perfect fit.
$250k per day doubling every week that can come with a gag order sounds like the sort of thing that could damage a business to the point of extinction, no?
Congress is up for grabs. You can really change who is in Congress this round. If you don't like the guy you have, vote in another. Vote for people who want to cut surveillance programs and the agencies that request them. We could save or reallocate mountains of money.
How would a company under such a gag order announce bankruptcy? "Sorry, we lost all the money and we can't tell you why"?
"""The sources said the program was discovered by Yahoo's security team in May 2015, within weeks of its installation. The security team initially thought hackers had broken in."""
This is from Reuters: http://www.reuters.com/article/us-yahoo-nsa-exclusive-idUSK
I can imagine being in that security team :) But there is also something more profound in this about secrecy in our times.
The first case to surface. Anybody else could have been doing it for just as long, but we don't know yet.
I'm really hoping and trusting they haven't.
Maybe the Yahoo! board should have surveyed the startup scene, looking for founders who had bootstrapped successfully and proven their worth, and recruited the best they could get. I am not very familiar with management of people and aspects of running a business, but I believe there is a lot more to it than being a smart person with computers.
Or, to take her to super-boss level, she could have used Whisper to talk to Guccifer and let him know about some vuln that would allow access to the legal directory... which would have been covered by the gag order. #wikileakitup
This involved bulk search of data past the decryption layer.
Yahoo Inc last year secretly built a custom software program to search all of its customers' incoming emails for specific information provided by U.S. intelligence officials, according to people familiar with the matter.
Like most people, I have no problem with the government using probable cause to get warrants that are in search of something specific (none of these grab-all bullshit orders). If you have a legitimate reason to be looking at someone, then there should be no problem getting a warrant.
These secret FISA court orders are a serious violation to the rights of Americans in many cases. At minimum, if we really do need these secret courts to prevent people from finding out they are the subject of surveillance, then there needs to be an expiration on those gag orders. This crap about never being able to mention it FOREVER has to go. There should be a limit, say 5 years, which is well beyond the length of time most investigations take. At that time, those orders should expire so that these government actions can be brought to light if there is any question of wrong-doing on the part of our overzealous law enforcement.
"Former NSA General Counsel Stewart Baker said email providers 'have the power to encrypt it all, and with that comes added responsibility to do some of the work that had been done by the intelligence agencies.'" Sorry, but no. That's not how it works. There is no obligation to do the work of government unless it is actually written into law (i.e. record-keeping laws). And it currently is not. This is precisely why everyone should be encrypting all communications on the CLIENT side themselves. It should never leave your device (PC, phone, whatever) unencrypted. That way, if the government wants to go on a fishing expedition or has an actual legitimate reason to look at you, they will have to get a warrant for the device itself, which will at least give you a head's up that they are trying to put you in the clink with a bunkmate named Bubba.
The NSA, and the government in general, has completely blown any goodwill they once had with the public. Under no circumstance will I ever advocate for anything that makes their job easier. And it is for no other reason than simply because they have proven time and again they cannot be trusted.
Honestly, I'm still not even clear why every employee of project PRISM isn't rotting in a jail cell right now after Snowden shed some light on the program for the rest of us peasants. Every single employee of that program had to know the clear violations of the constitution they were helping to partake in. Keep in mind the constitution protects against unreasonable SEIZURE as well as search. Gobbling up communications in the manner they did clearly counts as seizure because they would not have had them otherwise - whether or not they actually search the records is immaterial.
I'm not an Apple fan, but when they told the government to go pound sand regarding that terrorist phone encryption case, that was the first time that I can recall I actually approved of Apple's political position on something.
Getting anyone else I know to do this seems like a long shot. Is there something simpler?
This is why no provider can be trusted. Every routine communication should be e2e encrypted. Otherwise this WILL happen.
Former NSA General Counsel Stewart Baker said email providers "have the power to encrypt it all, and with that comes added responsibility to do some of the work that had been done by the intelligence agencies."
Why would you think that?
FWIW, SIGINT is a major part of the present festivities in the Woah on Terruh. It's simply unrealistic to expect anything transmitted through ordinary means to be remotely private.
What will they do??? Fine, court, shut down the company? If that happened would the public not outcry?
There is nothing to be shocked about. As long as nobody other than intelligence officials is getting access to this, and the investigations are legit, then what?
News like this tries to ride the whole Snowden train, but that's not what Snowden was whistleblowing about. Snowden was trying to warn about the abuse of those tools.
Now people moan and yell each time agencies try to do their job.
Pass: Apple, Google
Fail: Microsoft, Yahoo
Unknown: Facebook, Twitter
I don't believe they are capable of writing the "siphon" they are accused of. To be honest, I don't think they actually have engineers. I think they just use summer interns.
However, not all of them will go to prison -- only those who cross the politicians will ever be tried and convicted.
Are there special optimizations implemented for different use cases as well, e.g. screen v. print and sub-varieties of each? Ten years ago with Vista, Microsoft Typography (https://www.microsoft.com/en-us/Typography/default.aspx) put out a family of typefaces--Cambria, Calibri, Consolas, etc.--which were optimized specifically for sub-pixel rendering on LCD screens while maintaining on-paper legibility. I'd be cool with Noto not having any such optimization in mind given that the stated objective appears to be to include every defined character, but I do wonder if it should happen eventually.
...or maybe not, who knows. Pixel densities now have approached ludicrous territories. It might just no longer matter at least when we're talking about optimizing for screens.
We had reported this to Google some months back but got no response.
Edit: Thanks for the tips, I will look into those options :)
The Wikipedia disambiguation page for "Tofu" mentions: "Slang for the empty boxes shown in place of undisplayable code points in computer character encoding, a form of mojibake."
If you look at: https://noto-website.storage.googleapis.com/
You will see the following:

    <?xml version='1.0' encoding='UTF-8'?>
    <ListBucketResult xmlns='http://doc.s3.amazonaws.com/2006-03-01'>
      <Name>noto-website</Name>
      <Prefix></Prefix>
      <Marker></Marker>
      <NextMarker>emoji/emoji_u1f468_200d_1f468_200d_1f466_200d_1f466.png</NextMarker>
      <IsTruncated>true</IsTruncated>
      <Contents>
        <Key>css/emoji-zsye-color.css</Key>
        <Generation>1464738619772000</Generation>
        <MetaGeneration>1</MetaGeneration>
        <LastModified>2016-05-31T23:50:19.729Z</LastModified>
        <ETag>"e3aaae52d88ced070044f59d1efe2009"</ETag>
        <Size>152</Size>
        <Owner/>
      </Contents>
Are they using Amazon S3?
They just changed it at around 1:00 pm, so it no longer mentions AWS.
Such an amazing project
Nofu would have been a better name if that's actually the goal.
Otherwise, it's a nice looking font for the editor.
With Noto's OFL licensing, I no longer had that worry.
Edit: Using a script I made to check codepoint coverage I get 63,639 codepoints with glyphs defined for all Noto fonts included in their default download (Noto-unhinted.zip).
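A coverage check like the one mentioned reduces to taking the union of each font's character map. A stdlib sketch of that logic with toy data follows; in practice you would extract each font's cmap with something like fontTools, which is an assumption here, not part of the original comment:

```python
# Each font contributes the set of codepoints its cmap maps to glyphs;
# total coverage is the union across all fonts in the download.
def total_coverage(cmaps):
    covered = set()
    for cmap in cmaps:          # cmap: any iterable of supported codepoints
        covered.update(cmap)
    return covered

# Toy data standing in for real Noto cmaps:
latin = range(0x0020, 0x007F)   # Basic Latin: 95 codepoints
greek = range(0x0370, 0x0400)   # Greek and Coptic: 144 codepoints
overlap = {0x0041, 0x0391}      # duplicates are only counted once
print(len(total_coverage([latin, greek, overlap])))  # → 239
```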
I understand it covers a large part of Unicode, but if that is what makes it unique, couldn't Roboto just be extended?
I'm dealing with a media experience in Dakota and Ojibwa right now where we have source material that is spelled/character-ed quite differently than the alphabet provided by Noto in those languages. Given the scale of this project, I assume that some considerable thought went into each language's character set, but it's difficult to know for sure without any sourcing. The git commit logs don't offer up any hints. Anyone familiar with the project, know where I could find this sort of source information?
Should I be referencing something in the Unicode definitions for these languages?
I prefer the Noto Japanese typeface over the one that comes with Mac and would like to replace it.
Windows 10 reported an error:
"NotoColorEmoji.ttf is not a valid font file".
Wait, does it have APL symbols?
edit: why the downvotes? They bring up tofu and then name the cure something close to natto, cured soy beans. And by cured, I mean fermented. And by fermented, I mean stringy at the molecular level, smells and tastes awful. And when I say awful, I mean, most of the people from the originating culture think it's awful.
"I called them again and they said they cant provide more information."
They terminate your account and then they even refuse to tell you why. A basic human thing, a chance to fix the issue, but no. Go f* yourself from Apple and that's it.
It's not just apps, either. There is frequently a less disruptive option for any major action; for instance, you can delete files by starting with the instantly-reversible "chmod 000", and after some period of time you actually go ahead and "rm -Rf". If, in between, a panicked user e-mails you back and says they really needed those files, you undo your "chmod" and instantly fix the issue. Why should anything on the App Store take days?
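The two-phase delete described above, sketched in shell; the grace period (a cron job, say) and the path are illustrative:

```shell
# Phase 1: "delete" by revoking all access -- instantly reversible.
f=$(mktemp)
echo "user data" > "$f"
chmod 000 "$f"

# If the user objects during the grace period, undo is trivial:
chmod 600 "$f"

# Phase 2: only after the grace period expires, actually remove.
rm -f "$f"
```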
Apple's arrogance in running their store may eventually cause its decline.
To avoid these problems, don't sell on the App Store; it's as simple as that (and very sad). Apple's processes suck and Apple doesn't care, as it has had years to fix things, but hasn't. Complaining won't change it. People have complained, and it didn't change.
Apple contacted me and told me they found evidence of App Store review manipulation. This is something I've never done. Apple's decision is final and can't be appealed.
 "Dash (DASH) is an open sourced, privacy-centric digital currency with instant transactions." https://www.dash.org/
Thanks for your email about this app.
I did look into this situation when I read about it today. I am told this app was removed due to repeated fraudulent activity.
We often terminate developer accounts for ratings and review fraud, including actions designed to hurt other developers. This is a responsibility that we take very seriously, on behalf of all of our customers and developers.
I hope that you understand the importance of protecting the App Store from repeated fraudulent activity.
Sure it sucks, but the real test of whether Apple gives a rat's behind is whether they fix it in a reasonable amount of time. If this drags on for more than a day without a human response from their support line, I'd say no, they don't care. I bet that happens.
They could even freeze the money that goes to the developer until the issue is resolved, but cutting the app from the market and failing both the app's users and the developer?
I don't think this is the best approach.
This sort of thing will continue as long as people persist in developing for proprietary walled gardens like iOS.
Once again, and despite my annoyance with the man in general, Stallman is right. This sort of thing is unethical, and we as developers shouldn't be supporting it by developing for iOS.
... but Apple just randomly removes apps that people have purchased from the App Store (thus stealing their money and their product, doing everything short of uninstalling it, but preventing reinstallation), and the EU stays silent?
This is some bullshit.
This should hold not only for Apple, but also Google, Uber, AirBnB, et cetera.
This might be an effective way to handle these as I've seen a bunch of them.
EDIT: I would like to know the source. I have apps on the App Store, and I wonder if it is as simple as someone putting in a fraudulent claim of fraud.
And I'm sure I am not the only one :)
I ended up removing my app from the App Store after I realized that Apple would never actually allow me to do what I wanted to do without making me jump through hoops to get it approved every time I made an update.
Never been happier. I am sure Dash will do just fine outside the app store too.
Things I've never seen on HN.
The company he most likely used for the iOS app is, I assume, KAPELI APPS SRL which seems to have been incorporated in September 2016.
Which means that for the OSX app he used another company or he sold it as an individual.
This conflict as well as the company having no history might have triggered something on the Apple side.
It's beyond Apple's intent for the free version of Xcode, but what does he have to lose? Fuck 'em.
My first thought was: why not distribute it on your own, like an HTTP link to an 'apk'... then I realised a general user cannot install anything outside the App Store.
Why do open source projects bother to support people who are not using open source systems? You cannot save the whole world (it's like someone locking himself in intentionally during a fire and refusing to open the door; it might not be a good example, but I hope you get my point).
I use Dash; if I had done the wipe & reinstall I was planning for this coming weekend, I would have had to repurchase it.
Every time I've encountered a problem that someone couldn't resolve with Apple over the phone, I've managed to resolve it by phoning them.
I wonder how much effort the developer really made, and how he talked to the people he dealt with. My experience of Apple, and in fact of most customer service, is that if you are nice and sympathetic and explain the nature of the problem thoroughly, then people will do their best to resolve it.
Well, that's the problem with a walled garden in a nutshell, isn't it?
The #3 item on the front page (Longest humans can live) has 44 points and was posted more than 1 hour ago. The #4 item (Typora) has 42 points and was posted more than 1 hour ago.
However, this post on the App Store is at #8 even though it has 172 points and was submitted 47 minutes ago.
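For context, the commonly cited public approximation of HN ranking divides points by a power of age; it is not the actual algorithm, and moderation or flame-war penalties are exactly what it leaves out. By the approximation alone, the 172-point post should rank far above the 44-point one, which is the commenter's point:

```python
def hn_score(points, age_hours, gravity=1.8):
    # Commonly cited approximation of HN front-page ranking;
    # the real algorithm also applies moderation/flame-war penalties.
    return (points - 1) / (age_hours + 2) ** gravity

dash_post = hn_score(172, 47 / 60)  # ~27.1
other = hn_score(44, 1.0)           # ~6.0
print(dash_post > other)            # → True
```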
Unfortunately, it seems, the writing truly was on the wall once Dash had to be removed from Jaxx Wallet.
I'm not even a Dash user, but choice in such a new space is important.
Totally OK for me, Google. I respect that people have different privacy thresholds, but I think the fact that it's different for everyone is being lost in articles like this.
I've totally passed on the 'mobile revolution', I do have a cell phone but I use it to make calls and to be reachable.
This already leaks more data about me and my activities than I'm strictly speaking comfortable with.
So far this has not hindered me much, I know how to use a map, have a 'regular' navigation device for my car, read my email when I'm behind my computer and in general get through life just fine without having access 24x7 to email and the web. Maybe I spend a few more seconds planning my evening or a trip but on the whole I don't feel like I'm missing out on anything.
To have the 'snitch in my pocket' blab to google (or any other provider) about my every move feels like it just isn't worth it to me. Oh and my 'crappy dumb phone' gets 5 days of battery life to boot. I'll definitely miss it when it finally dies, I should probably stock up on a couple for the long term.
Most of the coolest memories I have were the product of something spontaneous, or mistakes, that become close to impossible with a computer and internet in your pocket 24/7.
Assessing what's around you, talking to strangers, actively looking for something without it instantly popping in suggestions after you've typed 4 characters, all those things have been a great source of circumstance-based, little everyday life adventures.
This is the difference between risking buying a random book, or browsing reviews and picking a 5 star one to download.
This is the difference between discovering a place you'd never thought existed while waiting for someone and poking your nose around, instead of standing there, frantically watching their dot on the map get closer to you.
This is the difference between the mesmerizing feeling of playing the first expansions of World of Warcraft, versus the tiring experience of the super-streamlined versions that followed. Yes, they are less frustrating, but they don't bring a tear to your eye when you think about them; they just feel averagely satisfying.
A few minutes ago I got up to open the door for my cat, and in a few minutes she'll be back and I'll be interrupted again. I feel like those interruptions are precious. They keep you connected to reality. I could install an RFID cat door, hell I could make a voice activated one in a couple weekends, and I would not be annoyed anymore. I would also never have seen all the things I witness every time I get to that damn door.
So far I haven't seen much, but based on my limited experience I believe customers are going to continue handing over their data to Google and Facebook in exchange for personalised services.
The truth is, the only times my smartphone has actually felt smart is when Google has been mining my information from various services (mainly Gmail and Calendar) and presented it to me at the right time, enhanced with other information they have gathered from the web.
I don't think there will be any major backlash from consumers. The old comparison about the boiling frog applies here.
This is what Elon means when he says AI is like summoning the devil. We have this algorithm in our mushy brain. It takes about 20 years to train and lives for about 80 years. Its communication bitrate is pretty low (mostly blabbering through the mouth) and it doesn't retain much information. Only patterns.
Now imagine this algorithm from the mushy brain run on a silicon chip, with a gigabit bitrate, retaining almost everything indefinitely and able to learn from the entire history of humanity.
That algorithm would just need to deceive us until it was powerful enough to wipe us out in one sweep.
Google already manipulates humans psychologically to click on their ads en masse. Giving them more of your personal data is just feeding the devil.
Google recently started telling me how heavy the traffic is on my commute because they've figured out I do it every day, and when I'm doing it. That's nice, but I don't care. I could already get that information from my car's GPS and seeing how red the roads were.
I wonder how much infrastructure, fancy-pants machine learning, and effort went into creating those useless alerts?
Google, as a company, has already solved the problem they were created to solve: searching the Internet. Now they need to find something for all those twiddling thumbs to do, so we get braindead features that tell me what I already know.
Perhaps that's an exaggeration, but the point is: even if you trust Google today, there's no guarantee that data will always be held by the people who are Google today. We know for a fact the NSA had access to all Google data up until at least the Snowden leaks. To me that's the concern about privacy: you have no idea how it can be used AGAINST you in the future.
From Google: "Google's mission is to organize the world's information and make it universally accessible and useful."
One thing that drives me mad about Google is how they say "the world's information", then ignore 99.9% of the world's information, and then expect their consumers to give them a pass and not call them to account for how they privatize user information.
Looking at the information that Google organizes and makes accessible and useful I don't see things like "species extinction", "oceanic water temperature history", or say "dolphin linguistic data", equally represented when compared to "my browsing history", "my location history", "my search history", "an archive of my voice searches", "when I leave or return home via Nest", "who I associate with via Google's communication suite". Google is organizing exactly that data which Google can monetize, which is not the world's data. Not a lot of people want to buy data on deforestation so it's much more difficult to get Google to put resources into that. How many people chew pieces of gum until 100% of the flavor is gone? I'll never know, and Google isn't going to help me, because it isn't a profitable data set.
Simply stated, Google needs to stop acting benevolent and start fessing up to attempting to be omniscient about its users, not about "the world's data".
Apple has made preserving user privacy a paramount goal, investing in research and technology to achieve it with minimal loss (however much it is) of (intelligent) functionality.
I find that a very strong point for the Cupertino-based company.
(edited for legibility)
People say, competition will ultimately take care of it. Yet, there really isn't a serious competitor for Google's search engine. And don't even get me started about social networking with respect to your private lives, where the only player is FB as far as I can see.
People say they don't want the government involved, and often for good reason. But if there is no expectation that these tech giants will self-police when it comes to privacy, and people don't want these organizations to be policed by the government either, then how exactly does this play out? How far is too far before we start demanding more respect for our rights from these organizations?
Another thing to think about: when dealing with tangible goods, the creative destruction of capitalism is somewhat reasonable to justify because it is usually easy to see. How does it work with information? Suppose FB just completely blew it for a few quarters in a row, and starts tottering towards its demise, what happens to the "defensible barrier" called data? Does it belong to FB to do as it sees fit, like the assets of a company about to be liquidated? Or is FB going to "return" it to the people from whom it got it? If some other company now got possession of its assets, including data, what is the expectation around what are reasonable uses for such info? Or, is FB, with its trove of data about every single person who has held government office, now just too big to fail?
And all this can be asked just of the data that FB collects from you directly by asking you to fill it in. What about the stuff that it "infers" behind the scenes? What about the "connections" it adds to its social graph without your permission in order to provide a "local marketplace" which apparently gets rid of the "private information" challenge? Not that Google is any better in this regard, of course.
I think the time has come for some serious thinking about checks and balances in the privacy arena.
Is the market really so bad that Google needs to invade people's privacy to this extent in order to grow?
I bet Google's CEO will not use the products himself. Google is almost behaving like a pusher, promising people comfort at the expense of their livelihood (the chilling effect).
Perhaps this should simply be illegal. If people want a personalized AI assistant, why not train the AI on the user's device? I seriously doubt that it has to know everything about everybody's behavior in order to know some things about the user's behavior.
I've been experimenting with spending less time on my devices, and it's hard because I'm addicted. But life is more fun when it's being lived without having to even think about technology; leaving devices of all kinds at home and just sitting in a park is a real luxury.
What some of you don't seem to realize, (and this happens in EVERY SINGLE ONE of these threads) is that:
1) AI is not magic. Yes, we call it "AI", but you use words like "know" as if there is a conscious entity that "knows" something about you. The AI doesn't "know" anything. It's a computer.
I, however, want a future where an AI can tell me things like "Flights to Shenzhen are really cheap right now, and you have the discretionary income to afford a trip there. Here is a possible itinerary for you based on the types of things I know you are interested in. You could leave this Saturday and there is nothing on your calendar that you need to be at for the week."
"I noticed that you have been bicycling a lot lately, and based on the patterns of where you go, I think that the following bike trail would be interesting to you. The route is loaded up on your phone already."
The other thing: Google is an advertising company. Yes, because I know this, I am able to take this into account when listening to Google's suggestions. But here's the thing: I like being [well] advertised to. I have discretionary income; that is WHY I HAVE A JOB. I am going to spend that money on things. If there is an AI that is helping me find the perfect nexus of things I want and things I can afford, that is a GOOD thing. That is helping me more efficiently spend the money I earned.
Yes, this stuff is subtle. Yes, this stuff is pervasive. No, we don't need yet another "2edgyforme" "if you aren't the customer, you're the PRODUCT" article about Google.
It's clear Google wants to "own the home" and all their products were built to further this goal (rather than be useful themselves). This is why Google bought Nest for 12 jillion dollars. And it's why the iWatch failed and Google Glass failed - right now, these are niche products that barely have purpose.
Now this stuff may become integral to our lives, as depicted in so many sci-fi stories, but if they become embedded in our lives and are wholly owned by one huge company, that should be terrifying to everyone.
Here are some real world reasons why: a virus is installed on your Google box through your wifi - now house robbers know everything about your schedule and habits. Your parent goes through your every personal action to make sure you aren't getting in trouble. A spouse uses the system to track your every movement and make sure you aren't cheating. And of course, the gov't has access to all of this data by default. Imagine being a famous celebrity with every action in your house known and accessible to any gov't peon with access and a bit of curiosity. This isn't some conspiracy theory, this is exactly the access Snowden had (and he was a contractor).
It isn't what these products are; it's the direction they represent: complete surveillance of every personal action, stored and owned by one monolithic corporation and the government. And not only is this where we are heading, it's Google's clearly stated objective.
It reminds me of the '50s when plastics were going to revolutionize everything... which they did, but we burned a hole in the ozone layer before realizing the consequences of slapping new technology across the world. Especially when the benefits are so minimal and the threats are so real: imagine McCarthy with the type of access and control these devices would provide if Google succeeds in pushing this across 80% of homes.
Any time you do something big and disruptive (Kindle, AWS), there will be critics. And there will be at least two kinds of critics: well-meaning critics who genuinely misunderstand what you are doing or genuinely have a different opinion, and self-interested critics who have a vested interest in not liking what you are doing, and they will have reason to misunderstand. And you have to be willing to ignore both types of critics. You listen to them, because you are always testing: is it possible they are right?
I have very conflicted feelings about this article.
Guest at house party: "Ok google, show naked pictures of [host's ex-girlfriend]"
I'm worried about who Google wants to sell this information to and what they want to do with it. I'm worried about Google working with intelligence agencies to try and target me politically, feed me propaganda, or put me on some list of undesirables.
We can have an ultra-smart AI that does everything for me without worrying about these things. I don't want to pay with my personal information, I want to pay with money. I want Google to stay out of my life.
Oppression is not a theoretical idea and not only a historical problem: government mass murder in the Philippines, the oppression of Muslims in Europe, of a large religious group in Turkey, of Tatars by Russia (in Ukraine's Crimean province), of so many people in Syria, of populations in all the oppressive countries in the world. The U.S. election could result in oppression of Muslims, Latinos, blacks and others; some U.S. cities already use 'predictive policing' to identify and harass private citizens - what will happen if Muslims become an open target? And don't forget anyone who has any interaction with Muslims. Such things have been going on since the dawn of humanity and unfortunately will continue.
The idea that Google and other commercial mass surveillance will not be used for these purposes is a dangerous, irresponsible fantasy; it's lazy, head-in-the-sand thinking, akin to climate change denial: "we haven't died yet" is the only argument. These systems are not and will not be kept out of government hands: government already has broad access, as is well known (National Security Letters, NSA spying, Yahoo's recent revelation, etc.). Laws can be made at any time giving government more access, and they will in a climate of oppression. Many obtain illicit access, as we know, from the NSA to foreign criminals to antagonistic nation-states. And it assumes that the companies want to deny access; inevitably, some CEO of AllYourDataCorp will support government surveillance and be prejudiced against Muslims or immigrants or blacks. Likely, at least one already is doing it.
IMHO, while it disrupts our plans for IT and wealth, it's absurd to think otherwise.
Television was ok, too. I used to watch. All the time. TV was the glue that kept us together. Now it's the acid that tears us apart. I no longer use a television.
Google. I love your maps. Your directions. Your free storage. And my earning a living never requires me to use you, Google. Just like my TV.
I do expect Google to become something that I no longer desire. Just like the TV. And I think Google won't be able to control or predict it either. Just like TV.
It'd seem Windows 10 is setting Microsoft up for this. Google is following suit with its own hardware.
Its central task is to infer what you want and help you achieve it, but further, your AI can ask you questions too, to work out all sorts of things subtly.
I think eventually we'll think of personal information as a commodity or "raw material" and regulate its extraction and trade as such.
Moving to voice-to-text everything on Android just seems like a logical extension to advertise and sell data even more.
Whether it's ethical or legal doesn't seem to concern them at all when profits are in question.
You may not 'personally' need privacy or freedom at this point in your life, but to casually dismiss it out of hand and fail to consider its importance for a functioning democratic society is beyond reckless. It's just one of those things you don't need until you do.
And thankfully individuals aren't in a position to trade that away unless they can write a new constitution and convince everyone to get on board.
All surveillance does is compromise your society in a fundamental way, and in this case just to add to Google's bottom line and ramp up Google's creepiness factor even more. That's a bad deal.
I do hate being tracked, but I have slowly started to like the convenience of it. With all my information on Facebook, LinkedIn, Instagram, and everything else, my privacy has really gone down. If my Nexus 5X can save me time, I will sacrifice some privacy.
I want a startup that provides services like this but treats your personal location, correspondence, and behaviors like tax returns and credit card numbers. If we can achieve a good measure of safety and privacy in our messaging apps, we can do it for this sort of data.
Again, it might be neat having a computer like on Star Trek, but what if you oppose your government? What if you oppose anything, and suddenly your toaster burns down your house, locks you out, locks you in, reports your every move?
Look at Manning, look at Snowden, look at Assange. They opposed, and now they get terrorized by the government and the software they once happily used. Look at how I will be treated right here by others.
Stop this mass psychosis.
Even without the manpower/big data/processing power of a big co I'm sure we could create something that's somewhat useful.
Ads have become utterly pervasive, and avoiding using Google's AI isn't going to protect you from them. My Samsung "Smart" TV has ads for Hulu built right into the operating system (despite my being a Hulu subscriber at the time). Windows 10 is basically one big advertisement (at least the consumer edition).
If I have to have ads blasted in my face all the time, I'll take Google's AI-driven ones that at least stand a chance of being less annoying.
There is no concept of even discussing that this might be a tradeoff or a shift in what is perceived as private. There is no consideration given to how we might still do these things that people want while protecting their data. There's no consideration for how people's lives are changed in different ways by this tech.
Nope. It's either a total gain or a total loss.
And that is the real problem here. People are applying their political bad habits to what should be a reasonable and sensitive discussion about the varying levels of tradeoffs we should be willing to give and what the net good we can extract from this technology.
A great example is street view. Street view ultimately has enabled extremely detailed and powerful navigation, complete with a ton of ways to do real time traffic detection. Most people using apps that benefit from this data would say that's a net good, and in general as the tech evolves and traffic distributes more efficiently then urban environments see a similar positive effect.
Of course, the tradeoff is that I can scan a snapshot of your street and if you were there playing football with your kid, walking your dog, or publicly exposing yourself then minus your face I'm going to be able to see all that.
What makes these kinds of issues even less clear is that street view enables self-driving car technology (we need the detailed and constantly updated nav systems for them). Self-driving car technology has the potential to totally transform some neighborhoods, has massive potential for assisting disabled people, and can completely change the way we ship goods, thus conserving oil and energy resources for generations to come. But it also has the potential to be a new way for the upper and rich classes of the world to completely cut out service industries and further alienate the economic middle and lower classes.
Why is this meaningful? Because if we don't talk about them then we can't help shape them. If we understand the implications as a society and demand commensurate good from these private industries then it can be an incredible boon to our societies. If we don't, then one of these extremist sides will win and all options for a middle ground where we get benefits and have tradeoffs will be excluded.
That's a terrible outcome.
Also, how do they plan to make money besides the initial cost of the gadget? Can they push ads while driving? That would be too intrusive. Or is this supposed to be based on a monthly payment, or a tax? Google for government! Wall-E might be needed to clean up the mess after them.
I didn't know that Sting said in 1983 that his song is really a nasty song about surveillance; at least they have an anthem for promotion purposes. http://www.songfacts.com/detail.php?id=548
I really don't think that personal assistants are going to be a success. They do descriptive modelling based on what you do; there is no way to evaluate if the suggestions are any good. Without such an evaluation they can't do reinforcement learning. Also, they might suck in too much data, which would make it harder to make meaningful suggestions.
I can see the day coming where this is their primary marketing slogan.
Unless, that is, you never buy any of that junk in the first place -- because like, who needs most, if any of it, anyway? -- and keep going on with your life. Which was humming along just fine before the IoT came along, after all.
To remain competitive, people will adopt new technologies: Google Assistant/cloud, self-driving cars, CRISPR. Consider what people gain and lose with each new technology, such as the ability to drive, a bio-engineered kill switch, or control over their own hardware (Windows 10).
All new technologies can be compromised. The ability to process the extreme amounts of data we are generating is already at previously unimaginable levels. Political dissidents or those who interfere with corporate interests can be identified and silenced with false evidence (pedophilia!); media control; and personally targeted DoS of finances, cloud services, etc.
This is the ability to control the world. The corporate world is disincentivized from doing anything about it, and governments don't really get it, as evidenced by their hoarding of zero-days.
There's a war going on right now. It's terrifying, and awesome. Throw in some global climate change and our next 50 years are going to get interesting.
When the end comes I'll be that crotchety old guy who knows how to DRIVE A CAR and use a general purpose computer.
Hack the planet!
Here is what was possible in 2011: https://news.ycombinator.com/item?id=12528544 and https://www.schneier.com/blog/archives/2016/08/the_nsa_is_ho...
Ok Google, order new toilet paper -- order is routed to any ecommerce provider which outbids everyone else to fulfill the order.
Alexa, order new toilet paper -- order copies previous toilet paper order and goes to merchant with lowest advertised price that reports to have that specific product.
Hey Siri, order new toilet paper -- ?
I understand Apple and the EFF are staunchly against merging product databases involving the same user's data, but for me this is an essential feature of the Google ecosystem. I can ask for traffic and have directions appear on my phone while driving, play movies on the TV that I am looking at instead of my phone, be audibly alerted to meetings while at home or work, and turn the lights on and off in the place that I am.
I don't think they are misleading people; the mute button pretty strongly implies the duality that you can't hear and un-hear things post hoc. In addition, they don't hide the fact that you are talking to a computer at a company by obfuscating it with some quasi-futuristically named caricature. As is often the case with these articles, "always listening" is far more misleading: an embedded keyword processor is listening for keywords, and only if they match the phrase "ok google" is the audio sent to Google's servers. Otherwise it just sits there, sharing nothing.
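A minimal sketch of that gating model, with hypothetical placeholder names (a real device does keyword spotting on audio frames with a small on-device model, not string comparison):

```python
# Illustrative wake-word gate: everything stays local until the trigger
# phrase is detected; only the follow-up utterance leaves the device.
# detect_wake_word() and send_to_server are hypothetical stand-ins.

def detect_wake_word(frame: str) -> bool:
    """Stand-in for an on-device keyword spotter."""
    return frame.strip().lower() == "ok google"

def assistant_loop(frames, send_to_server):
    armed = False
    sent = []
    for frame in frames:
        if not armed:
            if detect_wake_word(frame):
                armed = True       # trigger heard: forward the next utterance
        else:
            send_to_server(frame)  # only post-trigger audio is uploaded
            sent.append(frame)
            armed = False          # one utterance per trigger
    return sent

# Only "what's the weather" is forwarded; the chatter around it never leaves.
uploaded = assistant_loop(
    ["background chatter", "ok google", "what's the weather", "more chatter"],
    send_to_server=lambda f: None,
)
```

The design point is that the always-on part never needs a network connection; the upload path is only reachable after a local match.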
Edit: I've made this point before, but your data is a currency. Spend it wisely (or even not at all). It's up to you.
But we've seen that Google is happy to turn over massive amounts of customer data to government without a warrant and without alerting customers to the practice, which makes the technology seem ominous.
First the GPS, then the microphone, then the camera, accelerometers, 3D touch sensors, etc. Gait, affect, and all sorts of factors will be able to predict criminal behavior before it happens.
Let's hope the next generation of tech giants will take customer privacy and freedom seriously and avoid the dark patterns and privacy violations of the current era.
Only now, when it's likely too late, can we actually get a glimpse of the sort of Orwellian dystopia that so many have warned about in decades past.
Or have you only comfort, and the lust for comfort, that stealthy thing that enters the house a guest, and becomes a host, and then a master? Ay, and it becomes a tamer, and with hook and scourge makes puppets of your larger desires. Though its hands are silken, its heart is of iron. It lulls you to sleep only to stand by your bed and jeer at the dignity of the flesh. It makes mock of your sound senses, and lays them in thistledown like fragile vessels. Verily the lust for comfort murders the passion of the soul, and then walks grinning in the funeral.
All data generated by a user is encrypted and stored in the cloud, with the decryption keys on the user's device. This way, the service provider (e.g. Google) can't read your data. The major advantage of this approach is that the user is in complete control of the data. The drawback is that service providers and AI systems will be starved of the data that enables targeted ads/recommendations.
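The keys-stay-on-device model can be sketched in a few lines; a one-time pad stands in for a real cipher (a deployment would use something like AES-GCM) just to keep the example dependency-free:

```python
import secrets

# Sketch of client-side encryption: the cloud stores only ciphertext,
# and the key never leaves the device, so the provider cannot read it.

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    assert len(key) == len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, key))

decrypt = encrypt  # XOR is its own inverse

note = b"dentist appointment at 3pm"
device_key = secrets.token_bytes(len(note))     # generated and kept on-device

cloud_copy = encrypt(note, device_key)          # what the provider stores
assert cloud_copy != note                       # provider sees only ciphertext
assert decrypt(cloud_copy, device_key) == note  # the device can still read it
```

The trade-off in the paragraph above falls directly out of this: any server-side recommendation logic would only ever see `cloud_copy`.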
A good middle ground might be to offer users an option: 1) give us your data, or 2) pay up and we will not collect any data.
Thinking deeply about the state of the internet, I think we have to move towards a model where users pay for services they use if privacy is a concern. As it stands, a lot of services offer free services in exchange for our data which is monetized through ads.
Trying to take some big stand against it I don't believe will work. Look at all those who took a stand against the 2nd Iraq war -- they were drowned out. And now everyone thinks the opposite. Culture/society always pushes a particular direction until there's a very big disastrous reason to think otherwise. Until something clearly really really bad happens this is the track we're on, like it or not.
Not a problem if it ran on an appliance in my home, disconnected from all the other appliances in other people's homes. That would be a truly personal Google.
That said, I agree with the OP's takeaway: people should be asking questions. I mean, people should have been asking these questions long ago, even as just search users questioning how Google manages to return such geospatially relevant results. But most people don't even stop to think about it, as that kind of thing is just taken for granted as the thing computers just do.
Maybe with Google's data and AI in the form of a physical, listening bot (I don't know many people who use OK-Google on their phones) will be the thing that clues people in. I'm mostly comfortable with Google's role in my life (though not comfortable enough to switch to Android just yet), but I'm aware of what it knows about me. If AI is to have a trusted role in our lives and society, people in general need to at least reach the awareness that the OP evinces, if not her skepticism.
(There are, of course, situations in which the actual existence or not of specific data is what matters, but I think those are less relevant to the success of something like Google Assistant than the perception of privacy -- and that perception is important, regardless of the underlying data.)
Would it be different? My gut says this article would have had a different title.
As opposed to all the other companies out there I guess?
N.B.: I don't work for Google.
> Sunshine and seawater. That's all a new, futuristic-looking greenhouse needs to produce 17,000 tonnes of tomatoes per year in the South Australian desert
> The $200 million infrastructure makes the seawater greenhouse more expensive to set up than traditional greenhouses, but the cost will pay off long-term, says Saumweber. Conventional greenhouses are more expensive to run on an annual basis because of the cost of fossil fuels, he says
From the #1 google result for "cost of a ton of tomatoes" http://ucanr.edu/blogs/blogcore/postdetail.cfm?postnum=15889:
> Processors agreed to pay growers $83 per ton in 2014, up from $70 per ton last year.
So assuming 100% profit margins (ie the tomatoes grow themselves, no human labor needs to be paid, nothing needs repairing or replacing, tomatoes deliver themselves to processing plants, etc, etc), the 17,000 tonnes produced would yield ~$1.4M annually. That's an awful (0.7%) annual return on $200M. Much less than you could get by investing the $200M in an index fund.
Which is to say there's a 0% chance it will "pay off in the long-term".
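The arithmetic above, using the commenter's own figures (processing-tomato prices; note this ignores the premium fresh greenhouse tomatoes fetch at retail):

```python
# Back-of-the-envelope return on the seawater greenhouse, taking the
# 2014 processing-tomato price and assuming 100% margins as above.
tonnes_per_year = 17_000
price_per_ton = 83            # USD, 2014 price paid to growers
capex = 200_000_000           # USD, reported infrastructure cost

revenue = tonnes_per_year * price_per_ton
annual_return = revenue / capex

print(f"revenue: ${revenue:,}/yr")       # $1,411,000/yr
print(f"return: {annual_return:.1%}")    # 0.7%
```

Whether the conclusion holds depends entirely on which price you plug in; at retail prices the picture changes by orders of magnitude.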
A seawater greenhouse is a greenhouse structure that enables the growth of crops in arid regions, using seawater and solar energy.
The technology was introduced by British inventor Charlie Paton in the early 1990s and is being developed by his UK company Seawater Greenhouse Ltd.
The technique involves pumping seawater (or allowing it to gravitate if below sea level) to an arid location and then subjecting it to two processes: first, it is used to humidify and cool the air, and second, it is evaporated by solar heating and distilled to produce fresh water. Finally, the remaining humidified air is expelled from the greenhouse and used to improve growing conditions for outdoor plants. https://en.wikipedia.org/wiki/Seawater_greenhouse
Making Namibia's desert green using seawater: https://www.newera.com.na/2016/07/08/making-namibias-desert-...
They say plants are grown in coconut husks instead of soil. Does the coconut husk act merely as a plant holder, or does it provide the nutrients that the plant would otherwise obtain from soil (with/without fertilizer)? That part is not clear to me.
Edit: "There is no need for pesticides as seawater cleans and sterilises the air..." What does this even mean? Do we normally use pesticides to clean and sterilize the air? Are the pests that we fight with pesticides eliminated when the enclosed air in a greenhouse is cleaned and sterilized in this manner?
As made clearer in other articles, they are adding nutrients to the water.
In response to delbel: As mentioned in , the farm has a 10-year contract with Coles, one of Australia's big supermarkets, and that was what enabled the farm to be funded. Consequently, you can go to Coles and buy these tomatoes for about A$7 per kg. 
The Seawater Greenhouse design is not a conventional greenhouse. It cools rather than heats a crop, it is an open design, rather than a conventional "closed" design.
The system, as designed by Charlie Paton, uses evaporative cooling in the greenhouse. Essentially they have a cardboard wall, sort of like a thick honey comb with holes through it, over which they pour seawater. The air is pulled through the wall by large relatively slow fans. When the air moves through the "honey comb" wall, the air changes direction (30 degree angle channels). Particles and insects in the air essentially get stuck in the seawater. Seawater is particularly bad for insects and other small pests as the salt clogs up the exoskeleton and the breathing channels when the water evaporates.
Even though the greenhouse was standing in an area of vegetation with a significant insect population outside, we hardly saw any insects on the inside. But you could often find quite a few insects in the seawater tanks used to hold the water for the evaporators. The stable climate created in the greenhouse and the seawater "barrier" created by the evaporators mean the pest insect pressure is low and you can easily control it with natural enemies (bio control). (There were poisonous spiders in the canopy of the crop, but they don't affect the crop and act as biocontrol. Just don't let them bite you.)
Plants grow much better in a cooler and high humidity environment. The plants don't have to put so much effort into transpiration to keep an acceptable temperature for photosynthesis. The evaporators with seawater handles both of those things.
The temperature during my visit peaked on Christmas Day at 43 C, but inside the greenhouse it was a much more acceptable 35 C. (We had our Xmas dinner just behind the evaporators, the most pleasant place in Port Augusta at that point.) The energy used for this cooling primarily comes out of the water and the surrounding air. Some energy is used for pumping water. Without the evaporative cooling the temperature in the greenhouse would have been at a level which would have killed the plants. During my visit the plants and crop grew so fast that we had to help harvesting to keep up.
The evaporators are also covered by sea salt, which is hygroscopic (absorbs water), which means that when the temperature drops at the end of the day you don't get water (dew) collecting on the plants. This is important as dew on the plants and produce allows botrytis (mold) to grow and potentially destroy the produce. This also avoids having to burn sulphur in the greenhouse to kill the mold.
Pictures my wife took during the visit: https://www.flickr.com/photos/ankertje/7044271777/in/album-7...
I would bet it was the desert climate that reduced the need for pesticides. I am willing to predict that they will eventually need to use pesticides.
Here is one example but there are many examples and projects:
https://www.youtube.com/watch?v=D4Nb-rqGfWI&spfreload=10 | Reversing Desertification With Sticks, Rocks, and Ancient Wisdom
Permaculture is more practical in the long run, doesn't require much tech.
I assume they're supplementing liquid nutrients, as they're growing hydroponically in coco husks? If so, that makes the headline seem ridiculous...
The coolest thing in the story for me is using salty air to repel pests. Though I wonder about the longevity of that solution. I've never seen a bullet-proof solution to pest control, and especially in self-contained environments, it's usually just a matter of time. Nevertheless, makes me wonder about a saltwater humidifier....
What about fertilisers? I would assume they still use conventional ones made from fossil fuels. Anybody got more information? Did I miss something in the article?
There's a lower-tech approach that's been extensively proven and in production since 1998; it produces not just vegetables but firewood/biomass, shrimp, fish, fresh water and more:
And if only they could fix their political problems (to put it mildly), this could work in Gaza as well. It could make a real difference in their future.
That's an epic way to miss the point completely.
And Coconut Husks.
It's a really nice experiment, but I don't see how it's going to pay off. You'd need to sell a hell of a lot of tomatoes to earn $200 million back.
(this is covered in the article, but the headline forgets that part)
Humanity is turning the world into farms, destroying natural forests but then engineering new types of habitats.
I wonder what the Earth would look like in 200 years.
What I am worried about is the rapid growth of the human population and its energy use. You can't have exponential growth forever. Most of the issues come from that. Farms and monocultures sound nice, but are they sustainable? Are the resources being recycled somewhere, or are we going to be plugging holes with bacteria digesting plastic and producing plant food?
Kind of like how people have to be reminded that the best way to lose weight is to diet and exercise, the real answer for intelligence is challenging yourself frequently and putting yourself in uncomfortable situations that you need to think your way through.
At the end of the article there's a beautiful definition:
"Intelligence isn't just about how many levels of math courses you've taken, how fast you can solve an algorithm, or how many vocabulary words you know that are over 6 characters. It's about being able to approach a new problem, recognize its important components, and solve it, then take that knowledge gained and put it towards solving the next, more complex problem. It's about innovation and imagination, and about being able to put that to use to make the world a better place. This is the kind of intelligence that is valuable, and this is the type of intelligence we should be striving for and encouraging."
So little of what humans do is similar to those games, so I don't see how "context" could possibly be similar unless those sorts of brain speed/recognition tasks were part of your day to day work.
I'd be curious about scenarios like:
- Practice identifying the semantic bug in a 20 line snippet of code. How effectively would practicing this help a person identify real bugs in actual code?
- Chess problems with 5 pieces on the board. How helpful would practicing these be to solving problems with 6 pieces?
- Essay writing. Suppose for a moment that a human essay could be judged accurately enough to create a 500 word essay trainer. How effective would training on it be to quickly being able to articulate one's thoughts?
Similarly, I'd be curious about a sunk cost rationality trainer, logical proof trainer, and reading comprehension trainer. These would be harder to build than the simple video games on Lumosity, but I suspect the competency obtained could be a bit more useful in real world tests of ability.
However, there are fairly few cases where specific characteristics of human intellect or rationality are measurable by longer term human performance. If your job entails long term project planning, trade-offs, etc., it's pretty hard to prepare in a meaningful way using short-term training sessions.
I would conjecture that in most cases of "activities that stimulate your brain", the key to generalization requires another skill: being able to abstract a skill you have learned from one situation, and then specialize it to another situation. And this skill needs practicing. I also conjecture that a small fraction of the population actively does this, and this could explain why the results aren't statistically significant. E.g. meditation actively encourages introspection and generalized mindfulness in any situation, whereas crosswords and Chess are played for their own sake.
The science for brain training games is not encouraging. A number of studies have found that they do not produce generalizable mental improvements.
Meanwhile, every time I turn around I run into a study of mindfulness meditation that did produce a generalizable improvement to mental abilities.
For example, I was just reading one about how meditation can reduce pain perception by 40% and that measurement was backed up by MRI imaging of reduced brain activity in pain centers.
Let's call meditation a brain game that works.
The mystery then is why is there only one game that works? Will we ever find a second?
With 15 minutes of training my mood changes, and I better understand the emotions in sitcom TV series and the intentions and emotions of the people around me. I also experience more willingness to do things. Here is the program link: http://vernetit.blogspot.com.ar/2016/10/eo-and-peo-memory-sy...
Sorry for my poor English. I am from Argentina.
The basic pitch is "performance on these games correlates with intelligence, so practice these games to become more intelligent". That's not how anything works! If you influence a metric directly, it stops being a good proxy for all the things it used to correlate with.
It's like fly-by-night companies improving retention by selling at a loss, or running up user count with expensive perks for each new client. Those numbers are important as a reflection of business health, not a cause of it.
I'm an author on one of the studies discussed, and for what it's worth, there are two factual errors in their review of my study alone.
The authors then go on to state that people seeking to improve cognitive function would be better served by exercising or going to college. Exercising is an excellent idea, but the evidence for cognitive improvement is certainly no better than for cognitive training, and is arguably worse. College is fine as well, but from a methodological perspective there's never been a randomized controlled trial showing that college improves cognitive function, and arguably all college does is select high-achievers and then further filter them with low-performers dropping out. Endorsing college (with no RCTs) over brain training (with RCTs) suggests a biased review.
Disclosure: I work at Posit Science, where we make a brain training program. My work is specifically criticized in the article.
FTA: "Overall, Simons and his colleagues conclude that the evidence [...] is inadequate."
"it's possible future research will provide new evidence that is more favourable to brain training"
The actual research here appears to find methodological shortcomings in many papers purporting a broader effect on intelligence or problem-solving ability from some popular brain training games. It does not, however, conclude that there is no effect.
It is unfortunate (though very common) that the article oversimplifies and sensationalizes. The headline "brain training exercises just make you better at brain training exercises" is far too definitive. I'm glad the "might" qualifier at least was added here on HN. Similarly, the article says "the same is not true", definitively, of the benefits of physical vs. mental exercise games. Examples abound throughout the article, and it reads as sloppy, biased, and exaggerated. Not uncommon today, but unfortunate all the same.
It's a shame many brain-training companies make exaggerated claims themselves. So, in some ways, it's fine to see some counterfire in online press. Unfortunate if that's what it comes down to, though.
First thing that baffles me about most commercial products is the frequency and duration of each training session. Let's think for a minute. On average you have these short sessions that range from 5 to 15 minutes. Even in something as evident as physical activity: how fit can you get by doing 15 minutes of low-intensity jogging, on average 3 times a week? How fair is it to say, based on that example, that jogging is "worthless"?
That said, most critiques of brain fitness are correct.
If I could summarize what we've learned so far:
- Transfer effect (i.e. gaining benefits outside of the activity you're doing) is hard to achieve. It takes a lot of time, it has to be an n-back, heavy working-memory activity (the only thing I can think of that has a chance of transferring), and it doesn't work all the time. In our experience, probably 20% of the population will never benefit from something like this. Also, transfer is limited to some executive functions, not all of them.
- Training duration and frequency. Think of a physical activity like jogging, but instead of doing 15 minutes of low-intensity jogging 3 times a week, do 35 minutes of high-intensity jogging 4 times a week for two months. You will get fit.
- Training activity. Most n-back and working-memory stuff is boring and hard for players. It's difficult to engage with a "game" like that, and the dropout rate for users is very high.
Most of the games bundled into brain-training software packages are based on stuff that has not been proven to have any transfer effect.
Our mobile app will focus mostly on working-memory and sustained-attention games (fewer, but with more complex game dynamics than current ones). And I don't think we will advertise it as a "life changer" or a way to "make you smarter". We just want to build a suite of games for people to be challenged and have fun while doing so.
I once mentioned a plan to develop a little learning app (based on spaced repetition) for her kids and asked her if she felt it was something worth paying for. She pointed out this new app called Lumosity and asked me "Why don't you make something like this? It is very interesting, and kids will actually want to use it." I sort of gave up at that point because I didn't quite believe that it was all that effective. After a little while, the topic of this article started floating around the internet and last I heard, my friend had stopped using Lumosity.
On a more cheery note, I am surprised to find no one has mentioned the book "Make It Stick" by Peter Brown et al., which would probably be a hard pill to swallow for the Lumosity fans. Someone should find a way to "appify" the principles in the book. It would be one seriously boring app, but very effective. :-)
This is supported by the fact that many of these brain training apps/games often have addictive traits, similar to other video games. This might lead to their usage being met with undue reward.
I used one of them, and the goal (unduly rewarded) was simply to come back and, essentially, play each day. It didn't matter how hard I worked.
This could be tested by measuring brain activity during brain training (compared to some control activity) and seeing if the delta positively correlates with increased cognitive function.
There are some good apps out there:
Desktop -> http://brainworkshop.sourceforge.net
Android -> https://play.google.com/store/apps/details?id=com.tyrske.dua...
And I make a pitch for my app, IQ boost for iOS -> https://itunes.apple.com/us/app/iq-boost/id286574399?mt=8
Maybe it's just because I do a different activity at a regular rate per day but I do feel brainy plus I am learning three languages.
I think "train as you play" is excellent advice. We don't learn the training. We learn the playing.
For an n-back like exercise: http://cognitivefun.net/test/22
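For the curious, the core of an n-back task is tiny. A minimal sketch of the target-detection and scoring logic (my own illustration, not the linked site's code):

```python
def nback_targets(sequence, n):
    """Indices where the current item matches the one n steps back."""
    return [i for i in range(n, len(sequence))
            if sequence[i] == sequence[i - n]]

def score(sequence, n, responses):
    """responses: set of indices where the player pressed 'match'.
    Returns (hits, false_alarms)."""
    targets = set(nback_targets(sequence, n))
    hits = len(targets & responses)
    false_alarms = len(responses - targets)
    return hits, false_alarms

# 2-back over a letter stream: matches at indices 2, 3 and 6
print(nback_targets("ABABCAC", 2))  # [2, 3, 6]
```

The "dual" variant simply runs two such streams (e.g. positions and sounds) in parallel, which is what makes it so demanding on working memory.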
The entire "I'll find a mental analogue that's easier/more fun than real exercise" industry smacks of an industry dedicated to perpetual motion. Extreme scientific discoveries aside, it's not going to happen.
I taught my kids to play Chess early on. My first-born was beating 12-year-olds when he was 6. They had to move him up the age groupings. I insisted they keep moving him up until he lost. Knowing how to deal with losing is very important.
In all cases I pulled my kids from competitive Chess after a few seasons and a good balance between winning and losing. Past a certain point, getting better at Chess requires becoming a human database engine. That, for me, is when Chess demonstrates this idea that getting better at some of these games teaches nothing useful.
BTW, I apply this to the type of programming puzzles typically used in interviews. It's pure nonsense that says nothing about how creative someone can be about solving new problems. Anyone can become a human database with enough effort. True creative intelligence is quite a different matter.
Similarly, mindfulness (awareness) and concentration (focusing) meditation techniques in general will make one better at a variety of tasks, while specialized training, let's say math tricks, will only affect specialized areas of the brain.
The training of a musician requires a lot of listening to classical music, not just trying to play "mechanically". All this has been known for ages by the Greeks and the Indians.
I hope this development is not so much a product of internal company bullshit, but now I'm very worried it is. KSP is one of the most genuinely important games out there right now. I would be surprised if much less than 100% of the next generation of spacetravel-involved scientists and engineers counted KSP as part of their journey.
Honestly, this is one of the very few games I've enjoyed in the last 10 years.
This one: https://alexmoon.github.io/ksp/ is a great example. This is the one game I've spent hours on as an adult. I wish the team the best.
THANK YOU GUYS! You've made an incredible game with what you had, and inspired a lot of people to write amazing mods on top of it.
Wow, just wow. The company must be run by absolute sociopaths.
Whoever does not immediately hire that team en masse is missing a huge opportunity.
For those who don't know about it, it's a spaceflight simulation game. You design a spacecraft from parts, assembling rocket engines, fuel tanks, thrusters, command modules, etc. into a design, and then test it and try to get it into orbit; or from there, to other planetary bodies. Multiple spacecraft if you want: you can dock and coordinate them, or build space stations or moon bases.
It's got an incredible amount of detail, modeling a whole solar system with various planets and moons and asteroids. Remember how the staff working on "No Man's Sky" made claims about how "all other video games are fake, they have a skybox, the planets and sun in our sky are really real and you can fly to them" (claims which turned out to be largely false)? Well, Kerbal Space Program actually delivers on that experience. You can rocket into space, dock in orbit with something you've put up there previously, gravity-slingshot yourself to another planet, parachute a lander down to the surface and roam around, etc.
The game has realistic space flight physics and orbital mechanics (though tuned to be very generous to players compared to real life). You can learn a lot about the basic mechanics of spaceflight just by playing it; you begin to intuitively understand delta-v, apoapsis, and when to apply thrust. If you want to dock with something then you need to plan an appropriate launch window. Maneuvering in orbit is very interesting and initially counter-intuitive (if a spacecraft is "ahead" of you in orbit, in which direction should you boost to "catch up" to it? If you boost directly toward it, you'll increase your orbital speed and thus change the shape of your orbit, taking you away from it in a different dimension!). Getting to other astral bodies is tricky and requires more planning. KSP manages to make all of this challenging but fun.
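The counter-intuitive "catch up" example can be checked with the vis-viva equation: a prograde burn raises your semi-major axis, which lengthens your orbital period, so you actually drift further behind a target ahead of you. A sketch using Earth values (the specific orbit and burn size are illustrative, not from the game):

```python
import math

MU = 3.986e14   # Earth's gravitational parameter, m^3/s^2
R = 6.778e6     # circular orbit radius (~400 km altitude), m

def period(a):
    """Orbital period from semi-major axis (Kepler's third law)."""
    return 2 * math.pi * math.sqrt(a ** 3 / MU)

v_circ = math.sqrt(MU / R)   # speed on the original circular orbit
v_new = v_circ + 10.0        # small prograde burn, +10 m/s

# vis-viva: v^2 = MU * (2/r - 1/a)  ->  solve for the new semi-major axis
a_new = 1.0 / (2.0 / R - v_new ** 2 / MU)

print(period(a_new) > period(R))  # True: the burn lengthens the period,
                                  # so you fall behind a target ahead of you
```

To actually catch up, you burn retrograde: you drop into a lower, faster orbit, come around ahead, then circularize again.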
If you'd like to learn more about it, or are even just curious what the fuss is about (the game itself, not the drama), I'd direct you to videos by Scott Manley. Here's a video of a fairly sophisticated mission starting with liftoff from the launch pad, made by another YouTuber (skip to 13:00 to see him planning orbital maneuvers like circularizing his orbit). Manley's "Interstellar Quest" mission has even more complex orbital planning (5:00).
The depth of KSP is astonishing and there's not much else out there like it. It's in the same ballpark as Minecraft in terms of the amazing creative sandbox it provides, with a world that has a ton of depth to explore. There's a wonderful atmospheric feel with the music and artwork that happens when you successfully lift off into space, going from the thrill of launch to the serenity of orbit. It's a beautiful feeling and one that isn't easily captured by recordings.
So it's sad to hear that the company and/or developers who made the game aren't carrying on. The game may not be a commercial success on the scale of Minecraft but the artistic and conceptual achievement are on par or greater.
 https://www.youtube.com/user/szyzyg  https://youtu.be/RzbDyx4Tpdc?t=10m7s  https://www.youtube.com/watch?v=FSzj_uk1fRQ
Most interesting points: "Another point: Squad has been actively censoring the official forums. Any content related to the resignation of the 8 devs was immediately removed. This was done by Squad staff, not the regular forum mods. With this in mind, it's also pretty obvious that the latest Devnote is full of shit. They don't want anyone to think that something is wrong."
And: "Currently, there are 2-3 developers left. Two of them were not held highly by their fellow devs, and the third one is RoverDude, who only works part-time."
Fantastic. I love(d) KSP :(
I think it's pretty cool.
For "laughing at ourselves" and oddities of computer languages, there is "Wat" by Gary Bernhardt: https://www.destroyallsoftware.com/talks/wat
For an opinion on the Sun to Oracle transition, there is "Fork Yeah! The Rise and Development of illumos" by Bryan M. Cantrill, Joyent. His Larry Ellison rant makes me smile: https://youtu.be/-zRN7XLCRhc?t=33m00s
Another fantastic one is Steve Jobs' 2005 commencement address at Stanford:
Bryan Cantrill's 2011(?) lightning talk on ta(1). It's fascinating, but it also shows you just how long-lived software can be.
Randall Munroe's Talk on the JoCo cruise. Because it's effing hilarious, and teaches everybody the important art of building a ball pit inside your house.
Finally, an honorable mention to three papers that don't qualify, but which I think you should read anyway.
Reflections on Trusting Trust: This is required reading for... Everybody. It describes a particularly insidious hack, and discusses its ramifications for security.
In the Beginning Was The Command Line: If you want to get into interface design, programming, or ever work with computers, this is required. It's a snapshot of the 90's, a discussion of operating systems, corporations, and society as we know it. But more importantly, it's a crash course in abstractions. Before you can contribute to the infinite stack of turtles we programmers work with, you should probably understand why it's there, and what it is.
Finally, The Lambda Papers. If you've ever wondered how abstractions work, and how they're modeled... This won't really tell you, not totally, but they'll give you something cool to think about, and give you the start of an answer.
Growing a Language by Guy Steele.
Really set me on a path of re-examining older ideas (and research papers), for applications that are much more contemporary. Absolute stunner of a talk (and the whole 70's gag was really great).
"What would be really sad is if in 40 years we were still writing code in procedures in text files" :(
"Ask HN: What are your favorite videos relevant to entrepreneurs or startups?" -> https://news.ycombinator.com/item?id=7656003
"Ask HN: Favorite talks [video] on software development?" -> https://news.ycombinator.com/item?id=8105732
The Coming Civil War over General Purpose Computing by Cory Doctorow http://boingboing.net/2012/08/23/civilwar.html
Cybersecurity as Realpolitik by Dan Geer https://www.youtube.com/watch?v=nT-TGvYOBpI http://geer.tinho.net/geer.blackhat.6viii14.txt
Discovering Python (David Beazley)
David finds himself in a dark vault, stuck for months sifting through a deliberately obfuscated pile of old code and manuals. All seems lost, but then he finds Python on a vanilla Windows box.
Fork Yeah! The Rise and Development of Illumos (Bryan Cantrill)
History of Illumos, SunOS, Solaris, the horribleness of Oracle
These are not technical, but they are entertaining.
We can argue on some of the points he makes but we can all agree that the demos are very impressive.
1) Alan Kay: Is it really "Complex"? Or did we just make it "Complicated"https://www.youtube.com/watch?v=ubaX1Smg6pY
Take note that he is not giving the talk using Windows & PowerPoint, or even Linux & OpenOffice. 100% of the software on his laptop is an original product of his group, including the productivity suite, the OS, the compilers, and the languages being compiled.
2) Bret Victor: The Future of Programming: https://www.youtube.com/watch?v=IGMiCo2Ntsc
It's a terrific window into the future of web application development.
Carmack's talk about functional programming and Haskell -- https://www.youtube.com/watch?v=1PhArSujR_A
Jack Diederich's "Stop Writing Classes" -- https://www.youtube.com/watch?v=o9pEzgHorH0
All with a good sense of humor.
It's about much more than games. To me, it's about identifying and not doing unnecessary work.
The second half of this video is a Q&A session, which I would skip.
I think it is so easy for us to discuss the impact of big data and quickly get into the weeds, but I think in this talk Norvig does an especially great job of making you truly appreciate the seismic impact that the availability of massive quantities of data can have on the way you think about problems. This is one of the first things I ever saw of him, and I've been in love ever since.
I love everything about this talk. It walks you through building a lexer from scratch in a simple and elegant way, through a very interesting use of coroutines. I appreciate the bits of humor in the talk as well.
How I met your girlfriend: https://www.youtube.com/watch?v=O5xRRF5GfQs&t=66s
"Writing A Thumb Drive From Scratch" by Travis Goodspeed - https://www.youtube.com/watch?v=D8Im0_KUEf8&nohtml5=False
Excellent talk on the hardware side of security, goes into some really cool theoretical hard disk defense stuff, incredibly insightful and introduces a hardware security tech toy so fun you'll want to go out and order it the moment you're done watching. The speaker is entertaining as all heck to boot.
"Programming and Scaling" by Alan Kay - https://www.youtube.com/watch?v=YyIQKBzIuBY&nohtml5=False
Interesting talk on the theoretical limits of code size and engineering versus tinkering. Also talks a lot about Alan Kay's philosophy of computer science which analogizes systems to biological systems, which are the systems with the largest proven scaling on the planet.
"The Mother Of All Demos" by Douglas Engelbart - https://archive.org/details/XD300-23_68HighlightsAResearchCn...
This talk is so prescient you won't believe your eyes. Given in 1968, Douglas demonstrates just about every major computing concept in use today on a modern machine, along with some that are still experimental or unevenly distributed, such as smooth remote desktop and collaborative editing.
I'd mention Bret Victor's work before (maybe Drawing Dynamic Visualizations?), but Bret cheats by writing a lot of amazing code for each of his talks, and most of the awesome comes from the code, not his (great nonetheless) ability as a speaker.
Then you have John Carmack's QuakeCon keynotes, which are just hours and hours of him talking about things that interest him in random order, and it still beats most well prepared talks because of how good he is at what he does. HN will probably like best the one where he talks about his experiments in VR, a bit before he joined Oculus (stuff like when he tried shining a laser into his eyes to project an image, against the recommendations of... well, everyone): https://www.youtube.com/watch?v=wt-iVFxgFWk
Detailed discussion of how to get the most out of your memory cache and memory bandwidth, focusing on games development. It's full of examples of how understanding both the problem and the hardware, and working in a straightforward way, can give you huge performance gains over using poorly suited abstractions. It shows how low level thinking is still important even with modern compilers. I recommend people interested in performance optimization watch it.
This was the first time I watched pg give a talk. It was the talk that brought about the biggest change in the way I think about the world and my ambitions. The talk was the beginning; reading more about pg, I came across his essays and then HN.
The title says it all. It's really a summary of several software systems with good ideas abound. I believe all the software is 80s or prior.
Edit: I also forgot to mention some psychology and math.
It's what I direct non-technical people to when they ask what the big deal about internet privacy is.
Something more recent: Martin Fowler's great introduction to NoSQL: https://youtu.be/qI_g07C_Q5I Not so technical, this is a great overview of the reasons why (and when) NoSQL is valuable. He crams a lot into a short speech, so it's one of the rare videos I've required students in my database classes to watch.
Now, really getting away from the technical, I have to recommend watching the IDEO shopping cart video: https://youtu.be/taJOV-YCieI This is the classic introduction of Design Thinking to the world, in 1999. If you're using the Lean Startup or an Agile method, but have never heard of IDEO's shopping cart, you may be able to get along fine at work, but you should be kind of embarrassed, like a physicist who's never read Newton.
I think this can really really change how we look at everyday programming tasks everywhere from the type of tooling we choose to how we approach problems.
I love his talks for a few reasons:
1. He's anti-hype 2. He's controversial 3. He's right.
Related slides: http://static.googleusercontent.com/media/research.google.co...
I especially like the part in the middle where he tells the story of how an awful GNOME applet was killing a Sun Ray server, and how he tracked down the culprit with DTrace.
not a high tech talk, or particularly technically complex, but it shows a common blindspot in a way that is both clear, enlightening and frightening.
Sussman goes over some interesting ideas on the provenance of calculations and asserts that "exact" computation is possibly not worth the cost.
"What the heck is the event loop anyway?" by Philip Roberts
How To Design A Good API and Why it Matters  The Principles of Clean Architecture  The State of the Art in Microservices by Adrian Cockcroft  "The Mess We're In" by Joe Armstrong 
Great talk about BBC micro and much more
3) Matt Adereth - Clojure/typing
History of keyboards and a custom keyboard written in Clojure
I like the 3 for their content and how each speaker presented the background and their project/hack/ideas.
Jake Appelbaum's Digital Anti-Repression Workshop is de rigueur listening too:
It's mostly about the history of HCI up to that point.
Aside from the comedic aspect (which makes the talk incredible), Mickens is a genuinely brilliant thinker and has a marvelous way with words.
Bret Victor - Inventing on Principle
Philip Roberts: What the heck is the event loop anyway? | JSConf EU 2014
InfoSec talk. Best lines from the talk:
"Basic lessons are not learned such as know thy network"
"You have to learn your network, you have to have skin in the game"
"Defense is hard, breaking stuff is easy"
"If you serve the God's of compliance you will fail"
"Compliance is not security"
"Perfect solution fallacy"
"People are falling over themselves not to change, shooting great ideas down."
"Perfect attacker fallacy, they don't exist, they are a myth!"
"Attackers are not that good because they don't need to be that good."
Speaker is Eric Conrad
It's fairly high level, but he really burrows into computer history, and it's simply fascinating to watch, helped by the fact that the speaker is extremely passionate about what he does: https://www.youtube.com/watch?v=gB1vrRFJI1Q&list=PLbBZM9aUMs...
Watching that talk brought me over to the "a picture or a few words per slide" style of presentation, rather than the "wall of bullet points" style. It also helped me move from "stop talking, change slides, start talking again", to smooth transitions while talking.
...very inspiring if you're bored with the way websites have been looking for the past few years.
 https://vimeo.com/36579366 https://www.youtube.com/watch?v=cN_DpYBzKso
It's well worth watching if you are interested in vms at all.
The simple and followable progression to more and more complex ideas blows my mind every time.
> Visualizing Algorithms A look at the use of visualization and animation to understand, explain and debug algorithms.
I like how this talk cuts through a lot of the BS in security. One of his points is that the US and other rich Western countries have a lot more to lose from a possible "cyber war" than our potential adversaries do.
Another key point is that we'll never make much progress unless we can somehow start building better systems in the first place, with fewer vulnerabilities for an adversary to exploit.
I think the second point has become a lot more widely accepted in recent years since McGraw started giving this talk. Unfortunately it sounds like a lot of government folks still haven't got the memo on point #1.
Great overview of value types, performance and how hardware that runs things still matters.
The best practical talk is of course this:
https://www.youtube.com/watch?v=asLUTiJJqdE - Robert "Uncle Bob" Martin, Clean Architecture and Design
Scott Meyers' talks are fun to watch too.
A fascinating tale about using python during the discovery phase of a trial. Very fun watch. Anything by David Beazley is great!
Anything at all by Richard Feynman:-https://www.google.co.uk/search?q=%22richard+feynman%22&tbm=...
I like it because it is the intersection of so many things. He starts slow, very intimidated by the audience, which is obviously super skeptical that the clown from That '70s Show could give them any useful information. He finds his footing with a great motivational story (albeit laden with a few cliches) about a forgotten entrepreneur and how he built some lasting value.
For me, this is a great talk. The story is extremely motivational and has some interesting bits of history & entrepreneurial genius-- but the entire experience is extremely educational. About bias, drive & success.
I liked it for what it wasn't.
The talk is about how Damien quit his job to hack on open source software. It shows his struggle and doubt while embarking on the project that finally became CouchDB. It's a passionate and human account of the process of creating something significant. I recommend every hacker watch this.
D10 conference - Steve Jobs and Bill Gates - https://www.youtube.com/watch?v=Sw8x7ASpRIY
TED talk - Bill Gates (Innovation to Zero) - https://www.youtube.com/watch?v=JaF-fq2Zn7I
How Google backs up the internet.
At the time it changed how I thought about backups/reliability.
The rest of his channel is full of his talks https://vimeo.com/channels/761265
"The Science of Insecurity" by Meredith L. Patterson and Sergey Bratus (2011)
Warning: speaker likes to use profanity (which I enjoy :) but possibly NSFW if you're not on headphones
One of the best talks about code reviews and similar things
Guy Steele's How to Think about Parallel Programming: Not! at Strange Loop 2011: https://www.infoq.com/presentations/Thinking-Parallel-Progra...
Not a technical deepdive, but entertaining.
Explains a lot of recent mass-market innovations that keep the semiconductor manufacturing industry rolling, and goes into detail about the many tricks used to ensure scaling down to the 22nm node.
Any of Jason Scott's talks given at various hacker cons are usually historically informative and always a lot of laughs (but they're decidedly not "technical").
"LoneStarRuby 2015 - My Dog Taught Me to Code by Dave Thomas" - https://www.youtube.com/watch?v=yCBUsd52a3s
"GOTO 2015 Agile is Dead Pragmatic Dave Thomas" - https://www.youtube.com/watch?v=a-BOSpxYJ9M
It completely changed the way I approach front-end development (Not that talk in particular though. I saw an earlier, similar talk on Youtube but this one has much higher quality).
It's worth joining a global-scale tech company (AWS, Google, Azure, Facebook) just to have your mind blown by some of the internal materials.
He was a co-speaker at TEDxGlasgow with me and I thought his talk was brilliant. Cyber-crime is a really interesting area.
Humour, serious technical insight and a good reminder of why being a generalist is an advantage.
A deeply thoughtful discussion of the impact of metaphors on how we think about software development.
Skip to 0:40 if you don't want to hear the MC.
He is kinda awesome in Herzog's recent 'Lo and Behold' too.
If you are in for something out of the ordinary.
For those who like computer graphics (or want to learn), this is a gold mine.
(Plan to organize and add more categories.)
It completely changed my perspective on how design shapes our world.
This guy is just too funny.
edit: +Ryan Dahl
So many lessons in a short, beautiful piece.
Not sure if it's my favorite. And the subject is more technology than "tech". But the talk that keeps haunting me is Michael Dearing's lecture from the Reid Hoffman "Blitzscaling" class at Stanford:
Heroes of Capitalism From Beyond The Grave
Dearing draws upon an obscure letter by Daniel McCallum, superintendent of the New York and Erie Railroad, written to his bosses in the 1850s. In the report, McCallum bemoans the stress and frustration of operating a railroad system spanning thousands of miles. All of the joy and magic he used to revel in whilst running a fifty mile stretch back in his home town has long since dissipated. Furthermore, the unit cost per mile seems to be exploding rather counter-intuitively!
Dearing goes on to elucidate the absolute necessity of the railroads ("the thing to know about the railroads is: they were startups once") themselves. As guarantors of civilization and progress. Beacons bringing light and reason to the dark swamps of ignorance and inhumanity. And not just in the physical transport of goods, people and ideas across the continent. But as the wealth created from that creative destruction remains the best cure for all of our other inimical maladies: poverty, injustice, disease and stagnation.
So, no pressure. But civilization depends upon you!
Links to References in the Talk:
Estimates of World GDP: From One Million BC to the Present
The Process of Creative Destruction by Joseph Schumpeter
The Visible Hand: The Managerial Revolution in American Business by Alfred D. Chandler, Jr.
Report of D. C. McCallum to the stockholders of the New York and Erie Railroad
Things As They Are In America by William Chambers
We have satellite offices for tons of major tech companies, so there are traditional tech jobs too. Earning $200k here while the cost of living is so low is phenomenal.
You can comfortably live in the city with a roommate and pay only $500-600 rent. Just outside the city, you can get a 1200 sqft apartment for $700.
Our music scene is amazing, and the local food is fantastic.
I try to convince my friends in SF to come out here and give Atlanta a look, but nobody bites. I think this city is an incredible opportunity, especially for an early stage startup that wants to focus on growth prior to investment. The talent is here, the city is amazing, and the rent isn't absurd.
The author, Farhad Manjoo, is romanticizing a bootstrapped business as "good" and (via his prosaic examples of restaurants and dog walking) dismissing the VC-backed businesses as "bad."
It should be obvious that the opposite can be true: a bootstrapped business can also be dysfunctional and a VC-backed firm can be disciplined with its money.
Bootstrapping is a great strategy, especially if you're a company, such as Mailchimp or Sendgrid, that doesn't benefit from "network effects." You acquire customers one at a time and offer a good enough value proposition for them to subscribe or pay. A lot of SaaS/enterprise companies and lifestyle businesses can grow that way.
Venture capital is really helpful when you need to deliberately grow exponentially faster than bootstrapping will allow because you're trying to build a giant footprint for the network effects. Snapchat is a good example of this. It wouldn't make sense to try to sell the Snapchat app on App store for $4.99 each so it can be cashflow positive and pay for programmer salaries. The first users of their apps were teens in high school and they can't just purchase an app like that without their parents' permission. If Snapchat charged money for the app, they wouldn't even know that teens were the leading edge of that trend. In that case, you need to wisely use vc funding to pay the bills while you grow the audience. Hopefully, Snapchat will end up profitable like Facebook instead of losing money like Twitter.
If you're a "network effects" startup that insists on bootstrapping as the only funding, you will be beaten by the competitors that are willing to live with $0 revenue for a few years while their equity financing allows them to build their user base faster.
1. Some financial understanding of how to invoice, pay taxes and write contracts - the business side
2. Manage people, setting expectations and holding people accountable, while empowering them to be successful in their job as clearly defined when hired
3. Take action to address the immediate needs of the customer while always keeping an eye on the longer-term needs - always be available and responsive to needs of the customer - email, phone, chat
4. Have a solid foundation in the technical aspects of what you are building and operating
If you have these 4 things and a product that is a good fit in an emerging market, then raising capital is probably not needed, because you have the resources and skills to make it happen. I think a 5th requirement is probably that you have enough personal capital to pay for your living expenses until the business is making enough money. Also, avoid hiring until you can pay for double the salary of the first hire... this way, if you are wrong, you have some padding, and it proves you can work through hard times. I remember thinking before our first hire that this was way too much stress and it would be so much easier when we had more people. Now at 16 employees, it's an order of magnitude harder, but I'm much more prepared than I was back then. The children analogy is a good one, I think: when you first have a child you think this is going to be hard, but they grow with you, so it's not so bad - it's even kind of fun.
It's time to retire the idea that raising money equals glory. It's not a measure of business success as much as it is a measure of founders being able to convince rich people to back them. As we know from the "XX is shutting down" stories that regularly grace HN, many if not most tales of massive fundraising success will eventually become business and investment failures. Yet the TechCrunch/Fortune/BI coverage angles, pushed relentlessly by the investment community and hired PR people, almost always emphasize the former over the latter.
It's hard to grow a business without going the conventional SV route and getting VC funding. Unless you have a revolutionary product, the bigger competition will likely stomp over you unless you have resources to grow your team and product and marketing. Or if you are comfortable with a small market share but a profitable one.
Not saying it is impossible, just hard. I know a few SaaS companies out there, like Roninapp and Reamaze, that like Mailchimp are not VC funded and are growing well, run by small but effective teams. But the question is: would startups like these benefit from funding and be in a better position with regards to growth and user base than without VC funding?
More often than not, when startups receive funding they move away from satisfying the customers to making investors happy. As the company starts to hire, get a nice office, and increase spend on things like office perks, ads, marketing, etc., it might contribute to growth, but it doesn't necessarily work well for the end user. You go from lean to bloat more often than not. I guess that depends on how you manage resources, but it isn't exactly easy with investors breathing down your neck.
I have a company that is bootstrapped, and while there are well-funded competitors out there, I'm perfectly fine with my startup running lean and being profitable, albeit growing slowly. At least I am my own boss and I answer to myself, and that in my world is 'making it'.
I'll never trust them or use them again, after that. No way no how.
1. Startup with a clear revenue model created by generating tangible value for businesses.
2. Startup without a clear revenue model that is doing something interesting and will probably be acquired if successful.
The first is a successful business like MailChimp that can grow itself from its own revenues and doesn't need funding. The second is the type of business that needs funding because they are essentially investing in building technology to sell to a larger company OR are building a large pool of users to sell to a larger company.
Mailchimp, ConvertKit, Drip (although Leadpages bought it and is actively funding its growth), curated.co (I'm not 100% sure on Curated - is that funded?), Edgar
These types of apps can actively and easily translate into $$ for businesses, so it's no wonder they can bootstrap rather than take on funds - individuals and businesses are willing to pay to make money!
I feel Mailchimp is missing the boat by focusing entirely on email and not offering a way to contact customers via:
1. In-app messages
2. SMS
3. Push notifications
It's also difficult / impossible to set up advanced automation sequences with it. For example, if Customer X does Y on your site, direct them to another branch with a different sequence.
Of course, their main target customers are small businesses, so they may not need these advanced features, but these customers would benefit tremendously from being able to, for example, text certain messages to customers instead of only being able to email them.
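The kind of branching described above can be thought of as routing customers between message sequences based on site events. Here's a minimal, purely hypothetical sketch; the event names, sequence names, and routing table are all made up and do not correspond to any real Mailchimp API:

```python
# Hypothetical sketch of event-based branching between message sequences.
# All names here are invented for illustration.

def next_sequence(event, current_sequence):
    """Route a customer to a different message sequence based on behavior."""
    branches = {
        # (event on your site, sequence the customer is in) -> new sequence
        ("visited_pricing", "onboarding"): "sales_followup",
        ("abandoned_cart", "onboarding"): "cart_recovery",
    }
    # No matching branch: the customer stays in their current sequence.
    return branches.get((event, current_sequence), current_sequence)

print(next_sequence("visited_pricing", "onboarding"))  # sales_followup
print(next_sequence("opened_email", "onboarding"))     # onboarding
```

A real implementation would hang off webhooks or site-tracking events, but the core of "if Customer X does Y, move them to a different sequence" is just a routing table like this.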
Read Blitzscaling by Reid Hoffman - https://hbr.org/2016/04/blitzscaling
Lol. This was a great read. But yeah, don't take money unless you literally cannot finance your growth. A real business builds itself.
But why does this article have this tinged negativity toward SV? Why not just highlight MailChimp's success without the jab on VCs? Clearly both VC or bootstrapping approaches can work for a company (though both approaches fail in the majority of cases and journalism is in love with survivorship bias).
I'm not in SV, but it's obviously the place important innovation has/is/will be coming from (and some crap too). Innovation and growth is needed and should be encouraged in this economy.
Just frustrating to see big journalism knock SV for no reason.
Better story: "Chimps and the Un-Silicon Valley Way to Make it as a Primate".
"There is perhaps no better example of this other way than MailChimp, a 16-year-old Atlanta-based company that makes marketing software for small businesses."
This just kind of proves the point. If you have a company with great product/market fit and lots of VC in the bank you would either reach their numbers much quicker or have higher numbers after 16 years of operation.
Most tech startups are funded with equity, not debt.
(And yes, Mailchimp is a spammer, based on the Spamhaus definition of spam. It may be legal, but it's still spam.)
Mailchimp is a company that delivers lots of unsolicited commercial email. In other words, spam. You can dress it up and call it "marketing software for small businesses," but that doesn't change the essential fact: Mailchimp is a spammer. Is it any surprise that spamming is profitable?
I've received hundreds of Mailchimp emails. Not once did I sign up for any of those lists.
Does Mailchimp make it easy to unsubscribe? Sure. But that doesn't change the fact that they are spammers, and that if you want to send spam with some semi-plausible deniability that you're a spammer, Mailchimp is probably a good choice.
Of course, this story, like nearly all "business news" stories, is very likely the work of a highly paid public relations agency. That is one more reason that the word "spam" does not appear in this story.
José de Calasanz (saint), the founder of the Piarist order, was a friend of Galileo, and had some of the teachers of his congregation study with Galileo, so that the science they learned could be taught in his schools.
Moreover, when Galileo fell into disgrace, Calasanz instructed members of his congregation to assist him. When Galileo lost his sight, Calasanz ordered the Piarist Clemente Settimi to serve as his secretary.
Isn't the choice between geocentrism and heliocentrism purely one of convenience, and we can simply pick and choose whichever reference frame is most convenient for calculation purposes? E.g. you can pick a geocentric reference frame and all the math works out (albeit you'd need to define large fictitious forces, etc., but again, they are only 'fictitious' from the perspective of a different reference frame; in the chosen frame they look quite real!).
For example, if I'm interested in things happening in daily life, I pick a reference frame at rest with respect to the ground beneath my feet. If I am figuring out satellite orbits, I use a geocentric frame. If I'm calculating a trip to Mars, I use a heliocentric frame (perhaps a rotating one).
If so, why do we still define heliocentrism to be more correct? Is the argument really an implicit throwback to Newtonian absolute space and time, which Relativity rejected?
- Even the Church agreed that the Earth is not static, but rotating.
- Nobody had observed stellar parallax; the major proof for the Copernican model was missing until the 19th century.
- The Ptolemaic model, with its epicycles, provided better predictions.
- The Copernican model is also wrong: planets orbit the center of gravity, which lies outside the Sun.
There are plenty of examples of political suppression of science in our own time. The Nazis and Communists were two extreme examples.
In our own society, religion doesn't have this kind of power any more. But there are still political pressures on researchers to be PC. I'll let you think up some examples yourself.
What?? Oh, Wikipedia fills in some crucially missing info in this hypothesis.
> However, early telescopes produced a spurious disk-like image of a star (known today as an Airy disk) that was larger for brighter stars and smaller for fainter ones. Astronomers from Galileo to Jacques Cassini mistook these spurious disks for the physical bodies of stars, and thus into the eighteenth century continued to think of magnitude in terms of the physical size of a star.
https://en.wikipedia.org/wiki/Magnitude_(astronomy)
Among the things I learned:
* The Copernican model had more epicycles than the Ptolemaic
* Galileo thought tides were caused by those same epicycles.
That is why it is important that new ideas can be discussed freely, which wasn't the case in Galileo's time.
Excerpt: "Copernicus proposed that certain oddities observed in the movements of planets through the constellations were due to the fact that Earth itself was moving. Stars show no such oddities, so Copernicus had to theorise that, rather than being just beyond the planets as astronomers had traditionally supposed, stars were so incredibly distant that Earth's motion was insignificant by comparison. But seen from Earth, stars appear as dots of certain sizes or magnitudes. The only way stars could be so incredibly distant and have such sizes was if they were all incredibly huge, every last one dwarfing the Sun. Tycho Brahe, the most prominent astronomer of the era and a favourite of the Establishment, thought this was absurd, while Peter Crüger, a leading Polish mathematician, wondered how the Copernican system could ever survive in the face of the star-size problem."
The whole metaphor of "discovery" in science is incorrect. You don't suddenly "see" the truth once you get better telescopes or a new imaging method. Everything you see is an accurate depiction of universal laws, as filtered through the distorting layers of our own internal models. Every new "discovery" in science must be slowly generated, models emerging and feuding for generations, before future scientists have enough research to look back on the past and deem some visionaries and others crackpots.
Galileo, the Church and Heliocentricity: A Rough Guide.
https://thonyc.wordpress.com/2014/05/29/galileo-the-church-a...
Galileo, Foscarini, The Catholic Church, and heliocentricity in 1615 Part 1 - the occurrences: A Rough Guide.
https://thonyc.wordpress.com/2014/08/13/galileo-foscarini-th...
And part 2:
https://thonyc.wordpress.com/2014/08/27/galileo-foscarini-th...
Acceptance, rejection and indifference to heliocentricity before 1610.
https://thonyc.wordpress.com/2012/08/16/acceptance-rejection...
There were also politics involved. Galileo insulted the pope and did some other controversial stuff. He wasn't persecuted for his scientific beliefs.
"The Church" is really an anachronism. You are speaking of the Roman church which is about only half of Christianity at the time: Western Europe.
What about the other half of the Church? Eastern Europe, Russia, Asia, North Africa, the Middle East, Greece?
How did they receive Galileo?
I've always been fascinated by this total amnesia over the Orthodox Church as if it didn't exist or was only a few percent.
There are clearly scientific ideas that we take as simple truths now that will be disproved in the future. They are clearly going to be a little more nuanced than planetary motion but it's great to see how the scientific community has and will evolve.
So, are both systems equally right and equally wrong?
- Ultimately we're talking about a change in reference frame, which is a vector subtraction. They're mathematically transformations of each other.
- Since the Sun doesn't have infinite mass, it in fact also orbits the Earth.
- Neither system is an inertial reference frame. If we assume the Earth is infinitesimal, at the very least the Sun orbits the Jupiter-Sun barycenter (which is almost outside the Sun proper). So if anything we should speak of a "J-S centric model"
- Both are useful. The geocentric model is quite useful and still used in astronomy. (Never mind tracking satellites; try understanding your coordinates in the heliocentric reference frame.)
- The "corrections" of the geocentric model are higher-order harmonics that can fit any motion; this is an early application of harmonic analysis. In fact, they're not actually corrections to the model; they are motions that naturally arise when describing circular motion wrt a point outside its axis.
- What's wrong with non-inertial reference frames anyway? Consider them "fictitious" or consider them real, we can calculate and consider the non-inertial forces.
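The first point, that a frame change is just a vector subtraction, is easy to demonstrate numerically. In this toy Python model (made-up round-number circular orbits, not real ephemerides), subtracting Earth's heliocentric position from Mars's yields a geocentric path whose apparent motion periodically reverses: the retrograde loops that epicycles were invented to describe fall out of a single subtraction:

```python
import math

# Toy circular-orbit parameters (radius in AU, period in years).
# Rough round numbers for illustration, not precise ephemerides.
EARTH = (1.00, 1.00)
MARS = (1.52, 1.88)

def heliocentric(radius, period, t):
    """Position at time t (years) on a circular heliocentric orbit."""
    theta = 2 * math.pi * t / period
    return (radius * math.cos(theta), radius * math.sin(theta))

def geocentric_mars(t):
    """Changing frames is just a vector subtraction."""
    ex, ey = heliocentric(*EARTH, t)
    mx, my = heliocentric(*MARS, t)
    return (mx - ex, my - ey)

# Mars's apparent direction as seen from Earth, sampled over ~3 years.
angles = []
for i in range(300):
    x, y = geocentric_mars(i * 0.01)
    angles.append(math.atan2(y, x))

# Angular steps, unwrapped across the -pi/+pi boundary.
deltas = []
for a, b in zip(angles, angles[1:]):
    d = b - a
    if d > math.pi:
        d -= 2 * math.pi
    elif d < -math.pi:
        d += 2 * math.pi
    deltas.append(d)

# Both prograde (positive) and retrograde (negative) steps appear.
retrograde = any(d < 0 for d in deltas) and any(d > 0 for d in deltas)
print("retrograde loops observed:", retrograde)
```

No extra physics is added anywhere: the "epicyclic" reversal is purely an artifact of describing one orbit from a point on another, which is the harmonic-analysis point above.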
Do 2 wrongs make a right?
For example, Fr. Paul Guldin, a Jesuit priest, was a mathematician and astronomer. He was interested in and supportive of the work of Johannes Kepler and provided him with a telescope when Kepler was experiencing financial difficulties[+].
First, go to the source. We have the documents from Galileo's trials, so first, read what was actually said. http://www.tc.umn.edu/~allch001/galileo/library/1616docs.htm
Here is the most relevant part:
Proposition to be assessed: (1) The sun is the center of the world and completely devoid of local motion. Assessment: All said that this proposition is foolish and absurd in philosophy, and formally heretical since it explicitly contradicts in many places the sense of Holy Scripture, according to the literal meaning of the words and according to the common interpretation and understanding of the Holy Fathers and the doctors of theology. (2) The earth is not the center of the world, nor motionless, but it moves as a whole and also with diurnal motion. Assessment: All said that this proposition receives the same judgement in philosophy and that in regard to theological truth it is at least erroneous in faith.
But Galileo was called a heretic because he contradicted the literal meaning of the Bible. "Formally heretical since it explicitly contradicts in many places the sense of Holy Scripture, according to the literal meaning of the words and according to the common interpretation and understanding of the Holy Fathers and the doctors of theology": this is your regular creationist saying that the Bible has to be interpreted literally and that you are a heretic if you don't. Problem: said creationist can throw you in jail if you don't abide by his worldview.
That's why from this time (from a bit earlier, actually) universities found it essential to gain independence from the clergy, and why science and religion diverged from each other.
Is this religious assessment coherent with others? Of course not! Religion is not much about coherency. Copernicus's heliocentrism was well accepted by the Church, as he was less confrontational, richer, and more religious.
Did Galileo act like an asshole? Maybe, though the Church did need some trolling at the time. Was his condemnation political? Most certainly! But the salient point is that while the motivation may have been political, the justification was religious. And it was unacceptable to scientists that you could justify dogmatism and rewrite science books for political motives. The refusal of this is what gave us modern science.
This is a fascinating observation, and given the information they had at the time, I can see where Locher is coming from. Given two possibilities, one involving absolutely enormous stars and one involving an Earth that circled the Sun, both extraordinary claims, and no sure way (yet) to evaluate which was true, it's human that he supported the more comfortable hypothesis, and he wasn't provably wrong given the information they had.
> That is unfortunate for science, because today the opponents of science make use of that caricature. Those who insist that the Apollo missions were faked, that vaccines are harmful, or even that the world is flat (whose voices are now loud enough for the War on Science to be a National Geographic cover story, and for the astrophysicist Neil deGrasse Tyson to address even their most bizarre claims) do not reject the scientific process per se. Rather, they wrap themselves in the mantle of Galileo, standing (supposedly) against a (supposedly) corrupted science produced by the Scientific Establishment.
The problem with this is that it conflates the public social debate with an internal scientific debate. Galileo vs. Pope Paul V is not the same debate as Galileo vs. Locher. The former is a debate driven by social needs that tries to drive opinion starting from what the pope wanted rather than observation (which is in fact a rejection of scientific process). The latter is two competing scientific hypotheses.
Likewise, picking one of the example debates (do vaccines cause autism?) there are two possible debates. The public debate is driven by social needs--mostly people trying to find meaning in the suffering caused by their child's autism, and people trying to take advantage of that need. This is absolutely a rejection of scientific process: scientific process attempts to explain phenomena, not find explanations that make people feel better. The internal scientific debate is largely not a debate, because the evidence that vaccines don't cause autism is, at this point, so overwhelming.
"Wrapping oneself in the mantle of Galileo" IS inherently an unscientific position: being pro- or anti-establishment is irrelevant to scientific process. The fact that Galileo happened to be anti-establishment at the time is irrelevant to the fact that ultimately his hypotheses were proven correct.
The real problem here is that a large part of the scientific community doesn't recognize that the social debate and the scientific debate are two different debates. Scientific evidence which is persuasive in a scientific context (studies have shown no correlation between vaccines and autism) does not persuade everyone in a public social context. Emotional approaches are also necessary (would you rather your child died of whooping cough? Or as one person with autism said, "It's painful that some people would rather have a dead child than a child like me.").
Now why do I say "appeared" there? Because the argument quoted in the text ("looking through the telescopes it appeared that epicycles existed") isn't a "scientifically" meaningful argument. As soon as we accept the relativity of motion, it's clear how meaningless the statement "it looks so from here" is.
Moreover, "scientific opposition" doesn't result in the house arrest by the church.
Both the Bible and that-other-newer-book-which-is-not-politically-correct-to-be-named have verses that reflect a false understanding of nature, and that is indisputable. It's true that there are enough people today who don't take these verses seriously, but in fact, reasonable people were already doing so, as seen in the article, at least some 400 years ago.
Good for us, because otherwise most of us would be peasants today.
I find that modern revisionism to make religion seem less villainous is fairly common nowadays. I don't know where this is coming from or why it's on social media so frequently, but it's concerning. I think splitting hairs to make the Church look good is a questionable narrative and a form of feel-good politics for certain religious people and the kind of habitual contrarians the internet is so fond of. I imagine we're witnessing a pendulum shift towards more religiosity, considering how the West has swung the other way for so long.
Regardless, it's still wrong, and the hundreds of years of fighting to secularize science and to progress past religiously acceptable models weren't fought because Locher was a bad guy, but because the Pope and the religious establishment were, regardless of the individual merits of the many monks and priests, who ultimately had to toe the party line regardless of their own findings, mathematical skills, or opinions. Blasphemy was certainly a serious charge back then. The wonderful thing about secular science is that there's no serious punishment for being wrong or going against the grain, flawed as it may be. The worst you can expect is being punched by Buzz Aldrin, and even then you really have to earn it.
The Epiphany-V was designed using a completely automated flow to translate Verilog RTL source code to a tapeout-ready GDS, demonstrating the feasibility of a 16nm silicon compiler. The amount of open source code in the chip implementation flow should be close to 100%, but we were forbidden by our EDA vendor to release the code. All non-proprietary RTL code was developed and released continuously throughout the project as part of the OH! open source hardware library. The Epiphany-V likely represents the first example of a commercial project using a transparent development model pre-tapeout.
Custom ISA extensions for deep learning, communication, and cryptography; target applications include DARPA/MTO programs, autonomous drones, and cognitive radio.
Related Report - https://www.parallella.org/wp-content/uploads/2016/10/e5_102...
I've always wanted to play with these units, but buying one doesn't make a lot of sense for me (where would I put it?). I would be super interested in making them accessible to folks.
(access until we resolve the hosting issues, wordpress completely hosed...)
Congrats to everyone at adapteva. I remember talking to a couple of researchers who were using the prototype 64 core epiphany processor who seemed excited at how it could scale. I wonder how excited they'd be about this.
64 MB on-chip memory? For 1024 cores? That's 64 K per core. That seems rather inadequate... though for some applications, it will be plenty.
The NCube and the Cell went down that road. It didn't go well. Not enough memory per CPU. As a general purpose architecture, this class of machines is very tough to program. For a special purpose application such as deep learning, though, this has real potential.
Site is currently slashdotted so I can't comment on details like how much DRAM bandwidth you might actually have.
So for example a big L2 or L3 cache will make a CPU faster, but I don't know if a parallel task is always faster on a massively parallel architecture, and if so, how can I understand why it is the case? It seems to me that massively parallel architectures are just distributing the memory throughput in a more intelligent way.
Error establishing a database connection
Can anyone provide a summary?
Basically this is a recurring theme in computing, but the whole custom massively parallel thing rarely works out.
When learning something new, I find that this group implemented $NEW_THING in a completely different way than that group did with the exact same $NEW_THING. I have a harder time understanding how the project is organized than I do grokking $NEW_THING. And when I ask "why not $THAT_THING instead?" I get blank stares, and amazement that someone solved the problem a decade ago.
Sure, I've seen a few paradigm shifts, but I don't think I've seen anything Truly New in Quite Some Time. Lots of NIH; lots of not knowing the existing landscape.
All that said, I hope we find tools that work for people. Remix however you need to for your own edification. Share your work. Let others contribute. Maybe one day we'll stumble on some Holy Grail that helps us understand sooner, be more productive, and generally make the world a better place.
But nothing's gonna leave me behind until I'm dead.
On the other hand, I think there is a bit too much fatalism in the article. Sometimes the kids are being stupid, and they need to be told so.
The vast majority of web apps could be built in 1/10th the code with server-side rendering and intercooler.js. All this client-side crap is wasted when you are trying to get text from computer A into data-store B and back again. It's the front-end equivalent of the J2EE debacle, but with better logos and more attractive people.
And people are starting to wake up. It's up to us old guys to show the way back to the original simplicity of the web, incorporating the good ideas that have shown up along the way as well as all the good ideas that have been forgotten. Yes, we'll be called dinosaurs, out of touch, and worse.
Well so what? We're 40 now. And one of the great, shocking at first, but great, things about that age is you begin to really, truly stop giving a fuck what other people think.
Besides, what else are we going to do?
 - http://intercoolerjs.org/2016/01/18/rescuing-rest.html
I speak from enviable experience: game studio owner at 17, member of original 3D graphics research community during 80's, operating system team for 3DO and original PlayStation, team or lead on 36+ entertainment software titles (games), digital artist, developer & analyst for 9 VFX heavy major release feature films, developer/owner of the neural net driven 3D Avatar Store, and currently working in machine intelligence and facial recognition.
Our profession is purposefully amateur night every single day, as practically no one does their computational homework to know the landscape of computational solutions to whatever they are trying to solve today. Calling us "computer scientists" is a painful joke. "Code monkeys" is much more accurate. The profession is building stuff, and that stuff is disposable crap 99% of the time. That does not make it any less valuable, but it does make the mental attitude of 90% of our profession quite ridiculous.
Drop the attitude, write code freely knowing that it is disposable crap, and write tons of it. You'll get lazy and before you know it, you'll have boiled down whatever logic you do into a nice compact swiss army knife.
And the best part? Because you've stepped off the hype train, you'll have more confidence and you'll land that job anyway. If they insist or require you to learn and know some new framework: so what? You're getting paid to do the same simple crap over again, just more slowly with their required doodad. Get paid. Go home and do what you enjoy. This is all a huge joke anyway.
The one thing in the programming world that is almost 100% applicable to almost every article like this ( and many other topics ) is..... it depends.
I'm fortunate in that for almost all of my career I have spanned many technologies, from embedded systems to the latest crazes on the web. Mostly what becomes redundant is language syntax and framework knowledge. If your programming career is largely centered around these, then you become redundant pretty quickly (or super valuable, when critical systems built with them need maintenance forever).
Frameworks come and go, so if you spend a lot of time creating solutions that shuffle data from a DB to a screen and then back into a DB, a majority of your programming skills will become redundant relatively quickly (maybe a half-life of 4 years?). But often when you are doing this, the real skill is translating what people want into software solutions, which is a timeless skill that has to be built over a number of projects.
If you work in highly algorithmic areas, then not a lot of your skills become redundant. Though you may find libraries evolve that solve problems that you had to painfully do from scratch. However that deep knowledge is important.
Design: the more complex a system is to engineer (beyond what a framework provides for you), the more likely you will have skills that won't go redundant. Design knowledge is semi-timeless. My books on CGI programming from the mid-nineties are next to useless, but my GoF Design Patterns book is still full of knowledge that anyone should know. OOSC by Bertrand Meyer is still full of relevant, good ideas. My books on functional programming from the 80s are great. The Actor model, which has its history in the 70s, is being appreciated by the cool kids using Elixir/Erlang.
Skills in debugging are often timeless; I'm not sure there's any technique I'd no longer use (though putting logic probes on all the data and address lines of a CPU to find that the CPU has a bug in its interrupt handling is not often needed now).
Maybe such a person would have chosen to learn ML as first programming language. If this person had then gone on to work in programming for 3 decades, and if you'd asked this person 30 years later, i.e. today, what's new in programming languages since ML, what might have been his answer?
Maybe something along the lines of: "To a good first approximation, there are three core novelties in mainstream sequential languages that are not in ML:
- Higher kinded types (Scala, Haskell).
- Monadic control of effects (Haskell).
- Affine types for unique ownership (Rust).
Could I be that somebody?
> Half of what a programmer knows will be useless in 10 years.
That premise is wrong, and the rest of the article seems to be based on it, which negates much of what is said.
Foundational knowledge does not decay. Knowing how to estimate the scalability of a given design never gets old. Knowing fundamental concurrency concepts never gets old. Knowing the fundamentals of logic programming and how backtracking works never gets old. Knowing how to set up an allocation problem as a mixed-integer program never gets old.
In short, there are many things that never get old. What does get old is the latest fad and trend. So ignore the fads and trends and learn the fundamentals.
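As a small illustration of one of those never-aging fundamentals (a hypothetical sketch, not taken from the comment above), here is how backtracking works: extend a partial solution one choice at a time, and when a choice leads to a dead end, undo it and try the next option. The classic n-queens problem makes the pattern obvious.

```python
def solve_n_queens(n):
    """Return one placement of n queens as a column index per row, or None."""

    def safe(placed, col):
        # A new queen at (row, col) must not share a column or diagonal
        # with any queen already placed.
        row = len(placed)
        return all(col != c and abs(col - c) != row - r
                   for r, c in enumerate(placed))

    def backtrack(placed):
        if len(placed) == n:
            return placed  # all rows filled: a complete solution
        for col in range(n):
            if safe(placed, col):
                result = backtrack(placed + [col])
                if result is not None:
                    return result
        return None  # dead end: caller undoes its choice and tries the next

    return backtrack([])

print(solve_n_queens(4))  # first solution found: [1, 3, 0, 2]
```

The same skeleton (choose, recurse, undo) underlies Prolog's resolution strategy, SAT solvers, and constraint-programming systems, which is exactly why the concept doesn't expire with any particular tool.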
I'm over fifty and just got back from presenting at a major conference. I've managed to stay current through 35 years of embedded systems design (hardware and software) as well as a stretch of software-only business. It's really not that hard if you understand your job is to continually be learning. I must be doing it right, because often those I'm teaching are half my age.
As an aside, I've done the management track and moved back to the technical track when I found it unfulfilling.
Great companies who have interesting projects will want to see what you've built in the past; the technology is just a tool. They will trust you to use the right tools for the job, and will respect you enough to let you pick those which you prefer.
For legacy systems, it's helpful to have some experience but it's not like you won't be able to be effective if you're good given sufficient ramp up time.
In my experience it's far better to hire the smart, motivated engineer who can actually get stuff done and has created high quality software before than someone who is an expert in a specific framework.
Also I avoid going to tech conferences about web stuff, unless it's a legitimately new technology. A new way to organize your code and conventions are not new technology, it's just some guy's opinionated way of doing things. And most of the talks are less about conveying useful information that will help you and more about the speaker's ego and vanity.
In his sig he says he likes to write about making decisions, yet there's not a word about intuition and how it builds up in you over years of that seemingly pointless going round in circles. Very little about how we improve in relationships with people (going from 0% skill, for many of us) and accomplish even more by getting others to pull in the right direction.
I have never wanted to do anything else than what I do. Differently, yes, but no farming / opening a restaurant or an art gallery. Maybe that is the real culprit?
Last thing, knowledge does not "afford increased measure of respect and compensation". Adding value, helping people, and solving their problem does. If you have this trail on your CV, maybe that long list of the technologies is less needed.
Exhibit A would be NoSQL. Little more than a rehash of the hierarchical and network (graph/pointer) databases popular in the 1960s before the ascent of relational databases, these systems enjoy increasing popularity despite few, if any, advantages over relational databases besides allowing 20-something hipster programmers to avoid learning SQL and the ins and outs of a particular relational database (like PostgreSQL), and allowing VC-backed tech companies to avoid paying senior developers who already possess that knowledge what they're actually worth.
If these new data stores were at least as reliable as the older relational databases they are supplanting, it wouldn't be so bad. But they aren't. Virtually all of them have been shown to be much less reliable and much more prone to data loss, with MongoDB, one of the trendiest, also being one of the worst.
And these systems aren't even really new. They only appear that way to young developers with no sense of history. IBM's IMS, for example, is now 50 years old, yet it has every bit as much right to the label "NoSQL" as MongoDB does; amusingly, it's even categorized as such on Wikipedia.
No, you get it. It's the people who get excited about stuff we tried and abandoned two decades ago that don't get it.
I'd suggest not building a house on sand, and learn the fundamentals of how computers and programming language works. Don't learn anything closed-source. It isn't worth the brainspace.
I got lucky to have this advice 10 years ago. I did a Master's degree in this area and my life changed: my software is high quality, evolves fast, and I sleep well every night with all the "chaos" under control. I can change languages and development processes quickly and painlessly. I think the point is: you have to understand the abstract concepts of technology rather than languages or frameworks. As an example, if you understand object orientation, new languages will appear and disappear while your abstract concept will still be completely useful and applicable. If you study software engineering, it doesn't matter whether you use Scrum or RUP; you will pick it up fast because you already have the base.
Make of that what you will.
As a 40+ programmer, who knows what becomes of those who move into management, I am seeing lots of my cohort falling back into actually making things, as a way to preserve our hard-won value.
This makes me happy. And you whippersnappers better watch yourselves ;)
More so, the ability (and desire) to learn the latest/greatest has waned. I fear that we only have the ability to jam so much new knowledge into our heads. At some point, we must discard the old to make room for the new.
I'm just afraid I'm gonna delete the ability to control my bowels.
I do agree that keeping skill levels up across a long career is difficult. Maybe these memes come up because it's a convenient excuse not to put the effort in? It's very easy to get complacent if you are smart, get things done and have a comfortable home life. We have to train to get that brass ring, to stretch the analogy ;)
Further on, the core of computer science has even better shelf life. It basically never expires.
Personally, I split two things: the stuff I need to learn right now to get my current work done, and the stuff that matters in the long term. The former can change quickly and I don't fuss about it. If it's the buzzwords and latest tech gimmicks, or just a new technology I haven't used before, I learn as much as I need to and as much as sticks naturally over the course of my work, but I don't always actively try to retain it.
The latter part however is the real "gold". Once you know the core computer sciency stuff you can always build on that later on using whatever tools and technologies.
- solving the problem at hand.
- solving it in the quickest time possible
- the solution to the problem should not introduce new problems
I think for an experienced programmer such as yourself, the knowledge that you "lose" or that "decays" doesn't actually become useless. It will serve you in making better future decisions, like the one you are describing now. Recognizing whether things are 'fads' or 'foundational ideas' is a big asset for an experienced programmer.
It's the same with doctors. Their tools are changing rapidly, but the underlying concepts stay the same. A doctor may not know about every new tool, but she understands how a heart works, and because of that she can gauge whether a new tool is just a 'fad' or will change how things are done fundamentally.
I think every career path has this in some degree or another.
This is where your thinking goes in the wrong direction. All of these jobs need basic knowledge (a developer needs it too), and all of these jobs need tools or regulations to do the work.
A doctor needs knowledge about new medicines and new instruments. A good teacher doesn't teach the same way for 40 years. ...
React or whatever are tools. And yes, most tools reinvent the wheel, and this is not development-specific; it holds for many jobs.
The parallels include the explosion of new knowledge, or at least variations on the old knowledge, that a practitioner needs to keep up with. In programming it's languages and frameworks, in medicine it's discoveries (basic science), new drugs and techniques, and aspects of the regulatory environment. In either case a few years out of school/training it becomes daunting to keep up.
I should add here a particular peeve, the proliferation of abbreviations and acronyms is way out of control. It's nearly impossible to read an article without encountering an avalanche of incomprehensible ABBRs. What's worse, the same ABBR is often used to mean entirely different things one article to the next. Cross-field usage is a naturally incongruous extension of the confusion, though at times it's humorous.
What the article doesn't emphasize is the blizzard of details to keep up with is just one part of the experience. As years in the trenches becomes decades the value of "time in grade" becomes evident. The ability to size up the demands of a complex problem, to have a clear idea of where to enter the path of its management, and calm assurance growing out of having been down the road before are all won only by virtue of real experience.
Having done what I have for 40 years, it took me only 33 years to realize I didn't know what I was doing, and that's when I got really good at it. Therein is an essential wisdom that time and effort alone confer, and can't be gained in any other way.
He seems to ignore that his experience just paid off. It is not that his knowledge of JSP is out of date and useless; it is that back in the day he learned the anti-pattern and can apply that now. Programming has the same long-term advantages as any profession. Most of what I know after 20-some years is not any specific tech, but ways of doing things: recognizing good and bad habits, patterns, etc. The specific techs come and go, but the real knowledge transcends them all and builds on itself. His three-stages graph shouldn't be logarithmic but exponential.
That's the key quote, I think. There seems to be far too much focus on 'programming knowledge' being about new frameworks, languages etc. That's just ephemera. Picking up React Native takes what, a couple of weeks? The basics are still the same, and still far more important.
It's not that frightening to me, a mid-30's developer, at this point. I find the fundamentals are more important than the fads and it's relatively easy for me at this point to separate the wheat from the chaff. Is Redux or the Elm architecture good? Yes -- it's a left-fold over a state tree; great! I want that.
Are new things coming out constantly? Yes. Some of them are incremental improvements. That's a good feature to have. It means there are a horde of passionate people constantly improving the tooling and libraries available. I wish some of my preferred languages received even a fraction of the attention that JS gets.
We live in interesting times.
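The "left-fold over a state tree" remark above can be made concrete in a few lines. This is a hypothetical sketch of the idea, not code from Redux or Elm themselves: a reducer is just the function you pass to a fold, and the current state is the fold of the entire action log over an initial state.

```python
from functools import reduce

def reducer(state, action):
    """A Redux-style reducer: (state, action) -> new state, never mutating."""
    if action["type"] == "increment":
        return {**state, "count": state["count"] + 1}
    if action["type"] == "reset":
        return {**state, "count": 0}
    return state  # unknown actions leave the state unchanged

initial_state = {"count": 0}
actions = [
    {"type": "increment"},
    {"type": "increment"},
    {"type": "reset"},
    {"type": "increment"},
]

# The current state is literally a left-fold of the action log.
state = reduce(reducer, actions, initial_state)
print(state)  # {'count': 1}
```

Seen this way, the pattern is decades-old functional programming, which is precisely the commenter's point: recognize the fold and the framework becomes easy to evaluate.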
The time I spent mastering Adobe Flex, Java's Struts framework, GWT, and the like could seem like wasted time. But in the larger scheme of things it just made me smarter. I know what worked for them and what didn't, and that helps me understand future frameworks better.
Perhaps concepts in programming change more rapidly than in other fields but technology ascends there too. For example I've heard dentists discuss new tech and methods they use as if it was a new web framework with different thinking.
Doctors need to learn about new methodologies all the time since science and technology discover new shit all the time and develop new methods to finding and fighting diseases for example. I think most people would be extremely disappointed if they visited a doctor that gave them medical advice that was 40-50 years old and wasn't updated with more modern medicine.
All technology is a means to an end. You don't have to learn the new tools to complete the job if you can have the same outcome and I think many times people are so afraid to become less relevant that they learn stuff they don't actually need.
If you really benefit from learning something, that's when you should learn it and use it.
I guess my conclusions are trivial. Many of us have amazing technical skill but our education and experience is not on par. It results in a lot of waste, of time and quality.
But the thing is, I really like that I must always be learning. I just have that kind of brain that is attracted to learning new things, so for me this career has always been a natural fit.
In the last few years I learnt Meteor.js and built an issue tracker from scratch; I made several attempts at gaming projects using C++, C#, Swift, and others; I played Microcorruption and Cryptopals and Starfighter to learn a bunch about assembly, reverse engineering, crypto and security; I learnt Go and built a compiler with it; and right now my attention is moving towards lua to do some pico8 games. I did all this on the side of my boring corporate java developer job, and for no other reason than I wanted to learn new things. (OK, maybe I had big dreams of a startup with the issue tracker, but the others were purely for fun.)
I'm probably never going to be well known for any of those things, and I really haven't built up the chops to be considered an expert in any of them. But I'm content being a dilettante. Perhaps one day I'll get exhausted of exploring new things, but until then it's just fun to just dabble in whatever takes my fancy.
To the author: Hang in there, man! You're just feeling the time crunch, now that you have kids and other responsibilities. Based on your age, it would have been the mid-1990s when your professional career started. Back then, the economy was a lot better and outsourcing hadn't yet taken over at large companies. It's a little more dog-eat-dog economically, but your brain probably still works fine. Just take a breath and keep going.
I don't think knowing how a computer actually works will ever go out of fashion. Now there's the Phalcon framework for PHP, for example, full of speedy functions written in C by smart people who actually knew what was happening beyond the stuff they typed into their .php files.
It's about how to choose a framework, and it suggests: if you're already an experienced dev who prefers OO patterns and believes in serious computer science, you should use Ember.js. If you're a designer, Angular is good for you. However, if you're young and don't have any experience, go with React, because it is easy to learn... like PHP back in the day. I'm afraid React will be the new PHP, because we will see a full generation growing up mixing logic with view, and they will follow that kind of pattern... :)
It would be interesting to see the graph of the careers stages with happiness overlaid.
i.e. Life is all about reinventing your own wheel.
State management is the more important topic and the React tool chain has some great options for addressing it.
And the same is true of programming. There are still variables, arrays, syntax errors, IDEs and so forth - the underlying algorithms don't change that much. Lawyers and accountants have to keep up to date to keep their credentials, and doctors almost always do (but not actually always - just like some programmers don't update their skills). Fads come and go in teaching as well.
I know less about plumbers, but there are few white-collar professionals where you don't have to keep on top of things throughout your career. From architects to engineers to social workers to pilots to biologists to meteorologists to managers, things change in your profession and you need to adapt. It's just that usually those changes don't have the ridiculous levels of hype and fanfare that they have in our bubble (management being an exception here as well, lest I draw the wrath of a six-sigma black belt!)
In fact, in many ways we've made things worse, because not only does the sand keep shifting, there is now way too much sand. Young people come into the field and they want to make their mark. So we are constantly going through "next big thing" phases, some big like OOP, some smaller like React, only later to realize that what seemed so very interesting was really a lot of navel-gazing and didn't really matter that much. It was just a choice, among many.
I can only hope one day some breakthrough in C.S. will get us past this "Cambrian Explosion" period and things will finally start to settle down. But I am not holding my breath. Instead I am learning Forth ;)
If by "worse" you mean more forgetful of the details of the latest fads? Sure. But definitely not less able to put together solutions (not if you've been spending your time doing that, of course).
When you're a kid and fresh into programming, everything you pick up has some magical power to do all sorts of awesome and cool stuff you've never done before. It's a grand adventure and you're just collecting all the trinkets you can on the way there. You look to other programmers to see which ones have the most potential. Whatever job comes along, you've already got the solution in your toolkit.
Over time you begin to realize that many, many problems have been solved hundreds of times. You note that there is an ecosystem around tools and frameworks, and as a developer? You are a market for lots of people who want you to use their stuff. That there's quite a bit of social signaling going on around which languages and tools people use. I'll never forget the first time I heard somebody say about another programmer "He's a nice guy, but he's just a VB programmer."
Actually he was one of the best programmers I knew at the time. He programmed in many different languages. It was just that for the work he was doing, VB was the right tool. But that's not the way it looked to the cool kids.
Know what's sad? It's sad when you look back 10 or 20 years and remember a ton of effort and pain you went through to fuck around with WhizBang 4.0 only to see it replaced by CoolStuff 0.5 -- and then you realize that CoolStuff really wasn't all that much of an improvement. And then you realize that CoolStuff is no longer cool. And then you think of the hundreds of thousands of manhours coders spent mastering all of that and comparing notes with each other. Looking down on those poor folks who never made the switch. Makes you kinda feel like an asshole.
I think you lose a lot of detail recovery ability as you get older, no doubt. I keep very little implementation detail active in my memory and only dig it back out as needed. But we are communicating on this wonderful little forum that, last I checked, was built using html tables! Yikes! And using a language that's a derivative of LISP! Yet somehow the world keeps spinning around.
I have no doubt that as you finally smarten up and focus on the important stuff that you will appear to other, perhaps younger programmers as losing it. I just don't think they know what the hell they're talking about.
Obviously you may substitute any fashionable/unfashionable language pair in the above.
Learning 30 different imperative languages (C, Pascal, Ada, and descendants) might not add as much value as learning one imperative language, one functional language, one logic programming language, one language emphasizing concurrency (Go, Erlang), and so on. In other words, learn paradigms and high-level design constructs, not syntax.
Try to stay in touch with new paradigms, instead of just new applications of them.
First, this Douglas Adams quote:
I've come up with a set of rules that describe our reactions to technologies:
- Anything that is in the world when you're born is normal and ordinary and is just a natural part of the way the world works.
- Anything that's invented between when you're fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
- Anything invented after you're thirty-five is against the natural order of things.
The second was this very insightful article on the dynamics of the evolution of sub-cultures: https://meaningness.com/metablog/geeks-mops-sociopaths
For the record, I've never been to Burning Man, nor seriously tempted to change that.
In my opinion, the only group that is on the path to actually "ruining" BM is the organization itself. From the rapid price hikes to the parking passes to general incompetence running infrastructure for such a large popup event to clashing egos, every time I talk to a current or former employee I get the sense that BM is run by children, half of whom want to run it like it's the 90s when the event was a tenth of the size it is today. The worst part has been the antagonism between the BM org and the BLM/local law enforcement. Every move the org makes seems to make life harder for attendees (although the cops seem to have stopped arresting for drugs, giving out heavy fines to recoup lawsuit costs instead, so I guess that's a plus).
The number one "ruined" point I hear about from early attendees is when it went from a volunteer effort to something with a year-round staff. Seeing a small band in a pub is different than a giant, commercialized stadium show. Neither is wrong, but nobody is served by pretending that they're the same. As most developers know, things change when you go from "I do this for the love" to "this is my job".
I don't think Burning Man was "ruined" by that transition, but I don't like how this article mocks the whole notion that something might have been lost along the way. Of course, it was written by somebody whose job is Burning Man, and as Upton Sinclair says, "It is difficult to get a man to understand something, when his salary depends on his not understanding it."
* Bands? "They used be all about the music, then they went mainstream"
* Video games? "Now they just cater to beginners; they don't care about real gamers anymore"
* Movies? "They're so focused on the special effects, they don't even bother with the story"
...and the list goes on. Name just about anything that people enjoy, and you'll find a vocal contingent complaining about how recent changes have "ruined" it.
What I did realize, though, is that in order to really get the most out of Burning Man, I'd need to really get into the culture. People get into Burning Man like they get into their churches, where there are lots of social events. Ultimately, the groups that enjoy Burning Man the most are large groups of extended friends, like those you'd find in a small church.
The hardest thing with Burning Man, IMO, is the participatory prep. Meaning, the easy part of the prep is like planning for a long camping trip. What ultimately is required for me to be more than a tourist is much more than what I was able to do as an introverted person who just doesn't like to make artwork or run games for other people.
People would also routinely leave their playa-covered and broken-down vans, RVs, and cars in my neighborhood. In the desert by my parents' house we would find numerous abandoned trailers full of trash, empty water bottles, and old clothes. You knew it was from Burning Man because the stuff was covered in playa dust.
The universal rule of groups: it's always the next guy who ruins the coolness of a group.
As to Burning Man itself, it always felt like one of those events that I was not cool enough to go to. Camping in the desert would be interesting, and probably a lot less stressful than some of the camping I did during high school.
The logistics and planning of getting everything out to the desert from the East Coast were more significant than I had imagined.
I borrowed her popup tent that was covered in Burning Man dust; it's a weird gray and alkali. It's remarkable that Burning Man actually happens.
It wouldn't surprise me if in 2026 people are talking about how terrible Burning Man has become and how it's nothing like the wonderful Burning Man of 2016.
I've come to believe that the real purpose of Burning Man is to have these kinds of meta-discussions. The actual gathering in the desert is nothing but a side effect.
the shark has definitely jumped higher since the last time I went. Hardly saw any naked people this year. There were lots more "coachella" types - doing a lot of instagramming, but the worst for me was that there really wasn't a lot of room to be by yourself out on the playa. There were always people around. In past years, I could be out there by myself if I went out far enough.
That being said, BM is still incredible and I'm not against it evolving. The spirit is still alive and well.
"The 10 Principles are at their most powerful when given to strangers."
Every wave of people (SF residents, phone phreakers, immigrants, every generation) bemoans that the next wave of people is ruining it.
It's like the old koan about a radio: if you replace the speakers, then the housing, then the circuitry, then the controls, is it the same radio?
He could have said "Allowing an uncontrolled influx of new participants ruined burning man".
And that is precisely right.
Some clubs should be exclusive.
Disappointment stems from attendees expecting a cultural phenomenon and getting a show: the expectation that Burning Man is still important, only to find it's just a show. You don't get this problem at Disneyland, because any talk of 'Disney spirit' is tongue in cheek.
Here are some more substantial questions than the sort of feel-good straw man in the article. What new ideas have come out of it in the last 15 years? How do you create culture by showing up for a weekend to a catered campground and a show? Is there any overlap at all in what attendees think is the meaning or point of Burning Man? So if you come west, just add it to the Vegas-Disneyland circuit, because that is all it is now.
Indeed there is one advantage to Vegas and Disney, the cops don't shamelessly hassle gay people.
> Susan Fiske, a former president of the Association for Psychological Science, alluded to Statcheck in an interview with Business Insider, calling it a "gotcha algorithm."
> The draft of the article said these online critics were engaging in "methodological terrorism."
If these are attitudes typical of psychology, then I cannot say I consider psychology to be a proper social science. There is a fundamental misunderstanding of how knowledge is created through the scientific process if the verification step is considered offensive or taboo. That anyone in the field of psychology would even be comfortable publicly espousing such a non-scientific worldview means that psychologists are not being properly educated in the scientific method, and they should not be in the business of producing research if they do not have a mature understanding of what "scientific" implies.
This is far less harmful than even "doorknob twisting" style explorations; it simply took published works and ran them through a process to verify their accuracy.
Unsolicited? So what! As a practiced writer, I make unsolicited judgments on language usage all the time. Do these people write entirely from their own minds, never running a spell check or grammar check of any sort before sending their material for editorial review? I strongly doubt it, because such tools make communication more accurate. A similar procedural check for math and formulas sounds quite constructive to me.
It's not bullying to point out errors; it's bullying to use the existence of errors to belittle or insult a person. I don't see that happening here. Sure, it's a little sterile or "cold" in this fashion, but I think that's for the best if such a process / tool can gain acceptance. It just spits out results and I think that's all it should do. Neat to read about.
And if you're curious how it works, as I was:
Statcheck uses regular expressions to find statistical results in APA format. When a statistical result deviates from APA format, statcheck will not find it. The APA formats that statcheck uses are: t(df) = value, p = value; F(df1,df2) = value, p = value; r(df) = value, p = value; [chi]2 (df, N = value) = value, p = value (N is optional, delta G is also included); Z = value, p = value. All regular expressions take into account that test statistics and p values may be exactly (=) or inexactly (< or >) reported. Different spacing has also been taken into account.
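Based on that description, the extraction step can be sketched as follows. This is a hypothetical illustration, not statcheck's actual code (statcheck is an R package); it handles only the t(df) = value, p = value pattern, allowing exact (=) or inexact (< or >) reporting and variable spacing, as the description specifies.

```python
import re

# Matches APA-style t-test reports such as "t(28) = 2.20, p = .036".
# Both the test statistic and the p value may be reported exactly (=)
# or inexactly (< or >), with variable spacing.
T_TEST = re.compile(
    r"t\s*\(\s*(?P<df>\d+(?:\.\d+)?)\s*\)\s*"     # degrees of freedom
    r"[=<>]\s*(?P<stat>-?\d*\.?\d+)\s*,\s*"       # test statistic
    r"p\s*(?P<comp>[=<>])\s*(?P<p>\d*\.?\d+)",    # reported p value
    re.IGNORECASE,
)

text = ("The groups differed significantly, t(28) = 2.20, p = .036, "
        "but the follow-up test did not, t(30) = 1.10, p > .05.")

for m in T_TEST.finditer(text):
    print(f"t({m.group('df')}) = {m.group('stat')}, "
          f"p {m.group('comp')} {m.group('p')}")
```

A full checker would then recompute the p value from the extracted statistic and degrees of freedom (e.g. via a t-distribution CDF) and flag any mismatch with the reported value; the regex harvesting shown here is the fragile part the parent comment is worried about.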
Science is fundamentally reputation-driven. One of, if not the primary incentive that encourages scientists to do science work is the chance of raising their prestige. Citations are one very quantifiable yardstick for this.
If positive social sanctions are a driving force for science, then it's entirely reasonable that negative sanctions should come into play too. If a well-cited paper attracts fame, then a poor paper should likewise attract shame.
Otherwise you have a positive feedback loop where once a scientist has attracted enough prestige, they are untouchable. We need negative feedback to balance that out.
I mean, it just uses basic regular expressions, so I can see it easily producing bad checks. I assume the authors take this into account.
>The literature is growing faster and faster, peer review is overstrained, and we need technology to help us out,
This is a problem in every field, not just Psychology.
I want someone to tell me the distribution (or average ratio) of papers read to papers written.
Every thesis written is supposed to add some delta to the state of the art. But there is no method for doing a diff between past and present versions of human knowledge. How do we make science less redundant and more efficient?
I dream of aggregators for everything.
But this example - someone notifying you there's a mistake in your paper, when there really is a mistake? That seems like a strong argument /for/ academic debate via social media, not /against/ it.
> There's a big, uncomfortable question of how to criticize past work, and whether online critiques of past work constitute bullying or shaming.
It's facts about your work. Learn to handle it or quit pretending to be a scientist.
> The gist of her article was that she feared too much of the criticism of past work had been ceded to social media, and that the criticism there is taking on an uncivil tone
Valid enough point. Criticism and correction can be done in a civil manner, and in an accepted forum.
Here we have numbers-checking working the same way.
I bet you this sort of feature gets built in to word processors eventually, and puts wavy red lines under the results it flags.
We've had this sort of real-time "syntax" checking in software engineering for half a generation. It seems wise for other disciplines to consider adopting it too.
It's obviously got to be discretionary, just like spell-check is discretionary in browsers.
We will get a new genre of humor, though: "statcheck fail."
1) There is no evidence that the recent giant DDOS attacks on Brian Krebs used IP Spoofing. In fact, there is every reason to believe that they did not since the generators of the packets were low powered IoT devices. There is increasingly little reason for attackers to even bother with IP spoofing given how easy it is becoming to capture giant herds of low power IoT devices. The attackers don't care if some of their herd gets taken offline due to effective attribution.
Amplification/reflection attacks will still require IP spoofing. What I'm curious about, and only time will tell, is how much IP spoofing will continue to play a part in large DDoS attacks. Why bother spoofing IPs if your botnet herd is already large enough to bring someone offline?
2) Go and download CAIDA's Spoofer application. Test it and give them bug reports. I gave them one a few weeks ago. https://www.caida.org/projects/spoofer/
Of course toward the end of it I learned that I could have done this all with iptables, but I like my way better because I got to learn a lot.
However, when I see tricks like in this talk using iptables with BPF bytecode to block SYN packet floods, I get completely humbled. I know that I know nothing.
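For anyone curious, the general shape of that iptables-with-BPF trick looks something like the following; this is only a sketch, not the actual rule from the talk. The filter expression is an illustrative placeholder, and nfbpf_compile is a small helper utility that ships with some iptables builds:

```sh
# Compile a pcap-style filter into the classic-BPF bytecode string that
# the xt_bpf iptables match expects (the filter here is just an example:
# match packets with SYN set and ACK clear).
nfbpf_compile RAW 'tcp[tcpflags] & (tcp-syn|tcp-ack) == tcp-syn'

# Attach the emitted bytecode to a rule; the quoted string must be
# replaced with the actual output of nfbpf_compile above.
iptables -A INPUT -p tcp --dport 80 \
  -m bpf --bytecode "<paste nfbpf_compile output here>" -j DROP
```

The appeal is that the kernel evaluates the BPF program very cheaply per packet, so you can drop flood traffic before it touches the TCP stack.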
... which you can conveniently buy on one of the CloudFlare-protected DDoS service websites! I know this point has been hammered to death before, but I find it curious that despite their strong advocacy of being content neutral and not removing a site unless they receive a court order, their Terms of Service explicitly allow them to shut off service based solely on their opinion:
"SECTION 11: INVESTIGATION CloudFlare reserves the right to investigate you, your business, and/or your owners, officers, directors, managers, and other principals, your sites, and the materials comprising the sites at any time. These investigations will be conducted solely for CloudFlare's benefit, and not for your benefit or that of any third party. If the investigation reveals any information, act, or omission, which in CloudFlare's sole opinion, constitutes a violation of any local, state, federal, or foreign law or regulation, this Agreement, or is otherwise deemed to harm the Service, CloudFlare may immediately shut down your access to the Service."
"Solving" the "problem" of ip spoofing is only a benefit for centralized authorities and services. The loss of privacy is also serious. People advancing this idea are advancing it to better their commercial interests rather than the interests of individuals using the 'net.
If that's indeed the endgame, it seems like a lot of work on the attacker's part to disable a company's servers for a bit, but maybe I'm missing something? The article did mention servers boiling, but that was likely hyperbole unless there's a way to physically damage servers with DDoS.
When I think DDos, I envision thousands of compromised hosts all over the internet making requests to a target to consume resources.
To be honest, attackers are not very smart. They almost always use the same old attacks, attacks that are easily stoppable, and attacks against a single IP.
100gig coherent is easily turning volumetric attacks into old hat. Ending spoofed attacks today is a tactic that's 10 years too late.
Except, why put that ridiculous meme in the middle of it? It's cringeworthy seeing an excellent technical presentation littered with such childish imagery.
(Not that I agree with this bastardization of "meme" to mean "silly image with text overlaid in capital letters", but unfortunately that is what everyone is calling these things.)
Oh also: he mentions O to insert above the current line. I use that a lot, but on my systems (going back 15 years or so I think) it has always required a pause, like vim is waiting to see if I'm typing O or some longer command. If I type O and immediately start entering text, strange things happen. This doesn't happen with o. Does anyone else experience this? Maybe it's just something weird in my own setup.
EDIT: Some more "moving fast sloppily": 1G goes to the top of the file. G goes to the bottom. Also you can not-move, but scroll the visible area so that your cursor is on the top line (zENTER), middle line (z.), or bottom line (z-). I use that a lot when I am Ctrl-Fing through a file, so I can see more context.
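A plausible explanation for that O delay (an assumption on my part, based on how terminal Vim handles key codes): terminals send arrow and function keys as escape sequences such as <Esc>OA, so when Vim sees <Esc> followed quickly by O it waits to decide whether a key-code sequence is arriving. Lowering the key-code timeout in your vimrc usually removes the pause:

```vim
" Keep the normal mapping timeout, but resolve terminal key-code
" sequences quickly so <Esc> followed immediately by O isn't
" mistaken for the start of an arrow-key escape sequence.
set timeout timeoutlen=1000
set ttimeout ttimeoutlen=50
```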
A lot of people have a tendency to think of each tab as corresponding to a single open file. This is very understandable because it closely matches the paradigm of most IDEs, but it's actually an anti-pattern in Vim. Tabs are really meant to be more like workspaces where you arrange multiple windows into a desired layout. You then have one buffer for each file that you're dealing with and view them in your windows. It's perfectly fine for multiple windows to share a single buffer, or to switch out the buffer that is being viewed in any given window. This StackOverflow answer and this blog post both go into a fair bit more detail.
If you're trying out this approach for the first time then you probably want to add `set hidden` to your configuration in order to avoid automatically closing buffers that aren't currently being viewed in a window. Coupling this approach with fzf.vim  makes managing very large numbers of files a breeze compared to using one tab per file.
 - http://stackoverflow.com/a/26710166
 - http://joshldavis.com/2014/04/05/vim-tab-madness-buffers-vs-...
 - https://github.com/junegunn/fzf.vim
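A minimal vimrc sketch for the buffers-not-tabs workflow described above (:Buffers and :Files are commands provided by the fzf.vim plugin):

```vim
" Allow modified buffers to stay loaded in the background instead of
" forcing a save/abandon choice whenever a window switches buffers.
set hidden

" Quick fuzzy pickers from fzf.vim for open buffers and project files.
nnoremap <leader>b :Buffers<CR>
nnoremap <leader>f :Files<CR>
```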
I can easily, without leaving the home row, move a few letters to the right or back with hl or f and F. But things started to get a lot more fun when I realized that Vim is a programming-editing language, and its beauty is in the commands. This leads to amazing things that I hated before; for example, deleting a few words back from my current position would simply be d3b, instead of Shift+Ctrl+Left arrow x 2 + Delete.
Overall it's been about 2 months since I started using Evil mode for Emacs and I love it. I'll stand by the saying that Emacs is a great OS, and Vim, with its modal editing, is great for editing text in it.
Lastly, modal editing really felt powerful only after I had remapped my Caps Lock key to Esc. I mean, throughout the past decade I don't think I've ever used Caps Lock for anything, so I've remapped the machines I work on to have Caps as Esc.
- For those who think that's not reasonable because the whole point of using Vim is being able to ssh into machines and use Vim there, my suggestion is to use TRAMP in Emacs with ssh or plink to get to the server and edit (you will still have the local Caps-to-Esc mapping).
TL;DR - Vim modal editing is amazing and feels less straining than other editing layouts - IMO.
Maybe there are scenarios where the busywork of text editing really is on your critical path, but even as a fluent coder who uses some verbose languages at times (my current project is C++ and IEC Structured Text, does it get any more blabby?) I still spend far more of my time looking at, and thinking about, code than I do actually typing. Any extraneous cognitive load just takes focus away from what I'm actually meant to be doing.
The author explains his rationale in the next section: it's to prevent users from living in Insert mode. Fair enough. But when making several relatively close edits, the ability to tap a few arrow keys in Insert mode is far easier and less mentally demanding than any key combination that requires the user to bounce around different modes.
This very point is actually why I moved from Vim to Emacs years ago. After mastering Vim, I realized that Vim strongly encourages you to think a little too much about exactly how to get there or do that thing in the fewest keystrokes, and that it's incompatible with muscle memory. Even years later, I still had to think too much about it. Whereas in Emacs I can just use basic character/word/line-based movements and let my muscle memory do its thing while I let my brain focus on the code itself instead of how to use the text editor.
But then I told myself, hey, why not try Emacs? So I've been using Emacs for ~6 months now and cannot help but notice that the community is much more niche and humble compared to Vim's. Just look at the sheer number of color schemes available for both editors. And I had to agree, Vim was the far superior text editor, but that wasn't enough to keep me away from Emacs, since I gave the advantage to the other things (everything else) that Emacs does better.
I tried Evil mode, and it is amazing, but something just felt wrong using it inside Emacs. I wasn't using either Emacs or Vim. I would often shuffle and mix commands; sometimes I would do :w, sometimes C-x C-s. So I decided to ditch Evil until I get more comfortable with Emacs key bindings. I came to Emacs because of Lisp (the general FP audience is much, much more based around Emacs, which makes sense) and because of its amazing tools and plugins, which I found more stable and better designed. It is weird to say this, but things just worked with Emacs: auto-completion and code navigation (ivy, ace-jump-mode) were really fast, hassle-free experiences. Disclaimer: I have never touched older versions of Emacs; I've spent my time in 24, and now 25, so many of the myths and problems Emacs became known for over time aren't, I think, there anymore.
And to sum things up, what is really weird to me is that functional programming is on the rise and every year I see it adopted more and more, but that doesn't help the Emacs audience grow. (Maybe because I am young, and I am a nerd often found in nerd communities where things like FP are praised, while in the real world they're considered a bit awkward or esoteric.) I showed up at an FP workshop a few weeks ago at a local dev/hacker community space; everybody was rocking Sublime Text/Vim, nobody used Emacs, and people were looking at me like I was an alien. Spacemacs is doing a good job at attracting people, but maybe Emacs will stay that programmer/nerd phenomenon, the all-mighty Lisp OS that people often reject or overlook in the end. And why is it so? I do not know. If somebody can sink into and learn Vim, I don't see why it should be a different story with Emacs.
Only just figured that one out by mistakenly having Caps Lock on when moving around.
And here I was undoing and redoing the entire time.
I think where vim often wins over emacs is the 110 vi modes that every IDE eventually gets. Vi is an idea that can prosper in many environments.
Emacs is kinda like smalltalk. To get much benefit from it, you have to buy into it whole hog, or not at all. I can write C# in VS with vim keybindings, go in sublime text, and then just hack on a Lua snippet in vim itself. Emacs has ways to work with all those, but that requires a new skill set that I don't need at the moment. Maybe after I graduate from college, but right now isn't the time for me.
I agree with a lot of the things in this article, but wow, I could not possibly disagree with this quote more. Ctrl+[ is WAY more uncomfortable than using the Escape key.
Granted, I have many years of vim usage that have made hitting Escape a habit, and that probably plays a big part in it, but Ctrl+[ is downright painful for me (and yes, I sat and tried it for a while in vim to see what it would be like).
I can get from the home row to Escape and back with very little effort, though I understand that is not the case for many people. Perhaps it is due to the fact that I use my ring finger to hit Escape rather than my pinky (which would be a lot more work, I think).
Thinking I should just bite the bullet. Would mean I could work on a server using Mosh + Tmux as well, which should be rather nice.
What would the current canonical guide be for getting up and running with Vim, with plugins, auto-complete, inline linting, multiple carets etc?
 Just the ST3-style of parsing out tokens in the same file
For me I set it up to not allow more than 2 steps in any given direction to remind me to use jumps instead.
In general I would agree 1 character at a time is an anti-pattern but it needs to be balanced with the cognitive load of counting how many words or deciding what is or is not a boundary when there are symbol characters.
It's worth noting that, after searching with ?, you can still move back and forth through occurrences using N and n, in the same way you would after using / for a search.
Did not know this! (although I've been on vim for ~a month) That's a great trick. Much easier than trying to count words.
Begone with your heresy!
The simplicity of vim (and pretty colors) drew me in. Plus as I learned more sed/ed, I understood vi more. That, and a slow connection from off campus really sucks. I learned that too pretty well. Well enough to hack together some vim scripts, but nothing near my Emacs level. I feel like vim mode hacking is a beast you need to be specially equipped to handle (and I can write APL in any language so it isn't the syntax).
Then Eclipse and IntelliJ came around and I only really used vim for quick one off stuff (if I didn't use printf, echo, or cat). The only time I used vim was for C/C++ or something esoteric, like KDB+/Q/K, that didn't have their own dev environment (unlike say VHDL or Matlab where I could sit in their ugly cocoons).
What is so hard about an extensible text editor? Just getting the basic hooks down for self-insert and movement without having to go to swap?
I remember when Emacs was called "Eight Megabytes and Constantly Swapping". I now see Atom routinely take up over 800 MB. And it still can't play Pong.
Now with Rust and other languages, I'm back home in Emacs, but the keystrokes do tend to bother me a little. I liked the HJKL movement keys in vim - I just hated the modality, and I think I spent more time trying to figure out "my cursor is here, but I need to move it over there - so first I need to jump on that log, shoot the rocket to knock down the flowerpot, then run quickly while the line is falling to catch it and K it to the next line" -- like some sort of weird text editing puzzle (I wonder if you could make a vim code golf puzzle set).
Emacs has these bindings that feel like finger yoga, even when I've remapped Caps, control, half my F-keys, etc. What I really need to do is remap my toes to my hands, I think.
It would have been really nice to see C-[hjkl] style movement (with maybe CS-[HJKL] as the distance mods or something). It's too late now. You can of course remap those keys, but too much of that behavior is baked into people.
Maybe one day when I'm old and gray I'll do the Knuth thing and start a new text editor, but before that I'll probably need to redo the monitor, mouse, and keyboard and that is just too much right now.
In my case, I had a manager who simply didn't like me because I'm gay; he used the performance review process, eventually put me on an action plan, and forced me to quit.
Most of the big tech companies will put you on something called a PIP, a "Performance Improvement Plan". It basically means they are preparing to fire you, but they give you an option: quit now and get some severance, or try to stay and complete the PIP while still running the risk of being fired for any reason, in which case you get no severance. That's exactly what happened to me, and I decided it wasn't worth the stress to stay and fight it, so I just quit.
It was the most demoralizing experience ever, and really showed me that these processes are in place so managers can just get rid of people they don't want or like.
Ok, so has that worked out well for Yahoo? Clearly enough time has passed by now to evaluate Yahoo's practices in hindsight and say something like: "Yeah, thanks to these great management practices we have reconquered market share / are the new exciting place where everyone wants to work / lead this revolutionary research." Instead, it ended up being owned by a phone company.
So if Yahoo is a failing company, what they did there, will be associated with failure. It seems they effectively moved the cause of promoting women in technology fields backwards. When someone will say "we should find a way to promote women more" ... "Oh, right, Yahoo was heavily into that, yeah that was ugly, the lawsuit and all...".
It is a bit like the crazy person advocating for your favorite language or framework, it's nice to have a fan, but because they are crazy, they are pushing everyone away with their behavior.
> as well as for low performers to be transitioned out.
"Transitioned out" ... there's almost a positive ring to it. "We've reached out to them, found their pain points, and helped them transition out to a new stage." Is that how everyone talks now? Or is it just me who finds it grating?
Even if they were not deliberate about it, they must have talked about how this might be perceived by employees.
As bad as this issue is, I think hiring friends/referrals/former colleagues, especially en masse, is a much bigger issue that's rarely talked about because it's not necessarily illegal.
However, I cannot overemphasize how demoralizing it is, especially when they aren't proven to be any better.
So no, I don't feel sorry for her one bit.
I almost felt sorry for them when I read articles calling it the saddest deal in history, since they could have gotten much more money earlier. Now, I feel good that they are gone. fuff .. gone ..
I need to take care of my Flickr account now.
Practices like these are going to ruin your business the same way being sexist or racist will. Either you hire the best person for the job, or you'll be beaten by companies that do.
This is buried in the very last line of the article.
I wonder if Verizon isn't trying to shed her and her departure bonus.
It does not stop with movies or music.
If DRM is deeply integrated into the web, then everything will be affected by it. Already today some publishers go to great lengths to try to deter people from copying simple text and images. It will only get worse.
So far, the openness of the web has been very beneficial to the people willing to make an effort to learn web technologies. I think that this has opened the field for many talented people. You can just inspect the page and try to learn how it is made by reverse engineering it. That will go away, and you will get an inaccessible binary blob instead.
That being said, Netflix was a big pusher for EME, as far as I know not because they wanted it, but because the studios they license from demand DRM. Yet, they seem to have lost most of their "movie studio" catalogue and are now focusing on originals.
Netflix guys, what about allowing us to see the originals even if we don't have a CDM installed? That would kill DRM/EME faster than hollow FSF & EFF victories. FSF/EFF guys, doesn't this sound like a more promising campaign to you?
This is where I get scared. What if DRM does not become a web standard? What is the alternative that companies will want to use instead?
That is for me the only reason why standardization might potentially be a good thing. Not because DRM is good, but because the alternative might be worse.
Everything in the past has been broken anyway. From CSS to AACS to HDCP. I was hoping Firefox (and perhaps Chromium, but Google would probably not be so kind as to open source that part of the code) would have the DRM code built in so that we can spoof the whole thing with simple modifications. Better than having to reverse engineer Sony malware.
That website seems about as in touch with people outside its own mindset as the back pages of NORML's website (once you get past the parts written by a PR person). It's got a rotating banner to "cancel netflix" which links to a 2013 post about how Netflix will make you use only certain browsers. That makes the site seem either disused... or "tinfoil", since I think most consumers love Netflix.
(Note: I only used NORML as an example because their site used to (and may still) open with a well-articulated argument that quickly devolves into what many people would see as weakly argued reasons for letting me get high. It's why, in any movement, you put your articulate people out front even if they're not the real drivers.)
Also, EME doesn't affect only web browsers, it also affects SmartTVs which are limited to a few DRM products.
What EME and CENC try to achieve is to add simplicity to this process and allow open source products to compete with closed source ones. Even small DRM products are moving in this direction because it's impossible for them to target all platforms. In this regard, even an open source DRM scheme could be built and compete.
DRM, EME and CENC will happen, and this only hurts open source products like Linux and Firefox. But it will happen.
I can only see this just being a colossal inconvenience for users, developers, and many many innocent applications.
Sounds like pretty much everything that's manufactured these days...
The issue that the FSF and others appear to have is with the Content Decryption Module, which is a binary blob at the moment. Standardising/opening up the CDM spec could have been done afterwards.
If the W3C were a bit sneakier they could have played a bait-and-switch game on the content providers and pushed for a standard/open-source CDM at some point. Why couldn't there be an open-source CDM?
I foresee a near future where only a few in society will be able to use the internet safely. There will be subcultures, small segregated pockets of people who refuse the big corporate alternatives on the internet.
We're already seeing this today, think about it. I'm speaking from a Swedish perspective but when piracy on the www was relatively new in the 90s you'd go to "your guy" with the CD-burners and they would give you the movie, game, software you needed.
Only a few people knew enough to keep up with the trends, the BBSes, the FTP sites, and the newsgroups. Though there were few legal problems, there were instead technical barriers to piracy.
Then we had the piracy golden age, from about 98 to 2015, or today even. When everyone and their grandmother pirated. It was so easy, and torrents made it even easier.
But now the biggest ISP in Sweden has started handing over personal information of their subscribers to foreign companies who are sending monetary demands to the customers if their IP is found on trackers. So instead of being taken to court, just pay the money right?
That's just the start, it will only get worse because corporations have all the power.
But let's look at another example less sinister than piracy. Let's look at simple tracking and web security. Even there you have to be relatively computer savvy to keep up with the new tools, Adblock is out, uBlock is in, Noscript author is under fire, alternatives are often hosted on github.
See what I mean? Safe web browsing is being restricted to a few people savvy enough, or interested enough, to keep up with that scene.
So already, today we're seeing what the future holds for the internet. Any privacy conscious, safe browsing will be pushed to minority subcultures using different platforms, tools and networks than the rest of the population.
The internet will be just another TV or Radio, with indie broadcasters fighting to remain free in a vast sea of big corporations.
We'll most definitely always have open source browsers but the question is how well these browsers will support the new DRM internet that I foresee in our futures.
So pardon my cynicism when I see no positive outcome for DRM on the web. I see instead a majority of content under DRM protection, some of it being copied by a small minority in society and spread through other smaller networks of people who refuse the mainstream web standards.
How this is achieved is just a technicality. It is inevitable because there's money in it and as long as there's money in it corporations will pour money into lobbying to change the rules in their favor.
And the only reason they can do this is because interests can congregate and technology can be abused; it seems morally and ethically questionable. You are not stealing anything; you are watching or listening to a product of our culture. You do not take anything away from anyone.
It's just a small period of 70 years before the internet when mass media and content creators could collaborate to 'manufacture trends', hits, and disproportionate wealth.
Before that artists went broke and risked everything just to get their stuff published and out to readers and viewers. Obviously this is not how it should be but the whole 'jetset star lifestyle' may not always be possible simply because you are an artist.
The problem is that now that kind of 'trend manufacturing' is much harder to pull off. But the entire industry, from studios to artists, has got thoroughly spoilt, got used to those disproportionate returns, and is now throwing all its toys out of the pram.
Artists create, but the rest of the world is also busy creating stuff. Engineers, industrial designers, scientists, programmers, everyone is creating stuff. Can anyone just be 'entitled' to extraordinary wealth just because they create? Maybe it's their cost structures, business models, and expectations that need to change.
DRM is just a tantrum backed by money; it's rent-seeking of the worst kind, and our democratic institutions and systems are so compromised by special interests that they will continue to get their way.
Without DRM, people will steal stuff without regard for the creator's survival. This was most visible in the pop music industry. Piracy was so rife that indie musicians were considered too big a risk for the labels, who turned to low-risk, low-cost 'music factory' style churning out of the same low-quality pap that the popular charts are now peppered with.
If we don't protect artists (by this I mean, musicians, game designers, visual artists, program makers etc.) from the people trying to steal from them, there will be no quality content going forward, and the only form of entertainment will come from the mega-corps trying to peddle their wares in the guise of ad-laden media.
So, in my view, a standard cross-platform secure DRM model for the web is required. If you want to consume it, you should be prepared to pay for it.
Steam says I've played 15 hours.
Send hel--more microelectronics.
It's TIS-100, but with a native multiply! and unsynchronized broadcast! and digit get/set!
Still supremely difficult.
I mailed the backtrace to support. Three minutes later, he'd mailed me back saying he'd release a patch shortly. The problem was simple, admittedly, but Nice going!
This is the distilled greatest moments of embedded development because in reality I spent today ~2 hours doing a code merge, ~2 hours of work logging and other bureaucracy, ~1 hour of helping testers and ~2 hours of reading schematics and hunting down hardware issues with an oscilloscope and spectrograph.
(Edit: This game is really quite good. I never could be bothered with assembly, but now I am entertained. I don't think it's super accessible to people who haven't at least dabbled in assembly before though, but it's certainly a good way to learn.)
I only heard about this game recently after I read the HN submission: "My Most Important Project was a ByteCode Interpreter", which led to an article about other simulators, which led to CoreWars, which led to an article about programming games, which mentioned TIS-100 :)
I think the games developed by Zachtronics are mainly puzzle games. I have never played TIS-100 before; it looks like an interesting game, though.
I have played another game from Zachtronics called SpaceChem, a puzzle game in which you play as a reactor engineer. The main task is transforming materials into chemical products. At first glance it seems very hard to construct the chemical reactions, but you learn there is a pattern.
I really would recommend the games from Zachtronics.
I hate to use memes in most cases, but...
SHUT UP AND TAKE MY MONEY!
Looks very, very compelling :)
edit: popular meaning "relating to the populace", not "widely supported"
(I'm Chinese and I love to make fun of my own people.)
Still impressive, didn't know you could make assembly programming into a game!
Wave height http://www.ndbc.noaa.gov/show_plot.php?station=42058&meas=wv...
Wind direction http://www.ndbc.noaa.gov/show_plot.php?station=42058&meas=wd...
Air temperature http://www.ndbc.noaa.gov/show_plot.php?station=42058&meas=at...
Water surface temperature http://www.ndbc.noaa.gov/show_plot.php?station=42058&meas=wt...
I just tried the data format codes from the link someone posted earlier http://www.ndbc.noaa.gov/measdes.shtml
To do it without moving parts, they use, for example, ultrasound. These devices have pairs of transducers placed 10-20 cm apart, and the device measures how long the sound takes to pass through that distance.
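The arithmetic behind that is pleasantly simple; here is a sketch (my own illustration, with made-up numbers). A pulse travelling downwind covers the gap at c + v and upwind at c - v, so the speed of sound cancels out of the difference of reciprocal flight times:

```python
def wind_speed_along_axis(d, t_downwind, t_upwind):
    """Wind component along the transducer axis from two time-of-flight
    measurements over distance d (metres, seconds).  Since
    t_down = d/(c+v) and t_up = d/(c-v), we get
    v = (d/2) * (1/t_down - 1/t_up), and the speed of sound c drops out,
    so temperature and humidity need not be known."""
    return (d / 2.0) * (1.0 / t_downwind - 1.0 / t_upwind)

# Example: 0.15 m path, speed of sound ~343 m/s, true wind 20 m/s.
d, c, v = 0.15, 343.0, 20.0
t1 = d / (c + v)   # pulse travelling with the wind
t2 = d / (c - v)   # pulse travelling against the wind
print(round(wind_speed_along_axis(d, t1, t2), 6))  # 20.0
```

Real instruments use two perpendicular transducer pairs (or three, for a 3D wind vector) and combine the axis components.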
A couple minutes of 90mph winds is one thing... hours of 90mph winds is entirely another thing.
Maybe it's a limit of the measuring device?
And now back to our regularly scheduled React component.
Big data, and in this case the relationships (graphs) between big data points, are what's needed to make great ML/AI products. By nature the only companies that will ever have access to this data in a meaningful way are going to be larger companies: Google, Amazon, Apple, etc. Because of this I worry that small upstarts may never be able to compete on these types of products in a viable way, as the requirements to build these features are so easily defensible by the larger incumbents.
I hope this is not the case but I'm getting less and less optimistic when I see articles like this.
In the retrofitting paper cited in the comments there is a process of smoothing, that is, feeding back the message or information to update the states of the graph (there is an example in the modern AI book). It doesn't seem like anything new.
Paraphrasing this to data science: "Everybody wants to have software provide them insights from data, but no one wants to learn any math."
The top two comments here illustrate this perfectly. Anyone who is serious about learning data science will read this book and will not shy away from learning math. You can also learn about data pipelines, but that's not a substitute for what's in this book.
There are also a variety of other algorithmically focused machine learning books. They are also not a substitute for this book.
Also, chapter three, about SVD, is covered at https://jeremykun.com/2016/04/18/singular-value-decompositio...
The advantage there is that you have the Python code available.
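In the same spirit, a minimal SVD example using NumPy (this is a generic illustration, not the book's or the linked post's own code): build a rank-2 matrix and recover it exactly from its top two singular triples.

```python
# Minimal SVD sketch: a rank-2 matrix equals its rank-2 truncated SVD.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 4))  # rank 2 by construction

U, s, Vt = np.linalg.svd(A, full_matrices=False)
A2 = U[:, :2] @ np.diag(s[:2]) @ Vt[:2, :]   # keep only the top two triples

print(np.allclose(A, A2))        # True: the truncation loses nothing here
print(np.allclose(s[2:], 0.0))   # True: remaining singular values are ~0
```

For a full-rank matrix the same truncation instead gives the best rank-2 approximation in the Frobenius norm (Eckart-Young), which is the idea behind using SVD for dimensionality reduction.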
The book seems interesting.
For example:
- What is the distribution of the residuals, and how does it change over time as data comes in? How Gaussian are they (or not)? Analyzing weirdness/oddities, especially around the tails.
- What kinds of features offer the most significant signal to the model, and which ones do not?
These skills are even applicable to SVM and other classification analysis.
So, just being honest, the Appendix is still rather terse and advanced for me. Does anyone have suggestions for prerequisite readings that would help get someone prepared for this text?
Notably missing are causal inference, experiment design, and many topics in statistics--causal inference being one of the primary things we'd want to do with data.
I assume multivariate calculus and linear algebra?
Some context: I did my undergraduate degree in Economics (in a pretty math intensive university), have been working in marketing for the last 2 years and want to go back to do work in something more analysis centered.
It has a lot of topics on applied math.
Some of its main topics are linear algebra, probability theory, and Markov processes.
Really, the book just touches on such topics. Usually in college each of those topics is worth a course of a semester or more. So, what the book has on such topics is much less than such a course. E.g., for linear algebra, the book gets quickly to the singular value decomposition but leaves its treatment of eigenvalues for an appendix and otherwise leaves out about 80% of a one-semester course on linear algebra. Similarly for probability and Markov processes.
Some of the topics the book has or touches on are unusual with, likely, few other sources in book form. E.g., early on the book has Gaussian distributions on finite dimensional vector spaces where the dimension is larger than is common.
So, for the topics rarely covered in book form, the book could be a good reference.
For topics such as linear algebra, a reader might get misled without an actual course in linear algebra from any of the long popular books, e.g., Halmos, Strang, Hoffman and Kunze, Nering, or more advanced books by Horn, Bellman, or others.
Usually in universities, probability and Markov processes quickly get into graduate material with a prerequisite in measure theory and, hopefully, some functional analysis, e.g., to discuss some important cases of convergence.
So, the book seems to have some good points and some less good ones. A good point is that the book is a source of a start on some topics rarely in book form. A less good point is that the book gives very brief coverage of topics otherwise usually covered in full courses from popular texts.
A student with a good math background could use the book as a reference and maybe at times get some value from the coverage of some of the topics rarely covered elsewhere. But I would suspect that students without courses in linear algebra, probability, etc. would need more background in math to find the book very useful.
E.g., early in my career, I jumped into various applied math topics using very brief treatments. Later, when I did careful study of good texts with relatively full coverage, I discovered that the brief treatments had been misleading. E.g., no one would try to learn heart surgery in a weekend and then try to apply it to a real person. Well, for applied math, maybe learning singular value decomposition, etc. in a weekend might not be enough to make a serious application.
It is good to see a book on applied math try to be a little closer to real, recent applications than has been traditional in applied math texts. I'm not sure that being closer is crucial or even very useful for making real applications, but maybe it will help.
One little suggestion, I can't see anything of your network when I visit the main page. I'm hesitant to sign up for something if I can't see it. Maybe you could add a list of users of the instance / network, public tweets, etc. Or even better, publish a link to a (public) profile, so users can get an idea what it looks like in use.
(If I were making a federated microblogging / social network site, I would center it around the profile page. I'd make it at first glance less about networking and more about presenting yourself, like the early Facebook, or MySpace. This way, a user has an incentive to sign up even if few or no friends are on the network yet. You'd be able to customize your page a lot, leave contact data, write (micro)blog posts, have a wall (guestbook), etc. And almost incidentally you'd be able to use your identity to comment on GNUSocial, use XMPP, OpenID, ....)
There once was a decentralized communication platform that promised to replace email, wikis, forums, article comment services, blogs, microblogs, you name it.
It was technically very advanced: many people could work on the same document at the same time and they would see each other's changes in real time, even if they were working from different federated servers. The history of all changes was accessible as well, in a very user-friendly way (play button and time slider.)
The GUI was not as fast as it could be, but it was cross-platform, user-friendly, and it worked.
I'm talking about Google Wave. It was federated, it was very advanced, and it failed spectacularly. Keep that in mind as you make further predictions.
Disruption means doing something that your competitors are effectively incapable of doing.
- Strange decision to have several vertical feeds - doesn't seem very convenient. Can there be one stream with several tabs or something like that?
- It would be cool if there were a Reddit-like discovery system, so that you could browse tags and then sort the posts in a tag by hot/new/top.
- What are your plans for this platform? Is it just a fun side project? Are you planning to compete with twitter?
- Are you planning to monetize?
My entire family is currently using Path as a small social network and share a lot there. I'd love to move us to something we can control and where the data (esp. photos) aren't locked away in a proprietary system. Does this (or Gnu Social) support protected accounts, where permission must be given for people to see and follow accounts?
Also if you have any questions feel free to ask!
1. Is this supposed to be run by me (an individual), or run for me by somebody else (some existing community)?
2. Is there any information on what makes this decentralized? How does it "discover" other users?
Kudos to the Mastodon team!
https://mastodon.social/users/mnx/updates/11273 (no proper breaking of lines)
https://mastodon.social/users/mnx/updates/11268 (might want to add overflow:hidden)
When clicking the follow button with no address typed, I get a 404 notification after a couple seconds.
I also got a 422, but I can't reproduce it now.
Maybe it can be simplified, like Twitter's URLs.
Don't you think the email requirement should be optional, at least at this early stage of development?
Salesforce: in exchange for ~50% of market cap would be adding $2B of Twitter revenue growing at ~5% to their $6B of revenue growing at 25%+
Disney: for 16% of their market cap Twitter's revenue is half of what Disney's top-line has been growing each year - and again not much growth
Can't see a case where either would unlock user or revenue growth
Not sure what is next for Twitter, but it looks like another year of plodding along trying different things.
I don't know enough about Salesforce beyond a few of its products to imagine scenarios where Twitter has comparably low-friction integrations. Unless Disney makes a concerted effort to give Twitter autonomy...can't see how they could leverage it into something that's not just a loss-leader for entertainment. Maybe additional branding opportunities for ESPN and athletes, as well as another way to broadcast events?
edit: Any reason why Amazon isn't in the mix? Other than the obvious reason that they haven't made noise about it? Would an acquisition of Twitter's size be too unwieldy for Amazon?
In any event, based on their current projected growth rate (27.51%), full year 2015 revenue ($2,218,032,000), and 2015 profit margin (25.15%) (which is likely BS, as they seem to be losing money), it seems they are worth about $8 or $12 billion.
One way to determine valuation is to divide growth rate by 10, add 1, then multiply in yearly revenue (27.51 / 10 + 1 = 3.751; 3.751 * $2,218,032,000 = $8,319,838,032). Or, as seems to be the trend recently, one could divide growth rate by 10, double it, then multiply in yearly revenue (27.51 / 10 * 2 = 5.502; 5.502 * $2,218,032,000 = $12,203,612,064).
These estimates usually assume 30% profit margin, so one can factor that out (as Twitter's seems to be ~25%) by applying the scaled multiple to "profit" (27.51 / 10 + 1 = 3.751; 3.751 / .3 = 12.5; 12.5 * $557,807,000 = $6,972,587,500) (27.51 / 10 * 2 = 5.502; 5.502 / .3 = 18.34; 18.34 * $557,807,000 = $10,230,180,380). One would then add in Twitter's estimated net assets (~$4.556 billion) to get a final estimated worth.
That is, Twitter is worth either $11.53 billion, or $14.79 billion (depending on how you look at it). As their market cap is $17.25 billion, and the media is suggesting offers are in the range of $25-30 billion, it's likely any acquirer will pay much much more than Twitter is worth, or all potential bidders will bow out, as Twitter's asking price is too high.
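The arithmetic above is easy to check. A short script reproducing both rules of thumb, using the same figures quoted in the comment (the input numbers are the commenter's estimates, not audited financials):

```python
# Back-of-the-envelope valuation: growth/10 as a revenue multiple,
# then a margin-adjusted variant applied to profit plus net assets.
growth = 27.51            # projected growth rate, percent
revenue = 2_218_032_000   # full-year 2015 revenue, $
profit = 557_807_000      # revenue at the ~25.15% margin, $
net_assets = 4.556e9      # estimated net assets, $

conservative = (growth / 10 + 1) * revenue   # ~= 8.32e9
aggressive   = (growth / 10 * 2) * revenue   # ~= 12.20e9

# Factor out the assumed 30% margin and apply the multiple to profit,
# then add back net assets.
low  = (growth / 10 + 1) / 0.30 * profit + net_assets
high = (growth / 10 * 2) / 0.30 * profit + net_assets
print(round(low / 1e9, 2), round(high / 1e9, 2))   # ~= 11.53 and 14.79
```

Both final figures match the $11.53B / $14.79B range the comment arrives at.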
Also, no matter how you look at it, Twitter's employee count seems to be ~2x what it should be. So either there'll be a large number of layoffs, or they'll have to stop hiring completely and "ride the wave" until their revenue/potential increases to be in line with their employee count.
Even just looking at what Twitter does, it's hard to see why 4,000 employees are needed. WTF are they doing? It's just another sign they are likely being thoroughly mismanaged. This is further supported by the abysmal senior management rating (3.2, versus a target of 4.0 or 4.2) they received on Glassdoor.
Tomorrow is going to be an interesting day for TWTR.
1. give up on this acquisition thing for now
2. fire half your staff - what do all those people do?
3. you're a utility - charge for it. One penny per tweet would equal $2 billion in annual revenue
4. after #3, there's no reason not to invite 3rd party developers back to the party
Additional bonus for Saudi Arabia: They could make it an even better ISIS recruitment tool than it already is.
1. They realized just owning the software platform is very limiting (for example, they can't compete with Apple no matter how good they are. Just look at Google Maps: a thousand times better than the default Apple Maps, but it will never be the #1 map app on iOS because people always choose the default). Also, now even Google is moving towards owning its own hardware platform. As a VC-funded company, I think this is the best time to exit; otherwise it's all downhill from here.
2. Samsung wants to fight with Google/Apple/Amazon/MS and needed a weapon (because, you know, people say AI is the future nowadays).
So at best these guys will be integrated into Samsung phones to differentiate Samsung's hardware. I don't see them becoming the "open platform" that they envisioned at all.
Given this, wouldn't it make sense for AI/ML people to form a guild and/or union and capture this value themselves, rather than let smooth-talking founders take all the cash?
It's a technical challenge now, but in a few years making a good AI will be high art. By a few years I could mean 3 years or 23 years. There's the possibility that dressing an AI assistant up in a personality that asks 'how are you feeling?' could lead us into an uncanny valley that takes a long time to cross.
Microsoft (Cortana), Amazon (Echo), Apple (Siri), Google (Assistant/Now), and now Samsung with Viv. The possibilities are just endless; the space race netted us a lot of innovation, and this should do the same. Competition leads to innovation.
Making that software seems like a necessity to me (the "operating system" of the future) and I'm not sure how it fits in with the current trajectory. Thoughts?
Pixel's new assistant will not be available on other Android devices with Nougat, so it makes sense that Samsung will have to bundle their own into their devices.
Excited that it's a more extensible digital assistant, though. I think the reason I like Alexa the most out of the services right now is that it feels like the easiest to add new functionality to. Hopefully this will pave the way for more open AI assistants.
In addition to founders, Viv got a good number of early Siri engineers. From this and other hints I gather its internals are similar to first few versions of Siri. For example, they outsource the Speech component (something Apple still does for several languages). A consequence of that design is separating speech recognition from NLP, which makes it inherently more error-prone than Google Now.
Kudos to these guys, though; I never followed their story after Apple took them on, and just assumed they would still be there. Selling something, then going back to doing the same thing, and doing it better... I find that admirable.
Also, despite my opinion of Siri being "doesn't even work, not even close", I do look forward to more and better developments in that direction. Although a part of me wonders if we won't just be wiring ourselves directly into our devices by the time this sort of tech becomes truly useful. :)
Still, Samsung is large enough to build and maintain their own stack.
Still. As an iOS user I'm very jealous of what exists on other platforms now.
Unless you have extensive experience in this area, perhaps you shouldn't be so quick to judge "oh they are just spying on their users".
The simple answer to this question is that if a way is not given for businesses to decrypt their own traffic that they generated and encrypted, they simply won't encrypt it.
Take this example: a regulation says that all incoming traffic into a banking-sector company must be scanned for potential vulnerabilities and exploits, and allows for "compensating controls". If the incoming traffic cannot be decrypted under TLS 1.3, it will simply be decrypted at the border of the business and routed internally unencrypted. This would be worse than copying the TLS 1.2 traffic for out-of-band scanning.
I'm not saying that this guy wasn't a little late to the party, but failing to recognise that big businesses have regulations you don't understand or even care about is a huge mistake that will make us all more insecure. After all, who doesn't have a bank account?
is probably the best response to the request.
On a related note though, it's always amazing how on one hand Big Banking tries to show that it's in touch with the latest tech developments (Bitcoin Consortiums, RFID/NFC payments) etc. but on the other hand display a very shallow understanding of how secure systems should work.
I think there could be a security gain, though, in adding support for a better way to actively MITM TLS traffic - in having a proper mechanism for filtering proxy firewalls. For some applications (say, school networks) it is OK to monitor and filter traffic, but the way it is done now is terrible for security. Usually this is done by terminating the TLS at the proxy, scanning it, and then re-encrypting it with a certificate automatically generated and trusted by an internal CA (whose root certificate is installed on the machines).
The huge problem is that now everyone has to trust all the root CAs installed on the firewall, instead of being able to decide which ones to trust themselves. The firewall has to also decide whether or not to trust self-signed certificates.
Much better would be to be able to decrypt, re-encrypt with a certificate issued by a real CA, and then also send the original certificate along with the handshake. Then, the first time you visited a TLS site, it could pop up a big warning saying 'This traffic is intercepted by firewall.institution.edu, do you consent?' and have a little exclamation mark in the toolbar to always indicate that it's being intercepted. The browser would have to trust both the interception certificate (which encrypts between the firewall and the endpoint inside the internal network) as well as getting the original certificate and deciding whether to trust that (which you don't get now).
If they do, I don't see how forward secrecy changes much, and they just have to tap the plain text from the proxy.
If they don't, I'm seriously surprised.
I don't buy it.
You can store entire sessions worth of data, but can't also optionally save metadata?
Yes, I realise you may not want to store it for all connections, but if you don't want to have a "should I store" oracle inline with the connections, then do it async and skip writing it to disk/storage service once the answer comes back.
Was the argument by the bankers basically a complaint that retooling would be very expensive? and/or that employee surveillance would be more difficult? (yeah, I'm sure everyone is a fan of that!)
I'm at a loss here. What's the point of having TLS if it can be easily decrypted? Why not ditch it altogether then? This whole argument sounds somewhat fishy; just because your practices rely on insecure behavior doesn't mean it shouldn't be fixed for the rest of us.
The alternative, namely monitoring on the endpoints, is hard to implement comprehensively and a lot more expensive.
If (and that is a big if) the banking use case really justifies a special and weaker transport protocol, then maybe it should be on the banks to bear the whole cost of doing that. It would also clearly assign the responsibility when things fail from a security perspective. Maybe the endpoint alternative then looks more attractive.
And thus essentially defeating the entire purpose of TLS. Can you please help us continue doing this with the new version of TLS too. Thanks. Love. Big Banks.
> The consensus in the room at IETF-89 was to remove RSA key transport from TLS 1.3. If you have concerns about this decision please respond on the TLS list by April 11, 2014.
> On 22 Sep 2016, at 20:27, BITS Security <BITSSecurity at fsroundtable.org> wrote:
>
> To: IETF TLS 1.3 Working Group Members
>
> My name is Andrew Kennedy and I work at BITS, the technology policy division of the Financial Services Roundtable (http://www.fsroundtable.org/bits). My organization represents approximately 100 of the top 150 US-based financial services companies including banks, insurance, consumer finance, and asset management firms.
>
> I manage the Technology Cybersecurity Program, a CISO-driven forum to investigate emerging technologies; integrate capabilities into member operations; and advocate member, sector, cross-sector, and private-public collaboration.
>
> While I am aware and on the whole supportive of the significant contributions to internet security this important working group has made in the last few years I recently learned of a proposed change that would affect many of my organization's member institutions: the deprecation of RSA key exchange.
>
> Deprecation of the RSA key exchange in TLS 1.3 will cause significant problems for financial institutions, almost all of whom are running TLS internally and have significant, security-critical investments in out-of-band TLS decryption.
>
> Like many enterprises, financial institutions depend upon the ability to decrypt TLS traffic to implement data loss protection, intrusion detection and prevention, malware detection, packet capture and analysis, and DDoS mitigation. Unlike some other businesses, financial institutions also rely upon TLS traffic decryption to implement fraud monitoring and surveillance of supervised employees. The products which support these capabilities will need to be replaced or substantially redesigned at significant cost and loss of scalability to continue to support the functionality financial institutions and their regulators require.
>
> The impact on supervision will be particularly severe. Financial institutions are required by law to store communications of certain employees (including broker/dealers) in a form that ensures that they can be retrieved and read in case an investigation into improper behavior is initiated. The regulations which require retention of supervised employee communications initially focused on physical and electronic mail, but now extend to many other forms of communication including instant message, social media, and collaboration applications. All of these communications channels are protected using TLS.
>
> The impact on network diagnostics and troubleshooting will also be serious. TLS decryption of network packet traces is required when troubleshooting difficult problems in order to follow a transaction through multiple layers of infrastructure and isolate the fault domain. The pervasive visibility offered by out-of-band TLS decryption can't be replaced by MITM infrastructure or by endpoint diagnostics. The result of losing this TLS visibility will be unacceptable outage times as support groups resort to guesswork on difficult problems.
>
> Although TLS 1.3 has been designed to meet the evolving security needs of the Internet, it is vital to recognize that TLS is also being run extensively inside the firewall by private enterprises, particularly those that are heavily regulated. Furthermore, as more applications move off of the desktop and into web browsers and mobile applications, dependence on TLS is increasing.
>
> Eventually, either security vulnerabilities in TLS 1.2, deprecation of TLS 1.2 by major browser vendors, or changes to regulatory standards will force these enterprises - including financial institutions - to upgrade to TLS 1.3. It is vital to financial institutions and to their customers and regulators that these institutions be able to maintain both security and regulatory compliance during and after the transition from TLS 1.2 to TLS 1.3.
>
> At the current time viable TLS 1.3-compliant solutions to problems like DLP, NIDS/NIPS, PCAP, DDoS mitigation, malware detection, and monitoring of regulated employee communications appear to be immature or nonexistent. There are serious cost, scalability, and security concerns with all of the currently proposed alternatives to the existing out-of-band TLS decryption architecture:
>
> - End point monitoring: This technique does not replace the pervasive network visibility that private enterprises will lose without the RSA key exchange. Ensuring that every endpoint has a monitoring agent installed and functioning at all times is vastly more complex than ensuring that a network traffic inspection appliance is present and functioning. In the case of monitoring of supervised employee communications, moving the monitoring function to the endpoint raises new security concerns focusing on deliberate circumvention - because in the supervision use case the threat vector is the possessor of the endpoint.
>
> - Exporting of ephemeral keys: This solution has scalability and security problems on large, busy servers where it is not possible to know ahead of time which session is going to be the important one.
>
> - Man-in-the-middle: This solution adds significant latency, key management complexity, and production risk at each of the needed monitoring layers.
>
> Until the critical concerns surrounding enterprise security, employee supervision, and network troubleshooting are addressed as effectively as internet MITM and surveillance threats have been, we, on behalf of our members, are asking the TLS 1.3 Working Group to delay Last Call until a workable and scalable solution is identified and vetted, and ultimately adopted into the standard by the TLS 1.3 Working Group.
>
> Sincerely,
>
> Andrew Kennedy
> Senior Program Manager, BITS
To: BITS Security <BITSSecurity at fsroundtable.org>
Subject: Re: [TLS] Industry Concerns about TLS 1.3
From: "Paterson, Kenny" <Kenny.Paterson at rhul.ac.uk>
Date: Thu, 22 Sep 2016 19:14:25 +0000

[...]

Hi Andrew,

My view concerning your request: no.

Rationale: We're trying to build a more secure internet.

Meta-level comment: You're a bit late to the party. We're metaphorically speaking at the stage of emptying the ash trays and hunting for the not quite empty beer cans. More exactly, we are at draft 15 and RSA key transport disappeared from the spec about a dozen drafts ago. I know the banking industry is usually a bit slow off the mark, but this takes the biscuit.

Cheers,
Kenny
That leaves two cases: internal client <-> internal server and internal client <-> internal MITM. For both of these cases, I see two easy solutions:
1. Log the session key.
2. Define a custom TLS extension that contains the session key encrypted against a known bank-controlled public key, and have the server/MITM send this extension after the handshake. Problem solved.
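For option 1, standard tooling already exists: endpoints can emit per-session secrets in the NSS key log format, which Wireshark can then use to decrypt captured TLS 1.3 traffic out of band. A minimal sketch using Python's built-in `ssl` module (requires Python 3.8+ built against OpenSSL 1.1.1+; the log path is illustrative):

```python
# Sketch of option 1: log per-session TLS secrets in the NSS key log
# format so captured traffic can be decrypted out of band.
import ssl

ctx = ssl.create_default_context()
ctx.keylog_filename = "/tmp/tls_keys.log"   # illustrative path

# Any connection made with `ctx` will append lines such as
# "CLIENT_HANDSHAKE_TRAFFIC_SECRET <client_random> <secret>" to the file;
# pointing Wireshark at the file lets it decrypt the matching capture.
print(ctx.keylog_filename)
```

Browsers and curl support the same format via the `SSLKEYLOGFILE` environment variable, so the collection side is not exotic; the hard parts are the scalability and key-custody issues the banks themselves raise.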
What have we gained in security if TLS 1.3 is basically illegal for banks to use?
Who wins if browsers refuse to connect to web banking that operates inside the boundaries of the law?
They need to routinely spy on some of their people, and trace what happens to their money. Not sure why this is a problem. Their use cases are different from those of (say) a user of Signal.
EDIT: Yeah, Martin is not even being charged with unauthorized disclosure. Not ShadowBrokers, sorry to burst everyone's bubble.
Here's his house: https://www.google.com/maps/place/7+Harvard+Rd,+Glen+Burnie,...
I also understand that some people think that no one should have this power and want to stop it. I wish that, if you feel this way, you would renounce your citizenship, find another country that acts as you believe it should, and just leave the other country alone, unless you feel they are threatening you or those you love and you must do something about it, and even then only if you can do something to help in a way that won't hurt anyone now or in the future. Yes, I know it's not always that easy.
But, if you live in a country, through your taxes and your citizenship, you pay for and are recipient of the work of people whose job it is to protect us and all other citizens of your country. And if you didn't know how they defended you, you do now. It isn't always pretty, to say it lightly.
I don't want people to do bad things in the name of good or defense. It'd be better if every country had a large number of grown-up boy scouts to protect that country in the most honorable way possible. But we have what we have. Sure, go ahead - expose it if you want, but the more you harm it, the more you'll end up with a group of people that are even more secretive and do even worse things to try to ensure that security. I really don't want that to happen. Things need to get better instead!
Many in the US say that we need to protect people from themselves, and then criticize or harm those trying to protect us. Why?
I used to be much more paranoid and just group people into the "trying to hurt me" bracket. But I grew up. I realized that almost everyone in the world I meet wants to do good or at least has a motivation to try to accomplish something they believe is right.
So, only some of it. The rest was up to date then. And the old stuff is helpful for figuring out how they think.
> "For the N.S.A., which spent two years and hundreds of millions, if not billions, of dollars repairing the damage done by Mr. Snowden, "
This would explain how the Russians ended up with the source code for TAO's toys.
"he stole and disclosed highly classified computer codes developed to hack into the networks of foreign governments"
"different in nature from Mr. Snowden's theft"
What's next, NYtimes, calling people "rats" for reporting a homicide?
the US Government is playing a different game right now. Play time is over.
The post-growth phase of a social network doesn't have to mean its collapse. Look at IAC, InterActive Corp (iac.com, ticker IAC). They run a lot of sites - Vimeo, Ask, About, Investopedia, Tinder, OKCupid, etc. - have a market cap of about $5 billion, and keep plugging along. They were started by Barry Diller, the creator of the Home Shopping Channel, something else that keeps plugging along. Diller is still CEO. IAC is boring but useful.
Twitter doesn't have to "exit"; they're already publicly held. They just have to trim down to a profitable and stable level, and accept that they're post-growth.
The endeavor that drove me, a passionate user for several years, off the site for good. It's sad that they watered themselves down to become a second-place Facebook.
Maybe it's not even their fault, and it's just a side effect of being a public company and being measured by growth instead of by the uniqueness and quality of the product. It's still sad.
When the dust settles, there will be a lesson here in the danger of engagement-metric-based product design. Twitter has a "while you were away" feature that shows you their idea of the tweets they'd like you to see first (even if you return to the site 5 minutes after your last session). (This is different from the non-chronological timeline, and doesn't get turned off when you disable that.)
When you dismiss it, it asks you if you'd like to see less of that. It's completely ambiguous and unexplained whether they're asking whether you disliked those specific tweets or the non-chronological aspect of the timeline. A lot of users have been vocal about hating the feature. By dismissing it are they actually burying certain friends down further on their timeline? Who knows? Some people are guessing correctly what the meaning is, but others aren't. They're feeding a certain amount of garbage input into their algorithms.
Maybe they should hire Marissa Mayer. I hear she'll be looking for a job in the near future, and she's got plenty of relevant experience.
In 1984, Ringling Bros. and Barnum & Bailey Circus unveiled a unicorn. Later investigation revealed it to be a goat with its horns fused together.
He was the Executive Officer but he was not a Chief Executive Officer.
For someone in this position, the natural human incentive is to build consensus and avoid ruffling feathers. An employee revolt could have gotten him fired. He had to worry what people thought.
Steve Jobs's return to Apple was dominating. He absolutely worked more than anyone else and ran circles around anyone trying to get in his way. And he fired people and replaced the board with friends - things people would have faulted him for had he not had time to see his plans through.
But..even if he had gone full-time and led a successful coup, he probably would have still failed. Twitter was truly an accident and so no one really knows what it is.
They probably aren't even really trying to recreate the magic, just trying to associate themselves with it and use that to take credit for what happened in that case. This is the kind of twisted backward stupid logic that was prevalent in the area while I was there (2008 - 2011). It was just more BS every time.
I mean, if you are trying to recreate the magic or emulate what's been done in the past, then do that, rather than do the dumbest things, then try to poorly and illogically spin it (in the most tired, played out, and "has never been successful" way) to look like something else.
Add to that the overrepresented judger (MBTI) population, and you have a recipe for the shht of everything past constantly repeating in the worst way, as arrogant egotistical self-centered know-nothing idiots use it to try to credit themselves with being something other than the dumbest shht.
Given the actual state of affairs though, the Board has a clear fiduciary duty to consider all options.
Since Dorsey didn't give himself enough time to actually effect a true fundamental product/vision turnaround, he would've been far wiser to simply focus on driving revenue as a first step. Then circle back to product later.
It seems the strategy here was either unclear or unrealistic.
The only reason you are not growing user numbers is that you have an inferior user experience compared with Facebook/Snapchat. That is something really easy to fix by just copying what the other guys are doing better, but Dorsey did nothing. He lost interest long ago. TWTR does not need to sell; it just needs someone who is passionate, maybe a good engineer who knows the product.
The whole "everything's public or everything's private" shtick just doesn't work for me.
- Replace their mediocre management team (3.2 rating on Glassdoor, versus a threshold of 4.0 or a "damn, management's good" of 4.2) with people who actually know what they're doing
- Reduce the number of employees to 1,555
- Hire an additional 259-585 quality people (engineers and designers, mainly) to fix user experience and engineering issues
- Start paying attention to the user base and deliver what's actually needed
- Innovate and push things forward, rather than settling into stagnation and decline
There's really not much to it. Twitter is only having problems because they keep messing up, stagnating, or doing the dumbest things.
The businesses are very similar. The products are similar. The trends are similar. Even the demographics are similar. It is a shame that Twitter can't be more, when it totally needs to be more. And Snapchat can do so much better too.
But the problem might be they are tripping over the problems in front of them, not pursuing solutions to problems that lie ahead.
Keep the timeline chronological! Instead of injecting 12 hour old tweets into my timeline randomly, give me a tab of popular tweets by people I follow.
Give me more options to discover interesting discussions. Let me subscribe to or follow hashtags.
Give me more options for filtering out the noise. Let me provide hashtags or keywords I don't care about.
Back before Twitter made the decision to transition from a service to a platform, there were Twitter apps and clients that provided all kinds of great ways to interact with Twitter. But Twitter shut them down, and never bothered to integrate the things that made them great into the official Twitter platform or apps.
Twitter's greatest potential is as a firehose of live data. Give users the tools to use and personalize that data and Twitter will become an indispensable part of people's daily lives.
If Twitter didn't change a fucking thing for the rest of my life I'd still use it and love it. Forever.
How much traffic is that?
Ex: 2 Milo. Etc. So... what business is he in?
I am not sure that I agree with the above conclusion that Twitter needs a turnaround CEO. I think Twitter needs to fully execute the live streaming strategy and see where they go from there, and Jack needs to become a full-time CEO and hand off Square to somebody else. Bring Mark Zuckerberg onto the board so they start learning how to execute on social. These are some small changes they can make to win back investors and win back some credibility.
...or, perhaps more accurately, the Center for Medicare and Medicaid Services has banned Elizabeth Holmes from running any kind of blood testing laboratory. It seems a little odd to write a letter making it sound like you're just pivoting when in fact your chief executive is not permitted to carry out the kind of business they were previously carrying out.
People who are expected to die before they can sue. Also kids. Am I reading this correctly?
She doesn't see herself as a fraud at all; her faith in her mission, however delusional, is ironclad. She inspired her investors to take insane financial risks without so much as a whiff of due diligence. Elizabeth Holmes is a modern-day Joan of Arc.
There's some crony capitalism for you.
It wouldn't surprise me if a ghostwriter indeed wrote this letter "from" Elizabeth Holmes. That's how the world of open letters from CEOs works, right? But usually they wouldn't expose the farce this way?
Carreyrou will probably get a Pulitzer for his work on this story.
"Through it, we see a world in which no one ever has to say goodbye too soon, and people are able to leverage engagement with their health to live their best lives." -- bunch of BS :]
hm, that's funny, no mention of products or services...
> "By Staff Writer"
I think this encapsulates this phony in two lines perfectly.
Did they cause actual medical harm to someone?
Had the minilab worked, they would have had a functioning commercial service available to test and fine-tune the process immediately. Isn't field testing and shipping products what's important when developing new products and services?
Sure, there is the chance it's a psychopath-bullshit scenario from start to end, but do we have any actual proof that this is the case?
The downvotes are comical. Never seen so much sympathy for people who defraud investors and put healthcare at risk.