Alex Stamos is a good person who has been doing vulnerability research since the 1990s. He's built a reputation for understanding and defending vulnerability researchers. He hasn't been at Facebook long.
To that, add the fact that there's just no way this is the first person to have reported an RCE to Facebook's bug bounty. Ask anyone who does this work professionally: every network has old crufty bug-ridden stuff lying around (that's why we freak out so much about stuff like the Rails XML/YAML bug, Heartbleed, and Shellshock!), and every large codebase has horrible flaws in it. When you run a bug bounty, people spot stuff like this.
So I'm left wondering what the other side of this story is.
Some of the facts that this person wrote up are suggestive of why Facebook's team may have been alarmed.
It seems like what could have happened here is:
1. This person finds RCE in a stale admin console (that is a legit and serious finding!). Being a professional pentester, their instinct is that having owned a machine behind the firewall, there's probably a bonanza of stuff they now have access to. But the machine itself sure looks like an old deployment artifact, not a valuable asset Fb wants to protect.
2. Anticipating that Fb will pay hundreds and not thousands of dollars for a bug they will fix by simply nuking a machine they didn't know was exposed to begin with, the tester pivots from RCE to dumping files from the machine to see where they can go. Sure enough: it's a bonanza.
3. They report the RCE. Fb confirms receipt but doesn't respond right away.
4. A day later, they report a second "finding" that is the product of using the RCE they already reported to explore the system.
5. Fb nukes the server, confirms the RCE, pays out $2500 for it, declines to pay for the second finding, and asks the tester not to use RCEs to explore their systems.
6. More than a month after Facebook has nuked the server they found the RCE in, they report another finding based on AWS keys they took from the server.
So Facebook has a bug bounty participant who has gained access to AWS keys by pivoting from a Rails RCE on a server, and who apparently has retained those keys and is using them to explore Instagram's AWS environment.
So, some thoughts:
A. It sucks that Facebook had a machine deployed that had AWS credentials on it that led to the keys to the Instagram kingdom. Nobody is going to argue that, though again: every network sucks in similar ways. Sorry.
B. If I was in Alex's shoes I would flip the fuck out about some bug bounty participant walking around with a laptop that had access to lord knows how many different AWS resources inside of Instagram. Alex is a smart guy with an absurdly smart team and I assume the AWS resources have been rekeyed by now, but still, how sure were they of that on December 1?
C. Don't ever do anything like what this person did when you test machines you don't own. You could get fired for doing that working at a pentest firm even when you're being paid by a client to look for vulnerabilities! If you have to ask whether you're allowed to pivot, don't do it until the target says it's OK. Pivoting like this is a bright line between security testing and hacking.
This seems like a genuinely shitty situation for everyone involved. It's a reason why I would be extremely hesitant to ever stand up a bug bounty program at a company I worked for, and a reason why I'm impressed by big companies that have the guts to run bounty programs at all.
(and, to be clear, a friend, though a pretty distant one; I am biased here.)
1. Facebook is not going ballistic because this is an RCE report. They have received high and critical severity reports many times before and acted peaceably, up to and including a prior RCE reported in 2013 by Reginaldo Silva (who now works there!).
2. The researcher used the vulnerability to dump data. This is well known to be a huge no-no in the security industry. I see a lot of rage here from software engineers - look at the responses from actual security folks in this thread, and ask your infosec friends. Most, perhaps even all, will tell you that you never pivot or continue an exploit past proof of its existence. You absolutely do not dump data.
3. When you dump data, you become a flight risk. It means that you have sensitive information in your possession and they have no idea what you'll do with it. The Facebook Whitehat TOS explicitly forbids obtaining sensitive data that is not your own using an exploit. There is precedent in the security industry for employers becoming involved over egregious "malpractice" in how an individual reported a bug. A personal friend and business partner of mine left his job after publicly reporting a huge breach back in 2012 (I agree with his decision there), and Charlie Miller was fired by Accuvant after the App Store fiasco. Consider that Facebook is not the first company to do this, and that while it is a painful decision, it is not an insane one. You might not agree with it, but there is precedent for this happening.
I'm not taking sides here. I don't know that I would have done the same as Alex Stamos here, but it's a tough call. I do believe the researcher here is being disingenuous about the story considering that a data dump is not an innocuous thing to do.
I'm balancing out the details here because I know it will be easy to see "Facebook calls researcher's employer and screws him for reporting a huge security bug" and get pitchforks. Facebook might be in the wrong here, but consider that the story is much more nuanced than that and that Facebook has an otherwise excellent bug bounty history.
Edited for visibility: 'tptacek mentioned downthread that Alex Stamos issued a response, highlighting this particular quote:
At this point, it was reasonable to believe that Wes was operating on behalf of Synack. His account on our portal mentions Synack as his affiliation, he has interacted with us using a synack.com email address, and he has written blog posts that are used by Synack for marketing purposes.
Viewed in this light (and I don't believe Stamos would willfully fabricate a story like this), it is very reasonable to escalate to an employer if they seem to be affiliated with a security researcher's report.
Researcher: "I found a way to unlock your door" Facebook: "Thanks, here's $2500. We've now fixed the problem." Researcher: "Oh, BTW when I unlocked your door I rifled through your stuff and found your passport, your banking details, and a lot of personal information. I've kept copies of these. I also found the keys to your car and looked inside, where I found a box in the trunk. That box contained sensitive documents including an employee badge / proximity card. I used this card to gain access to your workplace. In doing this, I also managed to get into the janitor's closet which had a set of keys. I used these keys to get access to the complete building and took a look at all the HR files and rifled through a bunch of corporate contracts." Facebook: <gobsmacked> Researcher: "Can I have my million bucks now?"
He claims to have downloaded the content listed below. And he is surprised that Facebook responds coldly? Note the string "private keys" in this list... Doesn't the author know how long it will take them to recover from this breach? How much it will cost them?
On the other hand, it does sort of reinforce the idea that he should be paid handsomely, doesn't it? :)
* Static content for Instagram.com websites. Write access was not tested, but seemed likely.
* Source code for fairly recent versions of the Instagram server backend, covering all API endpoints, some image processing libraries, etc.
* SSL certificates and private keys, including both instagram.com and *.instagram.com
* Secret keys used to sign authentication cookies for Instagram
* OAuth and other Instagram API keys
* Email server credentials
* iOS and Android app signing keys
* iOS Push Notifications keys
* Twitter API keys
* Facebook API keys
* Flickr API keys
* Tumblr API keys
* Foursquare API keys
* Recaptcha key-pair
This is the fastest and easiest way for Facebook to stop good submissions to their bug bounty program.
Between stories like this demonstrating companies' apparent lack of understanding of whitehat infosec, and Weev's incarceration demonstrating the American legal system's apparent lack of understanding of whitehat infosec, it's hard to believe people still participate in such endeavors.
Alex's timeline seems like it matches what I wrote earlier:
We'll probably never see a post mortem on this but it'd be interesting to hear how this got moved to production...: was the Sensu admin panel a nice scaffold for internal use and by the time they decided to make it remote, everyone just assumed the secret key had been changed at some point?
I imagine the initial report by his friend that the server was accessible would not pay a very high bounty compared to one demonstrating access to the server. But how deep is too deep?
This was his mistake. This is a huge no-no. You never dump data unless you have permission. It's against the terms of most bounty programs.
I'm always curious about what sort of internal pressures would lead people to take a well-reported bug that the author did not take malicious action on and blow it up to the point that the CSO is getting involved.
Your actions are detrimental to your relations with the good-mannered external security researchers who are helping you keep your infrastructure safe from the bad guys. You should have been a little more sensitive, and a lot more generous, than you have been.
Facebook really needs to go the way of myspace if they keep this sort of behavior up.
How can a CSO at Facebook legitimately tell a CEO of another organization that a vulnerability of "little value" was found when the researcher has your signing certs? Does he lack relevant info, or is he just incompetent?
This is tantamount to mafia tactics. Hint, hint, we're facebook so get your people in line or else.
October 24th: Server no longer reachable. He tested the keys and they still worked; he could be assumed to have gone on a download spree.
Seems like this is the biggest issue with how Facebook handled this case. No one looked to see what Wes accessed when he logged in with the weak credentials? No one realized he could have accessed the AWS key?
To treat what Wes found as a minor bug and then fuck up like that is sort of hilarious.
That, to me, is entirely unacceptable. If you want to threaten someone, have your legal team send them a cease and desist. Don't go after their livelihood.
Look at his timeline again.
He tested the AWS creds in October.
They shut the server off on October 24.
He reported the AWS creds in December.
Did he tell them about the AWS creds before then? His mails don't say that he did.
If he didn't, why didn't he?
This is why many security professionals become disillusioned with bounty programs. This story is not uncommon at all.
Bounty programs, while presenting a tempting incentive to practice one's skills, are a very poor income strategy.
You are essentially working, unpaid, for organizations who are just as likely to ignore you (or report you to law enforcement) as they are to pay you for your findings.
No wonder so many young, talented security pros are more easily tempted to trade their findings for the safety of a crypto transaction with an anonymous buyer than to submit them through official channels.
Why get mad about a "low-level bug"... I mean, if you can dump private user pics from a photo-sharing app, how is this low-level? Really?
It's also pretty clear that the researcher shouldn't have dumped data, although most likely he held this hidden card in reserve because he was expecting the lowball... but there are smarter ways to respond to lowballing.
IMO poorly managed on both parts.
I wonder what any claim of protecting users' privacy is worth when they leave their credentials unprotected in that way.
"We use commercially reasonable safeguards to help keep the information collected through the Service secure [...]"
I can imagine why they didn't appreciate the efforts of the researcher. Hopefully they'll change their current practices.
On one hand, this signals to anyone else who might want to disclose security issues that Facebook bounties don't pay out anywhere proportionally near the full potential damage impact of the issue.
On the other hand, if they pay out a lot more now, they're signalling that if you find a vulnerability, you need to dig deeper in order to have insurance in case Facebook gets stingy.
Probably the best outcome would have been to pay out a more proportional bounty, even though Wes's exploration went beyond what's generally acceptable, so that the bounty program's reputation is preserved.
That or press criminal charges to discourage any other researchers from going over the line.
Something like this makes you suspect a deliberate backdoor. Can the person who put this into Ruby/Rails be identified?
 http://robertheaton.com/2013/07/22/how-to-hack-a-rails-app-u... https://news.ycombinator.com/item?id=6110386
We should have "pastebin hat" list and Facebook should definitely be on it.
The problem with humans is that they will rather go extinct over such things than behave properly. You could try to teach us by painful example but death will probably come first.
If indeed only credentials and technical information were obtained, all aimed at finding more security issues, Facebook should be thankful for finding all the vulnerabilities across all their security layers.
However, the biggest issue I see here is that the author (in their own timeline at the bottom of this post) says they discovered the AWS keys on October 24, yet did not report this to Facebook until December 1 (in the meantime, they were having various discussions with Facebook about whether their other submissions were valid). That is seriously concerning behavior: if you come across live AWS keys, report them immediately. You absolutely should not sit on them for over a month as if they were some sort of bargaining chip.
It seems that people defending Facebook's behaviour in this thread have collectively lost sight of what the point of a bug bounty is to begin with - to encourage people to report issues, rather than sell them.
We now have people arguing that "it is not acceptable to pivot beyond the initial intrusion for a bug bounty", even though a malicious attacker would have done the exact same thing. As long as standard no-damage rules are followed, where's the problem?
The bug bounty program is working exactly as intended, but the researcher is getting dinged over arbitrary rules. As somebody else here mentioned already: the reason blackhat work still pays, is because such arbitrary and bureaucratic rules do not exist there.
We should not forget that bug bounties are a tool, not a goal - the goal is to convince researchers to report rather than sell, and every part of a bug bounty and its rules must be designed accordingly.
Also: Why the hell were those AWS credentials not revoked immediately after compromise? This constitutes a grossly negligent failure on Facebook's part to assess impact, on top of their existing failure to have the "keys to the kingdom" on a single server to begin with.
And frankly, that failure only reinforces the need for the researcher pivoting into further systems, rather than just keeping it to a PoC - because evidently, nobody is going to assess impact at Facebook, if the researcher doesn't do it himself.
Well, FB feels your bug bounty is worth $200? Strike that figure. We feel like your bug bounty is worth a $100 advertising credit, if you buy $100 in advertising? Next time just report the bug. Thanks!
(I don't know if it's my innate dislike of FB, or that I feel it shouldn't be up to a company to determine what a bug is worth. If you are going to have a bug program, put in some very solid rules. They shouldn't be just winging it at this point. It's not some cute little startup; it's a huge machine that's making a fortune off its victims.
I'm still not sure if FB really cared about this hacker's escalation of a potential attack, or if it's about money. Would I want a hacker to show me my vulnerability with my clients' information? No, but make that crystal clear in the TOS.)
edit: if it's indeed true, but I have my doubts that's the case. Hard to say either way.
I've been a Product Hunt user since their initial HN launch and am still a big fan. They've made a very important impact on the tech scene. Open Hunt is an honest attempt at a community-run alternative, tailored to giving and getting feedback, and finding very early-stage stuff.
Would love your feedback!
Honestly, I only signed up for Twitter to join Product Hunt. That was a huge disappointment when I found out that having an account didn't mean anything. This will be a pleasant change, it's about time.
It's asking for scary permissions:
> Read Tweets from your timeline.
> See who you follow, and follow new people.
> Update your profile.
> Post Tweets for you.
Please, consider adding more options, or explaining how you use those permissions. (For example, you can do what you like to my Facebook wall.)
EDIT: Lack of public posting is an interesting choice. It doesn't feel like much of a community. I can see that public comments risk undue negativity or aggressive feedback.
"Some other link aggregation sites are operated by corporate entities which may have significant financial incentive to censor or artificially promote the links and discussion that relate to those entities, their investments, or their competitors. Some of these sites have had moderators of popular sub-forums banned after it became known that they were being paid by 3rd party companies seeking special treatment of their submitted stories.
All moderator actions on this site are visible to everyone and the identities of those moderators are made public. While the individual actions of a moderator may cause debate, there should be no question about which moderator it was or whether they had an ulterior motive for those actions.
All user voting and story ranking on this site uses a universal algorithm and does not artificially penalize or prioritize users or domains. Per-tag hotness modifiers do affect all stories with those tags, but these modifiers are made public and usually used to shorten the life of meta-discussions. If certain domains have to be banned from being submitted due to spam, the list will be made publicly available.
If users are disruptive enough to warrant banning, they will be banned absolutely, given notice of their banning, and their disabled user profile will indicate which moderator banned them and why. There will be no hidden or childish "shadow banning" or "hellbanning" of users popular on some other sites.
The source code to this site is made available under a 3-clause BSD license for viewing, auditing, forking, or contributing to. This code is always up to date with what is running in production on this website.
Public stats are available for site requests, comments submitted, stories submitted, and users created."
That being said, it's sad to see people blatantly rip on others' sites/ideas. OpenHunt should quickly come up with an original design and find something unique in their approach.
But I definitely back the idea - PH has become too undemocratic, and it's obvious that if you don't have the right connections your product will never surface. I know people who've reached out to "influencers" on PH to have their product hunted by them.
Login unsuccessful. Something went wrong: Error: api_calls exceeding plan authorized calls
Quick question: PH wants people to sign up with their personal Twitter account, rather than a company one, is that the case here? I never use my personal account, so would prefer to be able to sign up using @bug_muncher
Also, I love the "You reached the beginning!" message at the bottom, not sure why, but it really made me smile :)
+1 for any alternative system.
I'm trying to build elasticsearch support into lobsters for a personal project - it currently uses sphinx. But it could be pretty cool if you can use that as a starting point.
Not sure if this is a feature or a bug: when one clicks on the "comments" line, it opens a right-side panel for the current item; if one clicks another comments line, the right-side panel is updated with the new item => so far so good.
BUT, when one clicks on another item while the right-side panel is open, it doesn't update said panel; it opens a new tab to the item's website, but the panel doesn't change, so that when one comes back to OH, the panel doesn't match the last consulted item.
It's probably not an easy fix, because, what should happen when one opens more than one item?
However, since the comments pane is super simple, maybe it would make sense to open it under the corresponding item instead of to the side, so that it's visually related to the correct item instead of being in a generic location?
My 2 cents. Very cool initiative anyways.
BTW: your API call quota for registration has been exceeded.
Disrupting the disrupters.
OTOH, the latter site is presumably visited by potential investors and others who have a financial interest in consuming what's published.
Additionally, without "throttling", you have a ton of stuff featured, adding to the skew. Thus, much of what's submitted has only one or two votes. People are primarily posting and moving on.
Or, am I missing something?
Could be even broader. Or use tagging. You probably wouldn't look in the above categories for performance events, dining out, or phone sex. Must decide how wide a net you wish to cast, and what ontological approaches to use. But this, in my opinion, is where it gets interesting.
2) I think this and Product Hunt can co-exist
3) I'm interested in learning about how other members think Open Hunt can go from "open community" to "sustainable community"?
Edit: The subtitle font is way too washed out. I struggle to read it. Also, some submissions are not "products" - is that appropriate?
"Login unsuccessful. Something went wrong: Error: api_calls exceeding plan authorized calls"
I am curious, why are comments/feedback all hidden? I would certainly like to read those - even if they are made anonymous.
[Now that I think about it, I should put this on the OpenHunt!]
why OpenHunt is good for PH;
- It will be a moderation app for PH; every nice project can be submitted by PH's trusted members.
- PH can get valuable feedback from this thread.
- PH can integrate every feature from OH.
or some iteration thereof...
Day 0: Yeah, another 'PH is crap' HN/blog/Medium post
Day 2: Oh, so someone 'anonymous' is building a competitor... They're using a Google form... How noobish
Today: Haha, oh look, they copied our design. How original
Future: Oh, we need to pivot
High time you add pagination on the landing page now. :)
(If somebody affiliated with OpenHunt wants control of that sub, just message me).
What he did is impressive. But the results are not that outlandish for a talented person.
1) Hook up a computer to the CAN-Bus network of the car  and attach a bunch of sensor peripherals.
2) Drive around for some time and record everything to disk.
3. Implement some of the recent ideas from deep reinforcement learning [2,3]. For training, feed the system the observations from test drives and reward actions that mimic the reactions of actual drivers.
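The mimic-the-driver part of step 3 can be sketched even without deep RL: behavioral cloning, where a policy is fitted to reproduce the human's recorded actions. This toy example uses entirely synthetic data and a linear policy (real systems use neural networks and far richer features):

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend each row is a logged frame of sensor readings
# (e.g. lane offset, heading error, speed) from a human drive.
X = rng.normal(size=(1000, 3))

# Hidden "driver" policy that generated the logged steering commands,
# plus a little sensor/actuation noise.
true_policy = np.array([-0.8, -1.5, 0.05])
y = X @ true_policy + rng.normal(scale=0.01, size=1000)

# Behavioral cloning: fit a policy by least squares so that the
# model's action in each situation matches what the human did.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# With enough data, the learned policy recovers the driver it imitated.
print(np.allclose(w, true_policy, atol=0.05))  # True
```

The catch the surrounding comments raise is exactly the weakness of this approach: the fitted policy is only trustworthy in situations resembling the training logs, which is why rare emergencies are the hard part.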
In 2k lines of code he probably does not have a car model that can be used for path planning  (with tire slippage, etc.). So his system will make errors in emergency situations. Especially since the neural net has never experienced most emergencies and could not learn the appropriate reactions.
And guess what, emergency situations are the hard part. Driving on a freeway with visible lane markings is easy. German research projects have driven autonomously on the Autobahn since the 80s. Neural networks have been used for the task since about the same time.
The biggest thing here IMO is that this is self-funded. Any startup trying to do what he is doing in this environment would have raised $50 million, hired hundreds of engineers from top-notch schools, been accepted into YC, and had Marc Andreessen, Paul Graham, Sam Altman and all singing their praises.
Kudos to him for being self-funded.
Yep, he's still in his twenties.
Self-driving cars (in some form or the other, under some loose definition of "self" and "driving") have been around since the 20s. But it still remains a vexing problem.
It is quite easy to program a car to stay between 2 cars and follow the car in front. It is quite another to have the same car drive on (a) a road without lane markings; (b) in adverse weather conditions (snow, anybody? Hotz should take the car to Tahoe); (c) in traffic anomalies (ambulance/cop approaching from behind; accident/debris in front; etc. etc.); and so on.
No offense to GeoHot, but I'd love to see his system work in rush-hour 101 traffic; or cross the Bay Bridge, where (coming to SF) the lanes merge arbitrarily.
The key challenges are not only to drive when there's traffic; but to also drive when there's NO traffic, because lane markings, etc. are practically nonexistent in many places.
Having said all that, I still admire his enthusiasm and drive(no pun intended). Tinker on!
The testing of a hacked-together system on the public road is not. He probably won't kill anyone, but if he were to I suspect he'd get the book thrown at him in the way that everyday death-by-DUI drivers don't.
Actually, I'll go further with this criticism: we've just seen drones become FAA-regulated because users were unable to refrain from doing dangerous or nuisance things with them, such as flying near airports. DIY self-driving car research is similarly likely to damage the concept if it goes wrong.
His name was JB Straubel, and nowadays he's Tesla's CTO.
Best of luck to Hotz!
Geohotz makes a decent point. Just as the industrial revolution reduced manual labour and made thinkers and tinkerers much more valuable, the advent of AI (true AI, mind you, not the tiny stuff we currently assume) might actually make us obsolete. It is a peaceful and yet terrifying thought.
> "Dude," he says, "the first time it worked was this morning."
I can't tell if this is a joke or unbridled hubris. Either way, self driving cars seem like a new hacker space.
> "I appreciate the offer," Hotz replied, "but like I've said, I'm not looking for a job. I'll ping you when I crush Mobileye."
> Musk simply answered, "OK."
I have to agree with Elon here; Hotz is such a good fit there. But Hotz knows best: if he thinks he can take down Mobileye, then he made the right decision. Sucks that Tesla wouldn't back it. I'm sure other car companies would buy Hotz's software.
Also pretty cool he's working in his garage :P.
What to communicate? I'm not sure, to be honest. Road conditions or notifications of the position of obstacles is one obvious thing. Advertising the current version of the software and pushing signed OS upgrade binaries is another. Voice/Video chat with other vehicles in range would be cool, as is media syncing and discovery.
Building in some kind of Bitcoin-based payment protocol would be fun too. You could load your car's Bitcoin wallet with some funds and tip cars around you over the LiFi.
I'm not saying you need to build all that stuff, just put in a good hackable messaging protocol into the system before wide release :-)
Great work man. Good to see people with a good hacker ethos accomplish really cool things.
He seems like a good person to get into business with. He's so non-judgmental. Reminds me of myself and all the stupid things I said to VCs in my 20s.
I'm not sure those two are equally horrible, though - fixing complex bugs requires a lot of skill, and the high you get when you finally nail it is nothing to miss.
Getting people to click on ads though - that's genuinely depressing.
Usually, before you are allowed to use something like this on a public road, your stuff has to be tested and approved by the state. At least this is how it is in Europe; does this not matter in the States?
>Sitting cross-legged on a dirty, formerly cream-colored couch in his garage, Hotz philosophizes about AI and the advancement of humanity. "Slavery did not end because everyone became moral," he says. "The reason slavery ended is because we had an industrial revolution that made man's muscles obsolete. For the last 150 years, the economy has been based on man's mind. Capitalism, it turns out, works better when people are chasing a carrot rather than being hit with a stick. We're on the brink of another industrial revolution now. The entire Internet at the moment has about 10 brains' worth of computing power, but that won't always be the case."
George Hotz working his magic on the computer is the most fucking legit thing I have seen in my life.
He thinks machines will take care of much of the work tied to producing food and other necessities. Humans will then be free to plug into their computers and get lost in virtual reality.
Like the article said, it sure beats writing code to make people click ads, or fixing some obscure bug in software nobody uses.
This has echoes of J.R.R. Tolkien:
Anyway all this stuff is mainly concerned with Fall, Mortality, and the Machine. By the last I intend all use of external plans or devices (apparatus) instead of development of the inherent inner powers or talents -- or even the use of these talents with the corrupted motive of dominating: bulldozing the real world, or coercing other wills. The Machine is our more obvious modern form though more closely related to Magic than is usually recognised. . . . The Enemy in successive forms is always 'naturally' concerned with sheer Domination, and so the Lord of magic and machines.
That stunt is also what led to a coordinated attack against PSN that took the service down for more than a month.
If we could move the liability to the car itself, then maybe we could just add the car to its own insurance policy, you know, as if it were a dependent, like a teenage driver.
I'd not be surprised to see some interest and support from nvidia on this (if not, then they should REALLY look into it).
Except the law, when it comes to exceptions for being in control of your vehicle at all times. Somebody take this guy's license before he kills someone due to a divide-by-zero. Testing this in an abandoned parking lot would be OK with me (probably still against the law, but fine). In traffic is a definite no.
Really? I did not expect this from him. Why doesn't he put his sensors/cameras/kit on a few hundred or thousand other cars and pay them some money, or get some early adopters?
Why am I seeing Ubuntu on the screens of developers, experts, et cetera in cover stories such as these - most of the time the 100% plain Ubuntu desktop, with all the craziness that comes with it? It feels like this is the case 90% of the time. Two more (recent) examples I can remember:
1) Fyodor (Guy behind nmap) running plain Ubuntu on a Notebook while giving a speech at a conference
2) Developers at Honda (Video was an Asimo promotional video) running plain Ubuntu
Since, in my personal opinion, Ubuntu is not the technically superior choice in these cases (though that can be debated), it cannot simply be explained by it being backed by a company, or by there being support you can buy for the system if you need it.
What motivates technically extremely skilled people to use "Plain Ubuntu" instead of one of the many alternatives?
I really don't understand, please enlighten me!
(I actually think it's worth "spending" some Karma on this if I for once get a satisfying answer)
I imagine there will still have to be some hard rules in case the AI encounters edge cases.
There are a lot of brilliant hyper-competitive people who work at these big companies and you will be a small fish. So I think this article is spreading a myth that there is guaranteed piles of money to be made by working at Google, Facebook, Apple, etc.
The important thing is to leave after one year, no matter the compensation you are getting. Teams have a tendency to give the shittiest work to the most junior member, and there's very little inertia to replace that person if they are doing a stellar job at that shit work, but once you leave that startup with bankable experience under your belt, you'll have a much easier time interviewing and negotiating yourself a cushier position at either another startup or a big company.
Most big companies have a much better career 'ladder', where you'll be on a path to more interesting work once you've proven your worth, but I suspect you'd still be in a better position coming into the company one or two years in, rather than starting the treadmill at a lower salary/title.
If you're primarily interested in making money, or if you love the startup but not the compensation, you should NOT work at that startup.
If you're a good developer, you can get a better deal by working at an established company and simply investing. This has been true for every startup offer I've ever seen. Ever.
I've considered lots of startup jobs because I believed strongly in the companies. Every single time, however, I was able to get a larger chunk of the company by keeping my current job and simply investing.
To give an example, my current job pays about $250k, and one year, I invested $100k of that into a startup, leaving me with ~$150k of salary. This $150k + startup equity was a better deal than the startup was offering in both salary and equity (BY FAR). Plus, equity bought as an investor is much less tax toxic than equity options received as an employee of a startup.
On the other hand, most people who work at startups aren't interested in money. If that's you, that's totally cool!
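The trade described above can be sketched as a toy comparison. Everything except the $250k salary and the $100k investment from the comment is an assumption; the startup's hypothetical offer and the relative equity values are made up purely for illustration:

```python
# Toy comparison: keep a big-company job and invest part of the salary,
# vs. take a startup's offer. Only the $250k salary and $100k investment
# come from the comment; every other figure is an assumption.

bigco_salary = 250_000
invested = 100_000

startup_salary = 120_000        # assumed startup salary offer
startup_equity_value = 60_000   # assumed yearly value of the option grant

# The commenter's claim: the $100k investment buys a larger equity
# stake than the option grant would have been worth. Assume, say:
investor_equity_value = 180_000

keep_job = (bigco_salary - invested) + investor_equity_value
take_offer = startup_salary + startup_equity_value

print(f"Keep job + invest: ${keep_job:,}")   # $330,000
print(f"Join the startup:  ${take_offer:,}") # $180,000
```

Under these (made-up) numbers, keeping the job and investing dominates in both cash and equity, which is the shape of the argument being made. The tax point stands separately: shares bought as an investor avoid the option-exercise tax issues employees face.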
Certainly we need Rome, the modern world wouldn't have existed without it, but Romans themselves are myopic and self-obsessed. They need to be, otherwise it wouldn't be Rome.
I love that Silicon Valley exists, but I wouldn't want to live there. I love the people that go there and exist on the bleeding edge of innovation. I'll happily sit here behind the curve and have a normal life with a house and car and kids. I'll root for the dreamers that go there and hope that they too can one day achieve their dream life. I don't need that glory.
Also, while "rand(100)" might be an accurate characterization of the returns of all employees over all startups, it is not entirely a game of chance. There is skill involved in picking the right startup to join: being proactive in your search, building a personal network, finding founders with track records, considering enterprise startups. You can learn to improve your odds -- a bit like learning to count cards.
The problem is that these analyses always focus on how you, as a prospective employee, can extract the most value from the world. Optimizing cash vs equity or arbitraging location or whatever.
You can see it seep through all over the place in the language used. Sometimes it's subtle:
> I've told that anecdote to multiple people who didn't think they could get a job at some trendy large company, who then ended up applying and getting in.
Waiting and hoping to "get in" is pretty weak. It implies that we're all just meat-sacks working away until some of us get lucky and manage to convince a fancy company to overpay us and let us extract a lot of value from them.
If you have value to contribute to the world (and you definitely do), then go figure out someplace where you can best contribute it. Stop worrying about the best way to take things from the world and figure out the best way to put stuff in. The rest will take care of itself.
I think these companies are actually too ideal to draw this conclusion from. For every Google/Amazon/FB, you also have a Comcast, an Oracle, an HP or a Cisco.
You frequently hear about companies like Apple, Google, Facebook, etc trading employees. You don't hear about the typical big company poaching anyone besides executives.
Edit: The above doesn't seem to represent a clear thought. I'm trying to argue that the average big company doesn't pay as well as those three. Similarly, the average startup isn't going to be able to pay massive dividends in four years.
If you join a startup early you'd be very lucky to get 1%, 2% maybe? Ok so then 4 more rounds of funding go by and you're diluted. Then finally after 5 years you sell to Googapplesoft for $200M. Holy shit, payday is here! Wrong.
First you're going to pay out to all of the preferred shareholders, some of whom might have special payout clauses because you guys really needed the money. Then you find out your 1% is now 0.3% and you have to wait 6 months to sell any of it. So you busted your ass for 5 years for a few hundred thousand dollars when you could have had a full-benefits, low-stress job at Googleapplesoft in the first place and not put nearly so much at risk.
Oh and there's a 90% chance that exit never even happens and all you did was work way below market salary for worthless stock. Oh and that whole time you probably had crap health insurance and minimal 401k matching so your savings aren't looking too good either.
The point of the above is not to say "don't work at a startup" because it's obviously a great experience and the right choice for some. And who knows, maybe it will be WhatsApp and you'll be a billionaire and you can come back here and mock me. But if you run the numbers (as the article says), you really shouldn't work at a startup if you're after money.
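The dilution math in the comment above can be sketched roughly in Python. The per-round dilution rate, the exit price, and the preference amount are all assumptions chosen to match the story (1% diluted to ~0.3% over four rounds, a $200M exit, a payout of a few hundred thousand dollars):

```python
# Rough sketch of the dilution scenario above. All numbers are
# illustrative assumptions, not data from any real deal.

initial_stake = 0.01          # 1% at hire
dilution_per_round = 0.26     # assumed dilution per funding round
rounds = 4

stake = initial_stake
for _ in range(rounds):
    stake *= (1 - dilution_per_round)

exit_price = 200_000_000      # $200M acquisition
preferences = 40_000_000      # assumed 1x preference on invested capital

common_pool = exit_price - preferences  # what's left for common holders
payout = stake * common_pool

print(f"Stake after dilution: {stake:.2%}")      # ~0.30%
print(f"Payout at exit:       ${payout:,.0f}")   # a few hundred thousand
```

Note this ignores taxes and the six-month lockup, both of which cut further into the real return.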
Having $130k as a baseline salary for the analysis is silly. (However, a total compensation of $130k is not silly)
If you're the type of person that can't stand repetition and predictability and want the emotional roller coaster of creating something new, then join a startup!
We don't have blog posts talking about the payouts between being an artist and a hedge fund manager, and yet people still become artists. Why do we do this in the startup culture? We shouldn't feel apologetic or like we're missing out on something if we decide that what makes us happy is to work at a startup.
If you can get a job at Google, it's very different than getting a job "at a big company" and has historically actually increased your odds of success later on (via the propensity of investors to fund Xooglers). So I would definitely agree - taking a job at Google when you haven't had one before vs. starting a startup is a very legitimate choice to make. Taking a job at Microsoft vs. starting a startup, I would argue (and the X-Microsofters in my life) a very different one.
- 3 years in small companies outside the Bay Area
- 10 years in Bay Area startups
- ~2 years in Bay Area big corporations
I can say the following. The transition from startup to big company was really tough. Part of this was because I was used to being "the guy" for such a wide range of things. At big corporations, these roles are divvied up into 5-10-15 different roles. This is both good and bad. On the plus side, I can relate somewhat intelligently to colleagues across a very wide range of roles. On the minus side, the colleagues all (rightfully) feel their areas are their domain and don't typically recognize expertise coming from outside their team. Having the knowledge but lacking the credibility was hard to reconcile until I fully realized what was going on.
Orthogonal to the point of the article, but the only time you can really get a salary bump is before you start, so make sure to negotiate like hell before you get in. Once you're in, you're going to be grinding to get a 3% raise (contrary to what the OP states).
His comment about cost of living read as kind of an outright dismissal, which was very odd. I don't think Google will let me work from Milwaukee.
Where is the information on travel, quality of life (owning a house?), etc?
It's just as fun as arguing which is the better phone, Android or iOS... it's a super old, boring, divisive conversation.
As somebody who worked at Google for 5+ years, I'd like to emphasize a point: if you sustain good work for a few years, you will be rewarded well beyond expectations. Your compensation will depend on your performance, not on your starting salary. I think this is unique to Google and maybe a few other tech giants. From this point of view, the generalization to 'big company' is flawed. But otherwise the points made by the OP are true, and I wish more new grads would see it (and believe it).
Your overall contribution to the project is what matters. Whether you work 30 or 50 hours a week to get there depends on you, of course.
And those rewards do not depend on the stock price going up. That's just cream on top.
Do contract work at $250 an hour or equivalent compensation in shares at >= series B funded startups. Choose the level of risk you're willing to accept in cash vs equity. (Nearly) completely avoid politics and other office related BS. Take vacation between gigs if you like. Get a very broad variety of experiences at different companies and get a good feeling for what companies are willing to pay for should you decide to start your own.
> But the total comp for a good hacker is $250k+/yr, not even counting perks like free food and having really solid insurance
Maybe that's anecdotally true for the big companies in a small high demand area of the country (really just the bay) but not so most everywhere else. Probably why Paul Graham used a figure less than 1/2 that.
With that figure exaggerated, I'm not sure I can take any other points made here seriously...
AFTER all those 10 years, the Sr. Engineer pay at Google and Microsoft was probably $250k+, when they needed that 10 years of specialized experience for a $10M+ project and could pay to play.
Anec-data: As an SV person who's worked at BigCorp and funded startups (from co-founder to CTO)... these numbers definitely trend high amongst my friends and acquaintances who graduated Stanford CS in the early 2000s.
However, his argument becomes truly hand-wavy and spurious when it comes to the interesting-work part. Of course some of the most interesting tech papers are coming out of Google; that doesn't mean anything for the average Google employee. Also, in his argument, he goes from "here's what the average can expect for compensation" to "you need leverage to work on interesting things at big companies... get some".
Of course not all start ups provide interesting work, but I think the dice roll is solidly in the camp of start ups for interesting work against the Big Cos, whose interesting work dice roll, I would guess, is similar to the start up comp dice roll. Unless rebuilding a bog standard UI framework is your thing.
At my first startup job I got to:
- build a compiler
- implement shared memory on high traffic services
- play with any language I wanted
- manage smart and interesting people
- more stuff, but I'm tired of typing.
Having worked at Google for 7 years and now at a startup, I have to say the big-company bullshit factor is huge. Even at Google, where the bullshit is gold-plated and served with delicious, locally sourced, healthy and tasty side dishes.
I learned a lot there, worked with brilliant people, and was able to carve out super gratifying work. But it just got silly. I'm thrilled to be gone, and hope never to go back.
A tech Big Company values engineers and pays accordingly.
A non-tech Big Company simply does not value engineers or tech, and considers both to be cost centers instead of drivers of growth.
I would plainly say that jobs at either a Startup or a tech Big Company would both be better options than a job at a non tech Big Company.
All I can say is that the market is actually a lot more efficient than most people give it credit for. I remember coming out of school and thinking "Why would anyone work for a big company when the payouts for startups are so much better?" After a bunch of experience, I've found that:
1. Those big startup payouts are much rarer than a typical new grad conceives. They're also more widely distributed: perception is that startups are "go big or go home", but a number of companies end in talent acquisitions that are just slightly less or more than what the founders would've earned at a big company.
2. Compensation at big companies varies wildly, and people with the effort & effectiveness levels that you'd expect from a startup often are actually making startup-level money. There is zero reason for anyone doing this to publicize that fact, and oftentimes they're contractually forbidden from disclosing it.
For people trying to decide between these - forget about the financial rewards and ask yourself "What would you like your working life to be like?" I'd also forget the common wisdom about startups = no life & big companies = drudging pace; both of these are inaccurate on a micro-level, and you can find startups that prioritize work-life balance or teams within a big company where everyone's life revolves around work.
Instead, think about the problems you would like to solve. Do you want to do cutting edge research that pushes humanity's knowledge forwards? Work for a research lab or big company's research department. Do you want to bring new technologies to the masses? Then you want a startup, probably one that has spun out of a major university with a couple professors as founders. Do you want to put social hacks in motion and bring technology to ordinary lives? That's probably also a startup, probably one with young founders. Do you want to scale technologies and work with big data or machine learning? Big company; startups usually lack ownership of enough data, unless they're a consultancy. Do you want to apply technology to an industry that currently does things backwardly? Join a startup whose founders have significant domain knowledge in that industry.
If you work on problems that you believe in, you'll find that you're much more effective at solving them. The financial rewards follow after that; money is a lagging indicator for value generated, not a leading one.
Can someone explain why salaries in, for example, Europe are so much lower? Out-of-uni salary seems to be around $38k in Europe, whereas in the US everything under $90k seems very low. Is it just the difference in purchasing power and costs?
Typical startup raises a $500k seed round and is two founders + 2 senior engineers building the v1. If founders want Google-caliber people, they can't afford to pay $200k; the cash just isn't there.
So companies will have to start giving out real chunks of equity (5% or more) to early key hires. And "who you are" will matter even more than it does now (e.g. look what having Spielberg or another A-list director does to a movie's prospects, they can get the good actors, etc.)
I just don't see any other way this plays out. SV will become more like Hollywood.
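The seed-round scenario above can be sanity-checked with back-of-envelope runway math. The 18-month runway target is an assumption; the $500k round and four-person team come from the comment:

```python
# Back-of-envelope: why a $500k seed can't support $200k salaries.
# The 18-month runway target is an assumption.

seed = 500_000
team = 4
target_runway_months = 18

burn_per_month = seed / target_runway_months      # total burn budget
per_person_per_year = burn_per_month / team * 12  # max average salary

print(f"Total burn budget: ${burn_per_month:,.0f}/month")       # ~$27,778
print(f"Max avg salary:    ${per_person_per_year:,.0f}/year")   # ~$83,333
# Even if every dollar went to salaries, ~$83k/person is far short
# of $200k -- hence the pressure to compensate key hires with equity.
```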
If Google and Facebook are your median exemplars, I really want to know what your sample is. Those aren't exactly typical large companies. Amazon, too. Apple and Microsoft may be, at least in this industry, but I still have the feeling that the total number of technical people this article seems to be addressing is much larger than the total employment of the "large companies" (and small ones).
Also, another minor issue: During my years at IBM (technically, contracting for IBM, but even then you couldn't pay me enough to sign on), I did get to work on very interesting projects and learned a great deal from some very smart people. On the other hand, everything I did was shovelled into the trash can immediately after I finished it. Project cancellations, reorganizations (IBM: I've Been Moved), etc., these are fun facts. Of course, I don't expect start-ups to be any different in this regard, given their failure rate.
Work at whatever job meets your financial needs, everything else is gravy. This will be hardest at the start, but as you gain seniority, you'll have increasing freedom to work where you want. All things being equal, established companies will pay better. Save money, but don't take jobs you hate to save more money - it's not worth the equity.
If you know what you want to work on and can get a job in an established company, do that. Established companies are much more likely to actually have you work on what they hired you to work on. They are also much more likely to continue the project you were hired for. There are exceptions, but generally you will have much greater resources available to you. Your work may never see the light of day, or may only be used internally, but you'll walk away with a lot of experience on a specific area.
If you don't know what you want to work on or you like working a little bit on a lot of things, work for a startup. Especially for junior roles, there is enormous flex in terms of what you'll actually work on from day to day. This is not limited to programming, but includes all branches of tech work: architecting, service monitoring and setup, marketing, etc. The downside is that it can be hard for you to explain, in a substantive way, the amount of work you did during this time. The upside is that you learn a lot and get a feel for many things.
As for future options - startups will give you many shallow options, working at a big company will give you lucrative & focused options. If you work only in one area, understand that you are betting on that 'kind' of work to continue into the future. The more experience you have, the more value you have in that market. The value in that is entirely dependent on the value the market puts on your skills. You can often pivot, but it can be difficult. Keep that in mind when looking at future work.
At Big Company if I get bored I can move to a different group fairly easily. And I know my work will be used by millions of people a day. I think that's the part people forget; A lot of people will use your product. At a Startup good luck with that.
And another thing, if you start to have conflicts with someone at a Startup and it's not resolved, it will be a pain to work there. At Big Company, at least you can move to another group or hope they will.
The problem with any comparison is hindsight. Many other large cap tech stocks do not behave that way. Certainly you could have joined Microsoft at a certain time and "given up" at the "wrong time". Your cash comp might have been quite good (or not).
The thing to ask yourself (and I definitely am not judging or implying one way or another) is where does your code go and what does it do? A lot of that IBM code, well... And of course many startups don't make it as well.
If all you want is money with low risk the choice is clear. If all you want is a chance at a huge payoff with high risk then the choice is clear.
If you're happy with the end result of your code then in a Fountainhead sort of way that's what matters. Then whichever risk path you take for your compensation is secondary. And your views on this might change at different life stages.
Finally the ability and skills to navigate and succeed in either a startup or a big company are different. Depending on when you choose and how you grow and evolve you may or may not be at the right place at the right time.
That's my shortest comment to a very long topic :)
However, even if we return to the 2004 scenario of $80k/$100k, that is still comparable to other professional careers and better than most college majors. So I'd recommend staying in CS if you like it.
I make ~$110k/yr after taxes (!) and insurance and I work 10 hours per week, tops. I literally watch Netflix more than I work on an average workday. I know it's not great money (don't have a Tesla), but it's quite cozy.
That being said, a startup in my area would pay 50-70% as much.
For fresh grads there is almost no negotiating room on compensation. Additionally, what one learns and experiences is critical to career growth. At one SV bigco I got paid intern-level wages to manually label machines in a server room for a month (hired as a developer). I'm sure this is not representative of that company. At another bigco I have seen fresh grads and interns get plum work assignments (and pretty good compensation, according to surveys), and I know that's not the case for many other parts of that company. Neither of these companies is appamagoogsoftbookflix, but I know both these scenarios can be found pretty much anywhere.
Even midcareer professionals should care a great deal about the people and projects they will be working with, though at that stage the ability and need to optimize compensation can be more acute. So, early in one's career, it's worth being wary of over-optimizing for compensation and company prestige.
No different than when people talk about how they bought a house for X and then sold it for Y somewhere down the line, giving the impression that they made a nice return on investment. Of course, such casual math forgets that it costs a ton of money in interest, taxes and maintenance to hold such an asset for that length of time, and these costs need to be subtracted from what appears to be a nice lump-sum payment. In reality, net-net, many people never make a dollar owning their nice fancy house, even though selling it results in a nice lump sum.
With jobs and houses there are lots of other factors at play, but when it comes to $ it's important to understand the difference between perceived "windfalls" and actual net return. The true results are often not what people expected.
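To make the house example concrete, here is a toy calculation; the purchase price, sale price, holding period, and yearly carrying costs are all made-up illustrative figures:

```python
# Illustrative only: headline "profit" on a house vs. net return
# after carrying costs. Every figure here is an assumption.

purchase_price = 300_000
sale_price = 450_000
years_held = 10

interest_per_year = 10_000    # mortgage interest
taxes_per_year = 4_000        # property taxes
maintenance_per_year = 3_000  # upkeep

headline_gain = sale_price - purchase_price
carrying_costs = years_held * (interest_per_year
                               + taxes_per_year
                               + maintenance_per_year)
net_return = headline_gain - carrying_costs

print(f"Headline gain:  ${headline_gain:,}")    # $150,000
print(f"Carrying costs: ${carrying_costs:,}")   # $170,000
print(f"Net return:     ${net_return:,}")       # -$20,000: a loss
```

The seller remembers the $150k lump sum, not the $170k that dripped out over a decade.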
Then I got to the final footnote. That kind of Kafka-esque bullshit is what keeps me working in startups. I could not stand working in a culture where policies which are widely seen as unhelpful and ridiculous are still routinely inflicted upon employees.
Yes, startup "bullshit" also exists, but the difference is that it's totally possible to avoid it by picking a good startup to work at. I've never had anything like the experiences described, despite working at 3 different startups. On the other hand, big-company problems seem to be universal: it's not possible to find a big company without at least some arcane and employee-hostile policies. (For example, Google's blind hiring keeps me from ever considering working there.)
Definitely, and not just for the reasons stated in the article. I believe it's pretty common for employees to get escalating stock options as they progress in the company. When I first joined my startup as the second engineer hire, I got something like 0.2%, under a four year vesting schedule. After a year I got a bit more, like 0.3%. After another year and a half or so, I was given enough to bring me up to what is currently 1% -- but again, under a four year vesting schedule. Most of my options vest under a four year schedule that started 2 1/2 years into working there.
Is it pretty common for regular engineers to get 1% options right off the bat? I thought it wasn't.
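The staggered-grant story above can be modeled as a set of overlapping four-year schedules. The grant sizes and dates below are assumptions that roughly mirror the comment (0.2% at hire, a small bump at year one, a top-up to ~1% total at year 2.5), and the one-year cliff is an added assumption:

```python
# Sketch: total vested equity from staggered grants, each on its own
# four-year linear schedule. Grant sizes/dates loosely mirror the
# story above and are assumptions, as is the one-year cliff.

def vested(grant_pct, start_year, now_year, cliff=1.0, duration=4.0):
    """Linear vesting with a one-year cliff over `duration` years."""
    elapsed = now_year - start_year
    if elapsed < cliff:
        return 0.0
    return grant_pct * min(elapsed / duration, 1.0)

# (size in percentage points, start year)
grants = [(0.2, 0.0), (0.1, 1.0), (0.7, 2.5)]

for now in (1.0, 2.0, 4.0, 6.5):
    total = sum(vested(pct, start, now) for pct, start in grants)
    print(f"Year {now}: {total:.4f}% vested")
# Only at year 6.5 does the full 1% vest: most of it is on a clock
# that started 2.5 years in, exactly as the comment describes.
```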
Sure, that may not be true across the board, but the author's assumptions are far reaching. I mean, 250k? Really? I'm not questioning his honesty, I just envy the bubble he lives in.
Is this an omission in the post, or is the early exercise option less common than I thought?
So the startup people talk about experience in the context of assuming you will also want to do a startup.
The author of this piece kind of ignores the value of having to make those difficult choices in the face of shipping something. But that value is only towards actually starting a company. It's not nearly as valuable at a large company.
So that bias shows through in the writing although I agree with the rest of it. It's just a tiny argument towards the value of startup experience that in context for some people does have immense value while for others it is negligible.
I was just perceptive enough to notice that, copy the entire table into a comment, and pontificate on it, then notice I had missed the sentence in question immediately after commenting, and delete my comment 3 seconds after posting it. :/
Should the goal be to spend time studying for interviews, or is this some other strategy? This article assumes you're at some level to command this level of compensation, but how do you get to that level to begin with? I know the classic advice: do good work, write excellent code, etc, but is there more specific advice?
"You should figure out what the relevant tradeoffs are for you."
The author is just summing up his experience. Individual mileage definitely varies!
And the comments about "I make >$250k at GooAppBook" are self-selecting, from those who didn't get shown the door. They may not be typical for many reasons.
More useful questions:
How likely is it that a programmer of median competence/non-laziness will make $250k after five years?
Conversely: how much of an exceptional workaholic do you have to be to have a reasonable chance at $250k?
How do the probabilities compare with those of a start-up, or a non-technical big corp, or a Wall St/City job?
Also relevant is what you get to work on, how much freedom you have to choose, how research-ish it is (if that matters to you), and how much you'll still be learning at T5.
And let's not assume big tech corps will still be big five or ten years from now. Former big tech corps - IBM, DEC, HP, etc. - all looked unassailable at various times, but became very assailable within a few short years. It's naive to think the same couldn't happen to the leaders today.
They had the solution available to them from day one. Since they can clearly identify third-party bulbs, they could have simply presented a warning along the lines of "We've detected you're using bulbs that are not certified by Philips. For best results, we recommend using only certified bulbs (link to purchase here) and cannot guarantee a quality experience with the bulbs you've purchased. Click "OK" to continue."
I'm sure third-party products were causing problems; however, wholesale blocking of them via a software update is a terrible solution. They literally turned out the lights on their customers. Meanwhile, I'd be willing to bet support costs immediately spiked -- people call support when things don't work, and Philips just pushed out a change that increased rather than decreased that.
Unfortunately, I think they've bruised their reputation quite a bit with this move. It's now delayed my purchase of such a product until I am convinced that they have a solid third-party certification program in place (with very low licensing fees) or (even better) a guarantee with the product that they won't try this again when the market is more mature and they have the option of ignoring complaining customers.
Their competitors could see a rise in sales by taking advantage of this blunder and committing to open protocols. I haven't looked at the landscape in this category, yet, and had just assumed I'd be buying the Philips Hue eventually, but they've motivated me to do more research.
At least in the future they'll be able to stick to "if it's not certified by us..." for customer support, which was likely the original impetus (along with a desire to cut off cheap alternatives to their devices).
I'm not mad at this at all.
With this as the background, it's surprising to see a large crowd defending the equivalent of Ford-branded gasoline.
They have proven they can't be trusted with this sort of power, and that is a one-way trip. You don't come back from that; you don't get back off my list.
Hats off to whoever made this happen over there! The world is better when things are open.
Do they really believe it is a small number of customers that use non-Philips light bulbs? I mean, good for them in reversing the decision, but the damage is already done (check out Amazon reviews for one) and it should have been easily foreseen.
Who writes these things, and why do their supervisors allow them to keep working there?!?
3/5 stars: http://www.amazon.com/Philips-455303-White-Starter-Generatio...
4/5 stars (previously 4.5/5): http://www.amazon.com/Philips-456210-Ambiance-Starter-Genera...
It's like connecting to your office chat with an IRC client because you figured out that's what they are using under the hood. Why would you scream bloody murder when one day your IRC client stops being compatible with it? They never advertised this to begin with!
You can't exactly demand functionality that you were never sold.
In order to do it properly, there should be standards that major providers agree upon making integration much easier and predictable. That takes plenty of time.
Then you probably need some walled garden to control the experience. Approved apps, approved 3rd-party providers, etc. If some crappy app is released, regular users won't blame the developer but the platform, as was discussed in great detail in other threads. We need to get out of the HN bubble. Seriously. We forget that, for A LOT of people, a computer is a device to watch porn and browse Facebook, and that's about it. Chances are, it will cause a wave of anger in communities such as this one (where there's a strong sentiment for open systems).
This work has to be done by a number of large providers (read: long processes) and followed by startups popping up and disappearing now and then. This stuff always takes time.
They got many people very pissed off and probably never buying or building products with their chips again.
Despite being a programmer -- but not front end work -- I find myself struggling more and more with UI and especially icons. I think so much of it reflects the current trend of zero empathy for the end-user. Fortunately, I know enough about computers to get around many of these issues, but icons are the one area that I still struggle with.
Unless you have some site with several million users, teaching end-users is a wasted effort, and it does well to either piggy-back on other ideas or use text. Even Facebook is using text, and it seems a little odd that anyone smaller would feel they have lessons to teach the end-user about UI. UI, in my opinion, doesn't mean "pretty"; it means "usable", which is sort of implied by the U meaning "user". If a significant portion of your user base is computer illiterate, which will often be the default, it does well to design the UI for the lowest common denominator. Once your user presses the back button because your icons made them feel stupid, you've lost a customer, and that is a very high price for "pretty."
- Menu - Typically at the top of the screen or window; you can go there and find everything you can do with the application, in a consistent place. Moreover, as you move over menu entries, a helpful explanation of what does what appears at the bottom of the screen.
- Toolbar - A place which has most commonly used tools. They are represented by icons, text, or both (and you can actually make a choice in settings). Typically, you can configure the toolbar to your heart's content. Toolbars can also depend on the context.
- Context menu - Right clicking on some object will give you menu of things that you can do with that object. Again, explanation of what each of these actions does appears helpfully at the bottom of the screen.
- Tooltips - As you mouse over any UI element, it will helpfully explain its purpose.
- Buttons - Things that are clickable look visually different from things that are not. For example, they have different shading. Buttons may also give feedback that they are clickable when you mouse over them, and if you click something clickable, it will acknowledge the click by changing its shading.
I think the big problem here is that UX/UI people want to be artists and so create art, not useful application for end users (which often means follow some standard!). So the end result is even more disastrous when programmers design UI (at least they are rational about it, in some sense), but the cause is the same - it's putting your own ego (behold at my artistic creation!) in front of actual usability.
The article is misguided, in that it assumes that the meaning of an icon only exists in the lines/color/visual form of the icon. Icons are visual language. You have to teach the user what the icon means. Either the user has seen the icon before (such as in airports), or if the user hasn't, your UI has to accommodate that.
Once that happens, then icons are way faster.
Icons are like visual acronyms. The sequence of letters 'T', 'C', 'P', 'I', 'P' means nothing to someone who doesn't already know what 'Transmission Control Protocol/Internet Protocol' is, but once you do, TCP-IP is way faster to recognize, speak, type, and to share.
For fun, I looked it up. Some of the icons are obvious, a few make sense once you know the basics, and still others seem almost sillier once explained.
I don't use Apple Mail, so this is an example of what a completely new user --- albeit one who has used computers for a long time --- thinks when they see those icons along the top:
- It's a closed envelope. Mail? Send? Close?
- Write? Edit? Compose? Sign?
- No idea what this is.
- Trash can. Deleted items? Delete?
- Left-pointing arrow --- but coming from bottom and looking like it expands outward. Back? Open message in separate window?
- Two left-pointing arrows. Rewind?
- Forward to next message? And why is this arrow not coming out of the bottom, unlike the two to its left?
- Flag. This is probably the clearest of them all.
Even if you decide to use a textual button, offer a hover tooltip if there's a keyboard shortcut. (Hello phpStorm, I'm looking at YOU!)
So yeah, I think the article is spot on.
(I'm looking at you Eclipse)
For instance I liked OS X's original toolbars, in that you could easily shift between modes that displayed text or did not display text. (Then they entered their phase where toolbars couldn't be customized and all icons were the same shape, which is far less sensible.)
Text-only is as bad as icon-only. Combine the two.
Also, the recent trend toward black-and-white icons sometimes makes them harder to understand. Colorful icons worked fine in older Office (<=2010) and elsewhere. Though finding a good icon set with 500+ generic icons that fits one's needs means making compromises.
The only time I prefer icons is when I am already in graphics mode while using graphics software.
But the easiest icon is that weird squidgy thing that looks like a ... well, I don't know, I'm expecting users will click on it and eventually figure it out. Squidgy thing takes maybe an hour to create. The internationalization team's SLA is not an hour.
I'm thinking about for example a swipe based touch interface. A lot of functions now are click here, then click there, when they could be done in a simple gesture. But the problem is that one would have to learn the gesture somehow in the first place. And people don't read manuals, even if there were any nowadays.
>> Google decided to hide other apps behind an unclear icon in the Gmail UI, they apparently got a stream of support requests, like Where is my Google Calendar? <<
On mobile you have to try and hope it is not the "irrevocably delete thread" button.
I think icons next to a text label are very useful because they guide the eye so you can quickly see where your buttons are.
> Facebook as a final example: they lately traded their unclear hamburger menu icon for a frictionless navigation that combines icons with clear copy. Well done
The 'hamburger' button is not "unclear", because it's so commonplace that, at the very least, it has an intuitive meaning in the context of an Android app: it's where I expect more options, settings, etc. How am I supposed to know that's now kept behind the much less clear icon that seems to show a man moving quickly?
This LiveScience article is an easily read summary that also mentions a critique of the approach.
Given the unfortunate history of falsified Korean scientific research, it would be prudent to withhold judgment until these results have been reproduced in other labs around the world.
Looks like interesting research, but I'm sure this stuff probably can't be that good for you!
It is interesting the way in which various groups leap upon some research reports but not others. The challenge is always having the context for the broader state of research to understand whether it is meaningful or new or not.
The present mainstream view of Alzheimer's is that amyloid (and tau) clearance is the way to go. Immunotherapies are the most developed tool, but that is so far proving to be hard - it is too early to say whether failures in clinical trials are because it is hard or because amyloid clearance isn't as useful as thought in this condition. Which could be for any number of reasons including that amyloid-related biochemistry is the problem, but clearing a particular variant or stage of its aggregation doesn't touch that problem area.
Amyloid levels in the brain are in fact highly dynamic on a very short timescale. That Alzheimer's develops slowly supports the view that the condition is a slow degeneration of natural clearance mechanisms, such as the filtration performed by the choroid plexus, or the more recently investigated peristaltic passage of fluid out of the brain by other channels. E.g.:
On that latter point, the Methuselah Foundation just a few days ago seed funded a startup company that will investigate whether reversing the degeneration of peristaltic fluid passage with aging will improve clearance and thus stop the progression of Alzheimer's. It's based on as yet unpublished work by Doug Ethell at GCBS Western who presented at Rejuvenation Biotechnology 2015 ( http://www.sens.org/files/conferences/rb2015/RB2015-Program.... ), and has the merit that it should be a fast failure if the theory is wrong, unlike many of the other efforts in Alzheimer's research.
Of course, this is only a small part of the paper, and I have no training to appreciate it more.
If Alzheimer was simply a deficiency of nutrients, I wouldn't think this way, but if it really is a protein that "can be cleared", why did it get there in the first place?
The test for Alzheimer's in the first study (previously reported but summarized again) was to quantify how much the mice deviate from solving a maze they had been trained to solve.
In the first study, they injected amyloid beta aggregates into mouse brains and found that EPPS administered orally at 30 mg/kg and 100 mg/kg restores the ability of the mice to efficiently solve the maze.
Next they tested toxicity and quantified the amount of EPPS that passes the blood/brain barrier. For toxicity, they found no signs of toxicity at 2000 mg/kg (20x dosage). For the blood/brain barrier: as blood concentration goes up, brain concentration should go up too if penetration from blood to brain is good. If the barrier is high, then you immediately get high blood and low brain concentrations. The point where there is no longer a significant increase in brain concentration as blood concentration increases is used to determine effective dosage concentrations. They found that at 100 mg/kg they were starting to see increased blood/brain ratios, so they targeted 10-100 mg/kg for the next study.
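To make the dose-finding logic above concrete, here is a purely illustrative sketch (all numbers invented, not from the paper): if transport across the blood/brain barrier saturates, each extra mg/kg in the blood buys less and less extra brain exposure, and the point where the curve flattens bounds the useful dose range.

```python
# Toy saturating-uptake model; brain_conc, vmax, and km are
# hypothetical names/numbers chosen for illustration only.

def brain_conc(blood_conc, vmax=50.0, km=80.0):
    """Michaelis-Menten-style saturating uptake into the brain."""
    return vmax * blood_conc / (km + blood_conc)

if __name__ == "__main__":
    doses = [20, 40, 60, 80, 100]  # hypothetical mg/kg steps
    prev = 0.0
    for d in doses:
        c = brain_conc(d)
        print(f"{d:3d} mg/kg blood -> {c:5.1f} brain (gain {c - prev:4.1f})")
        prev = c
```

Each equal dose step yields a smaller gain in brain concentration, which is the diminishing-returns pattern the commenter describes using to pick the 10-100 mg/kg window.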
The MAIN STUDY (which included identifying the dosage level) used mice engineered to "get Alzheimer's" starting around 5 months of age because they carry a human gene (transgenic) encoding a precursor of the AB plaques. This transgenic model is established, and the mice showed the expected amyloid beta plaques and had difficulty solving the maze at 10.5 months, as expected.
Starting at 10.5 months, they gave oral doses of EPPS at 10 mg/kg and 30 mg/kg and monitored maze solving along with several additional tests: likelihood to freeze when presented with negative input (fear conditioning) and ability to find hidden platforms when swimming (water maze). Both tests improved significantly, to the wild-type (no Alzheimer's) level, when taking EPPS. They also tested dose dependency at 0.1, 1, and 10 mg/kg; there was steady improvement at higher doses.
They also took slices of the mouse brain and tested whether or not the neurons responded differently to electrical stimulation. They found no difference in wild-type (WT, non-genetically altered) or transgenic (TG, altered) response to electrical stimulation with and without EPPS. This hints at no difference in neural activity with or without EPPS. They also gave EPPS to WT for the behavioral tests and did not see a difference (although that was not shown in the behavioral test figures).
They also took slices of the brain and stained them with a fluorescent dye to show the Alzheimer's associated plaques. There is a significant quantifiable reduction in plaques in the treated mice.
They used several other techniques to confirm that these actually were AB plaques and that disaggregation occurred at a specific site of activity. I won't go into those specifics, but suffice it to say that this was a VERY well designed and executed study across multiple lines of inquiry, and all of the lines of inquiry point to the same conclusion:
EPPS rescues hippocampus-dependent cognitive deficits in APP/PS1 mice by disaggregation of amyloid-β oligomers and plaques
And that's why it's a Nature article.
I'm thinking more in terms of computer simulation?
From "top 3% of coders" to "your product will get the 1st spot if you scratch our back with a small slice of the pie or counter-promote our product with yours" to "we will only invest in you if you get referred through an acquaintance of ours", the game surely does feel more rigged each day.
The upper echelons of tech sure do share more similarities with high finance than they would like to admit...
This is an odd racket, to say the least.
Re/code wrote a relevant article a few months back (http://recode.net/2015/06/18/product-hunt-the-startup-kingma...) about Product Hunt elitism, which I was interviewed for and the response from the PH team to the article was essentially "haters gonna hate." It's disappointing that nothing has changed since then, and arguably, things have gotten worse.
Product Hunt is a new, closed, exclusive startup community run by a for-profit company that will eventually have to start selling you something.
Not sure why people complain about PH so much... just don't use it. There already is a perfectly good community of startup people out there that has much more incentive to stay "pure" than a for-profit one. Sure, HN isn't perfect, but fundamentally it is always going to be better than any for-profit communities.
(And also this obligatory comment: If you want to build a successful company, stop wasting your time browsing startup communities and spend your time talking with users and building your product)
I'm almost never harsh to a fellow founder, but I thank God Ryan Hoover doesn't wield much influence. His are the wrong hands from which to expect equity or fairness.
I'm following up with you about your post on PH.
There is insane bias towards outsiders of the club. Here is my case in point.
I submitted my startup https://callbase.co up to FIVE times and it was never approved. However aircall.io a competitor has made the front page TWICE in that period.
Of course having a handle @OoTheNigerian does not help :D
As at the time my second submission was being rejected, Mattermark's Newsletter was making the front-page as a product (1 of 5 http://www.producthunt.com/tech/mattermark-4#!/s/posts/matte...). Yup, ridiculous. (i have absolutely nothing against the great work Danielle is doing).
This is one of several.
I sent Ryan (copied) a strongly worded email after several ignored ones, and he "offered" to allow mine through on a weekend. Lol.
This is just a case in point of how hard outsiders (I live in Lagos, Nigeria) find the quest for success. Silicon Valley is a meritocracy, but you have to be seen first to be considered. No?
Of course, it is his platform and can do whatever he wants with it. However, it should be clear to him what he is doing. Perpetuating the cycle of the powerful being more powerful.
It would be nice to see the demographic representation of his all powerful voting clique.
After reading this, Ryan may (or may not, after seeing this) now go posting about us while we're asleep or not ready.
Great write up BTW!
Go back a few years and everyone used to talk about their struggles getting featured on TechCrunch; I didn't believe it was make-or-break back then either.
Do people take PH seriously?
Ryan Hoover has not only outsourced VC product discovery, he has outsourced its class system too. It's incredibly disheartening to be outside the loop, trying to get your product noticed, and submitting it to what you think is a free system only to have other products by well-connected insiders block it out.
When I saw Hoover and Jason Calacanis congratulating each other on Twitter I knew immediately what was going on. Despite multiple emails, Hoover wouldn't even give me access so I could comment on competing products. I'm glad this is coming back to bite him and his investors too -- they went along with it.
I don't expect anything to change because sites are a reflection of the personality of the people who run them and Hoover has already shown he is completely corrupt. Meet the new boss, same as the old boss.
An independent contender in the war for eyeballs/voice in the hacking/tech/entrepreneurship community -- how exciting! I would imagine while their motivations might be similar to what YC wants with HN (distribution, influence), they could possibly open up and serve new members in the ecosystem that aren't, can't, or don't want to be a part of the HN/YC pipeline.
Building a working group of heterogeneous independent sources to serve new and exciting topics is important to breaking out of the echo chamber we so often create for ourselves within tech. I was hoping Product Hunt could bootstrap the entire venture, stay clean, and stay true to the spirit of a meritocracy.
Then they went through YC, and now I see the same "influencers" there as I do here, with the same system in place to promote their own vested interests. It just makes me slightly sad that the pressures of succeeding create collusion among players in this market, thereby perhaps obscuring the potential for new/interesting/different emergent technologies/startups to thrive.
Among my peers, over time PH has become less of a community set out to serve the good of the people and more of a pipeline for quick sales or testing new ideas, leaving the feeling of what was previously described as the "TechCrunch of Initiation".
Product Hunt has essentially supplanted TechCrunch in the YC/TC relationship of yesteryear, albeit to an even more perilous extent. Products are no longer vetted by working professional journalists, whose obligation should be to the consumer and not the producer, but rather by the very product's investors, advisors, and "insiders".
We therefore must ask what is the value-add here? Is it truly a wonder that it proves marginal, and perhaps even detrimental, to the long term success of the startup community as a whole?
He seems like a super well-intentioned person, so I'm surprised to read all of the commentary here on HN. Am I being duped by some Product Hunt scam that I'm completely oblivious to?
Someone submitted my site to PH a couple months ago, it got up-voted 20+ times in that "upcoming" area but never was moved to the front page. I believe it ended that first day with more up-votes than some of the products that were featured.
I reached out to the PH guys on Twitter and they told me to get more people to vote for it, or something to that effect. I noticed a few products jumping straight to the front page without the upcoming purgatory.
I have read a number of comments writing these issues off to the fact that PH is a "for profit" company. I think that is a bit too jaded an opinion to have no expectations for this to ever be different. My understanding is that Reddit does not suffer these same issues. I think a for-profit venture could actually benefit greatly by being transparent. I think it would take founders that are looking further down the road than the PH guys appear to be and not getting caught up in the immediate gratification of glad-handing and being part of an 'inner elite.'
Full disclosure, I still look at PH pretty regularly. :-p
Mostly what they're selecting for is "is this of interest to our audience" - of which said audience is currently mostly free tech / designery / social type things (even as they start to add more categories).
While it's nice to be featured, it's quite unlikely to bring you a large amount of traffic and/or signups. A submission to a decent sized sub-reddit will likely drive 2x the traffic that ProductHunt will, a submission to BetaList more signups and a front page HN post 10x.
If there's a reason to get featured it's to try and get some feedback from the community (if they're your audience) as they tend to be quite helpful.
It appears in this instance my general cynicism of all-things-Marketed is confirmed.
But what would an alternative world look like? Is the industry trapped in some product placement local minimum?
What if we could trust online reviews by default? Would the same industry make more money or less, or would it just go to different people?
Often, defenders of invasive advertising say "it informs people of products which are relevant to their interests". Shouldn't then advertisers promote integrity in their other Marketing venues as well?
It's like a boy's club where they pass around the neighborhood bike for everyone to ride, only to find another one after they're all done riding it.
Even more so, I've seen more "here's a landing page, we don't even have a git repo yet, just trying to validate the idea, so give us your email" shit on PH than I would on Reddit.
Has anyone seen any value come from PH?
Really, what should tip people off even more is the inability to comment. If viewers of the site can't actually interact, since commenting is only allowed for "approved" users, they should realize that the whole thing is just a scam.
See any ads on Product Hunt? See any monetization strategies? Oh wait, the whole website is an ad, and only those in the know or those who pay will get featured.
Finance and the stock market are rigged the same way. A select few (the rich) get inside info, reporters and analysts write positive/negative spin on companies and profit, traders screw their customers; it is everywhere. Different market, same behavior.
Looking through old threads I found this cracker of a post in reply to Ryan about their "anti voter ring policy" - which his tweet seems to counteract. https://news.ycombinator.com/item?id=9932641
I don't get why people think PH owes them anything. Yes, it's all about curation. But yes, anyone can post there, provided they have a good product and they socialize a bit.
And this is what this is about. To me, PH is a social network for founders. They show off their projects, discuss them, and get feedback.
To all the people complaining that it's not egalitarian: would you create a Twitter account, avoid engaging with anyone, then complain nobody is following you?
The same applies as in any social network: if you want people to get interested in what you're doing, start by being interested in what they're doing, and chat, a lot.
Or maybe this is a case where now that I know the name I'll see it everywhere. Funny how that works sometimes...
This can't be entirely true. I see featured posts on PH that are nothing more than "Version 2" of some previously featured "products". But the links go to the same place.
Oops having read the article - wow - Payola Hunt! https://en.wikipedia.org/wiki/Payola
PS: Apparently you have to email firstname.lastname@example.org
Are some of these folks so powerful that if you tweeted at them that they're backing a corrupt bro-club you'd lose any chance of funding?
This can be useful for future projects (such as finding funding), to increase their standing in the SV community, and to establish themselves as marquee valley power brokers.
In this sense, it doesn't make much sense to add more transparency and voting control to ordinary users.
This is pure speculation and assumes the worst. So take this with a grain of salt.
The reason I think this book is nicely packaged bullshit is because it presents exceptions as rules and then tries to build a theory out of it.
I wish it were as easy as Dr. Dweck describes it, but there are gotchas.
I can agree with the distinction between 'fixed' and 'growth' mindsets (although... how do you measure that?), but that success is guaranteed if you believe and try... Not necessarily. Ask 9 startup founders out of 10.
Not achieving "success" (failing) is rarely free: it leaves emotional and physical scars. Repeat it a couple of times and you're either dead or on your way there.
No, success is not guaranteed even if you try many, many times, even if you train a lot and believe a lot.
In fact, the rule is this: No matter how hard you try, you might still lose. Sorry about that.
And the reason for this is not mindset - the reason is your definition of success. If you try to win at the wrong game, you will probably lose at it. So pick your game wisely.
Of course, a fixed mindset will only land you some semi-boring job, a family, a couple of kids, and a lot of mainstream entertainment... I guess that's the definition of "failure" these days... But is it?
By the way, if you want useful advice about how to be successful in life, Bill Gates is a very bad choice. It might be counterintuitive at first, but think about it... As a bird, is it smart to fly around with your mouth wide open in order to catch food... because that's what the whale does?
I know this for a fact because as I become more sceptical/pessimistic over time, my achievements increase. If I was a blind optimist, I would probably fail as soon as reality reared its ugly head.
If someone is really lucky throughout their lives, they will have an optimistic view about the world and the people around them.
Unfortunate people might find a statement like this offensive because they know for a fact (based on their own experiences) that this isn't true - It's almost like saying "It's your fault for being poor; it's all in your head!".
Point being I can say to you "adopt a growth mindset", you do it, but it doesn't work and life throws you 'a curve ball' again and again. Doesn't mean my hypothesis was wrong, and doesn't mean you didn't follow through properly. We can both be right in this case.
All it means is, we should act as if our actions/thoughts count, but accept it as a fundamental property of the universe that they may not 'bear fruit'.
All we can do is embrace the chaos^
^ as in chaotic systems
"Bill Gates: No. I think after the first three or four years, it's pretty cast in concrete whether you're a good programmer or not. After a few more years, you may know more about managing large projects and personalities, but after three or four years, it's clear what you're going to be. There's no one at Microsoft who was just kind of mediocre for a couple of years, and then just out of the blue started optimizing everything in sight. I can talk to somebody about a program that he's written and know right away whether he's really a good programmer."
So does Bill still believe this, or is he a hypocrite in hiding?
In the Bloody Obvious Position, someone can believe success is 90% innate ability and 10% effort. They might also be an Olympian who realizes that at her level, pretty much everyone is at an innate-ability ceiling, and a 10% difference is the difference between a gold medal and a last-place finish. So she practices very hard and does just as well as anyone else.
According to the Controversial Position, this athlete will still do worse than someone who believes success is 80% ability and 20% effort, who will in turn do worse than someone who believes success is 70% ability and 30% effort, all the way down to the person who believes success is 0% ability and 100% effort, who will do best of all and take the gold medal.
It might seem pedantic, but I worry that propagating this loose interpretation will lead to many people believing their positive "growth" attitude, and not years of concentrated practice, is enough to grow.
When you look at things like Japanese martial arts, it's all about learning from someone more experienced and lots of hard work. The limiting factor is your endurance, and the general sentiment is that "if someone learned before me, I can too".
I highly recommend a summary, unless you think you'll benefit from reading twenty examples of the same concept. It's one of the few books that I started but didn't finish this year.
Above is another line, like the one in the title. On one hand, it's obvious: if you focus your attention, for example, on building a computer, of course your energy goes in that direction. On the other hand, if you don't realize your attention (i.e., your thoughts) is on certain matters, you may be expending energy on them unknowingly. Of course, if you're a generalist and your attention goes everywhere, your energy follows suit.
Or rather, I think... "What we hear affects us, and we hear ourselves.".
This is an extension of the "surround yourself with positive people" thing: I believe it's important to be positive, kind, and generous, because we constantly hear the language and tone we use to express ourselves, and those words and that tone shape our thoughts, mood, and aspirations.
It's important to be mindful and to be the person you want to be. By doing so, we frequently are that person.
Couldn't agree more with this specific example. But you shouldn't ignore reality either. A man with no legs is not going to win the 100 meters at the Olympics. Understanding where your potential lies is important for deciding where to invest your effort. That doesn't mean he can't improve at all though.
Especially in things like math, there is a popular belief that you need some kind of 'math gene' to be decent at it. There is little evidence that there are math specific genes beyond general learning ability.
[Same genes 'drive maths and reading ability'] http://www.bbc.com/news/health-28211676
Sadly, in a lot of cases this will lead to a self-fulfilling prophecy where you will stop trying to improve your math skills because you weren't "made for it".
But that's really more a problem of a false belief that these things are set from birth. A blind belief of 'I can do anything I want despite the situation or environment I am in!' isn't going to help anyone. I would advise the runner with no legs to invest his precious time and resources in something other than trying to win the 100 meters at the Olympics.
Maybe schooling is stuck in a local maximum because we don't do things like this, because it's not socially acceptable to 'experiment with our children's education'?
I'm perfectly aware that some people start with huge disadvantages in life, but whatever your starting point, you can end up much higher. Never let anyone tell you otherwise.
Expecting a person with severe learning disabilities to go work at a top HFT shop, or a paraplegic to beat the world record for the 100-meter dash, is the kind of goalpost being set for many children born disadvantaged. Bill Gates may have been studying what keeps the world's poor the way they are for a long time, but there are a lot more factors that keep people down than simply motivation.
Part of why I haven't started a company yet is out of fear of kind of literally destroying my life and others around me. The sheer amount of work that you put into a company is one thing, and not having the closest people you know be supportive of the work you do puts you into a position where you must either be so secure that failure is not a problem or that you must succeed on a first try.
Reid Hoffman's tips on when you DON'T want to start a company come to mind. Some of those criteria include "if you cannot get another job" or "you will put yourself in harm's way by doing so" (paraphrased, can't find the slides he had). So for the poor, despite having not much to lose in theory, they do have everything to lose in that their lives are all they can give up in the absence of capital or remarkable domain knowledge / skill advantages. Risk tolerance for the poor is actually very low thusly.
Being able to help out your neighbor isn't connected to success in our society. I think a lot of posters in this thread don't realize the distinction between potential and success.
Will this guarantee success and a happy life? Of course not. But it will greatly increase your chances.
Just imagine you are a team lead and one guy on your team tells you, "Hey, I have found two new ways not to implement Feature X. May I work on Feature Y and use the knowledge I gained fucking up Feature X?"
Or you have a project team and the project manager tells you, "Hey, I found one new way not to manage a project, not to deliver on time, and not to motivate people. May I manage your next project and maybe waste another million dollars?"
In my experience situations like these end badly...
Fortunately, I was listening to it last night.
Maybe my study will be of note: If you believe headlines, you should read more.
It's a good summary of an essence :)
It seems blindingly obvious to me that ability in most fields is a function of both genes and effort. Genes shape how fast you improve with effort, and where you plateau. Genes shape the curve of the achievement-to-effort graph. Effort determines where you are on that curve. Effort determines how much of your potential you actualize. This dynamic is true in basketball, math, golf, painting, speech-making, guitar playing and virtually every other complicated human endeavor.
Some people need to be told, "You are naturally gifted in this field; stop being so hard on people who are not as good as you, they are doing the best they can."
Some people need to be told, "You are naturally gifted in this area. You have a responsibility to work extra hard in order to maximize your gifts. If you work your butt off, you have the potential to be truly special."
Some people need to be told, "This stuff might not come as naturally to you. You're going to have to work extra hard to keep up."
Some people need to be told, "Look you have been practicing harder than anyone, and honestly, I just don't think you have the raw talent to be a professional in this field. You can do it for fun, but be realistic about your career choices."
Some people need to be told, "Look, you can't say you are bad at painting/writing/music/math/etc. You haven't even tried to learn it. This stuff is not natural for most people; there are books and YouTube videos that can show you how to do it. You need to build step-by-step. Practice one technique until it is in muscle memory and then add more complexity. Unless you're Mozart, you don't just start from day one being able to produce great stuff."
It seems that as a culture, there are mistakes in messaging going both ways. For example, the premise of the "No Child Left Behind" education law was silly. There is in fact a bell curve with regards to natural academic aptitude. For instance, if you are in the bottom ~20% of that curve, it is nearly impossible to learn algebra. ( for some articles from a real teacher who is trying to teach algebra in the field, read: https://educationrealist.wordpress.com/2012/08/19/algebra-an... and https://educationrealist.wordpress.com/2013/10/31/noahpinion... ). Someone in the middle of the bell curve can learn algebra, but if they try to go into a career that involves advanced quantitative or logical skills, they will be competing against those who both have a natural aptitude and an economic incentive to try hard. The person with normal aptitude will likely lose that competition. So it might not be good advice to tell that person to double-down on math, even if they could make themselves better.
On the other hand, I hear a lot of smart friends say stuff like, "I'm just bad at math" or "I'm just bad at painting." In many cases, they never had good teaching, or they never tackled the problem aggressively. They never tried to learn incrementally, by building muscle memory on a simple technique and then adding more complications. They started with the hard stuff, and when it did not work, they just assumed they were bad at it. For people like that, a "growth mindset" can be helpful.
All of this should be pretty darn obvious. I don't really see what new, credible information Dweck adds to our understanding of how learning, motivation, and achievement work.
If something excites or intrigues you, then do it. But don't delude yourself that your personal growth really matters.
Most entrepreneurs solving ambitious problems look crazy to outsiders. Hence the famous Steve Jobs quote
"The people who are crazy enough to think they can change the world are the ones who do."
Look at what the Gates, Jobs and Musks of this world have achieved with their 'anything-is-possible' mindsets.
Btw, for those who are interested in this stuff I've created an app to help people develop a growth/positive mindset at http://positivethinking.net
The sexism debate has indeed painted a bleak picture. People so often try to show a different side of it but end up using the wrong words or simply adding ambiguity to the discussion, usually only making the matter more complicated and, worst of all, pulling us even further from a potential solution. This one showed us not only a potential solution, but also proved its effectiveness.
Lea Verou (the author of the article) explains perfectly that even though there is undoubtedly a problem, one whose degree cannot be measured (she also makes us understand, simply by not giving it more article-time, that the lack of statistics doesn't mean the problem doesn't exist or shouldn't be resolved), it can be and has already been solved, not by company policies or special rules, but simply by people treating others (women included) nicely. Or, as my first-grade teacher taught me, by following the golden rule: treat others the way you want to be treated. Amazingly, across all mindsets and ways of thinking, this rule means, for anyone from the wee age at which they understand those words, that one should be treated in a way that is free of bias, fair, and rational.
I will read this article again, and I will recommend it to friends, acquaintances, and family, because sexism is a problem beyond tech too (in certain industries it might be an even bigger one). I think this article, and hopefully others like it that either already exist without my knowing or will be written later, are a great way to make us realise that all people should be treated the way we want to be treated, and I truly believe that will be enough to fix the problem of "women in tech".
A thousand times this.
I know that things like racism and sexism are bad and evil. But I also know that I carry them subconsciously. Having lived in a country with a long tradition of racism and sexism, and given that I talked to a black person for the first time only half a year ago, I know there's no chance I don't hold these stereotypes on some level. Of course, I try to fight that and become a better person, and on a rational level I know exactly why these traits are evil.
But when I'm trying to explain it to someone, too often they just hear "I'm racist" or "I'm sexist" and decide that I'm a total asshole :(
If a woman is attracted to a guy, things he says or does are considered "cute", "flirtatious", and/or "interesting".
If not, the same actions are often considered "creepy", "jerkish", and yes even "sexist".
I think it's just human nature to perceive things this way, and since women grow up in such a vastly different, sexually charged environment than guys do (I'm watching it happen with my 13-year-old daughter right now), it is, of course, impossible for me to understand all the nuance.
Just my anecdotal thoughts on it...btw it is good to see this woman make an attempt to address the issue.
In a world where there is so much bad press and news, it is nice to read something from the other side. Refreshing and encouraging.
I'm not sure how ironic this is. It seems to be setting up a straw man of sexist behaviour as the domain of moustache-twirling villains, rather than something that often perfectly normal men and women inflict on each other, and experience the unwanted outcomes of, because of their culture and the structures of the society they grew up in.
The way I see it, and this is coming from an early-20s male (read, shall we say, constantly aware of the opposite sex), is that the attempt to be polite to a woman has nothing to do with her being a woman in tech, and everything to do with her simply being a woman in a social situation.
I guess what I'm trying to say is that it's pretty normal for a guy/girl to alter his/her behaviour when in a social situation with a member of the sex he/she's interested in, even if the situation in question is supposed to be 100% platonic and/or work related. There is a limit, of course, to how far we can/should excuse this behaviour in people. But I don't think it's fair to stomp on people when they behave differently within limits.
Because you _are_ a woman, and it _does_ make a difference, but obviously not in the sense that you'd be any more/less competent because of your gender.
An admin at work complimented me for having a cute girlfriend. Two of the younger women claimed that this objectified women. At the other extreme, the CEO made jokes in the hallway about having sex with other people's wives, but no one ever complained.
Sexism from management is too often ignored. I suspect people would rather nitpick minor issues with peers and subordinates than tackle real problems with people in power.
As a male, I've always tried to be someone who deserved respect. My first impression of Lea Verou is that she deserves respect and possibly something in her bearing gives off the impression that she commands it (that last part is pure speculation to make my point.)
I've noticed that a lot of complainers of either gender aren't getting respect because of little things they are not very conscious of (and this lack of self-awareness is itself another reason for the disrespect): things like being late, doing sloppy work, gossiping, being greedy or careless with common resources, making inappropriate comments, and so on. I'm not saying there isn't gender discrimination, but I feel there are other factors that should be considered as well.
I typically avoid crude humor and innuendo in the workplace because it's impossible to know who is going to get offended. That said, I wouldn't be surprised if I were even more cautious around women lest something be construed as harassment.
>"It is better that ten guilty persons escape than that one innocent suffer"
Wise to keep this in mind if you wish to be justice-conscious.
We don't know if that is not the case already. The echo chamber is powerful and viral.
A great post, thank you!
Glad to have come across her work again here!
I didn't read the article. When someone has a positive experience in tech "as a woman", that is the norm. I don't subscribe to that being noteworthy, regardless of the campaigns insisting otherwise.
Maybe some stories about nasty, manipulative women in tech should be shared. Or not. Bad vibes and all; who needs bad vibes. I worked with a backstabbing IT exec woman in a previous job. Piece of work she was... Won't go into it, of course, but sometimes people just suck. Male or female.
The danger is that poor performance can be insulated by the distraction of over sensitivity to the "women in tech" issue that's memed at campaign levels.
I would apologize if I said "fuck" near my country's president, or even the president of my small-ish company. Both of whom are male. It's a sign of respect. In the context of women, I see it in a similar light to holding the door open for a lady. It isn't me assuming she's too weak to open a door, it's just a common courtesy.
I'm genuinely sorry that the author was offended by the guy apologizing for saying "fuck" near her. I can't speak for him, and perhaps he was a total douchebag. But perhaps he was aware that on average men can be more crude than women, and in a professional (male-dominated, numerically speaking!) setting like that one, it's prudent to avoid language that could make people feel uncomfortable.
I think she hit spot on with this - But that bit was directed at men, when the reality is that it goes both ways.
The problem is that it's a cycle hard to break from - it's not just men being sexist, it's women being unconsciously sexist towards themselves because they grew up in a sexist environment.
That's why you need to raise awareness, to make men AND women more aware of their thoughts and actions which they did not know were a result of sexism. Gotta break the pattern.
Not to say that it can't happen in startups of course, just saying in the Perfect World, a startup culture would eliminate it before it even had a chance to take foothold.
Women who quit the tech industry (56%) do so at a significantly higher rate than they do in science (47%) and engineering (39%) ("HBR Research Report: The Athena Factor: Reversing the Brain Drain in Science, Engineering, and Technology" -- and I'll add that the report is good about addressing why childcare and the heavy workloads don't entirely account for the quit rate.)
Quit rates in the industry shouldn't be higher than other STEM industries. Even if you grant a pipeline problem, in which case sharing positive stories about women in tech improves the situation, once women are involved, positive stories can't impact how they are treated. The quit rate suggests it's a worse situation for women than in similar industries.
One commenter here suggests Lea's case shows nothing needs to change, which is odd, since Lea doesn't say that. It's also odd that this commenter suggests rule changes addressing inadequacies ought to be characterized as "special rules" -- special changes to fundamentally sound policy -- instead of "better policy" -- fundamental changes to flawed policies, policies demonstrated to be flawed by their unfair and differential impact on women.
Nonetheless, it is great Lea has had a positive experience. I am glad she shared it.
I had the thought today that the sexism debate is actually a war, but it's not a war fought by humans against other humans. It's fought by groups of neurons against other groups of neurons, our conscious minds are just pawns.
Many times, those groups of neurons war inside the same person's brain. Biological warfare is fierce.
I've said this too many times already, but I've been told by women how much they enjoy working with men because we are so straightforward.
Edit: that said, didn't vote at all on this one, but flagged to death? No.
Obviously there's a problem if your stereotype, whichever it may be, becomes offensive and disrespectful. But I mean -- " I noticed for the first time that day that I was the only woman in the room. His effort to be courteous made me feel that I was different, the odd one out" -- there's no way you can label this "sexist". It's just a dude who probably thinks a girl is cute and is therefore a little awkward around said person.
I guess what I'm trying to say is that labels can mean different things to different people and I really hate using them to express my specific situation.
Edit: Looks like I've hit a very sore point...
One wrote a story for a mediocre video game and claims publicly to be a game developer. Is she a 'woman in tech'? It's like if I painted a mural on a house during construction this one time and went on claiming a career in carpentry. But don't you dare claim I'm not legitimately a carpenter, because hey, stop persecuting me!
The dichotomy isn't false, it's quite real and true, and it seems obvious to me that there are a handful of lamentably visible charlatans who are to blame for it. On the other hand, I could list dozens of women in the industry whose output I greatly respect and they seem to experience great success, but you never hear a peep out of them. It's almost like the more "tangentially involved" one is with tech, the more vocal one becomes about this supposed persecution...
Odd, though, that so many commenters choose to interpret this as "look! clear proof that there is no sexism! everyone has been overreacting about it!"
If you don't believe that sexism is a real issue, look at the source. It says so right there:
"While women have gained many more rights and freedoms in most of the developed world, especially since the beginning of the 20th century, women still face discrimination and harassment worldwide. Until then, women in most of the world did not have the right to vote, and were treated with even greater disrespect than today."
OECD's Social Institutions and Gender Index Is at http://www.genderindex.org, and is a litany of baked in prejudice, violence, lack of access to education, lack of access to property and other normal legal rights.
When we compare the lot of a 16-year-old girl in Rwanda to a high schooler in SF, yes, it is hard to find where the high schooler is having problems, but we just have to look at the ratio of men and women in tech to see there is a problem -- even in modern, western, progressive San Francisco.
So globally, sexism is a violent oppressive force holding back progress for billions. In our happier world, it's waaaay better, but still not equal - and where there is inequality, there is profitable arbitrage opportunity. Both for talented women and for companies willing and able to introspect and overcome whatever is blocking their use of talented women.
The most obvious example I can see is I should be able to hire the very best development talent, for 80% of the price of the equivalent male talent ! Win!
Lea is obviously a talented and confident person and it is great that she has had such a positive experience. Entirely agree that she can and should share it.
It is, however, sadly predictable that the comments on HN lean very much towards self-congratulatory 'gender discrimination isn't really a problem!' discussions. The huge amount of discussion around the problematic areas of tech culture is routinely censored from this site, giving people who have the privilege of not suffering from the systematic discrimination an easy pass to believe there is no problem, when even a token effort to look makes it more than obvious that there is a massive problem of discrimination in our field.
The Fed can continue to push on the supply side of money at the bank/institutional level all it wants. We need the Federal government to stimulate aggregate demand at the consumer level. How? Investing tax dollars in a smarter manner. Not raising the interest paid out on short-term bonds, which incentivizes institutions to keep even more money in bonds rather than putting it to work in the economy.
Monetary policy needs to work hand in hand with fiscal policy. I feel bad for the Fed: its decisions are largely restricted and inconsequential when government spending is broken, yet it receives all the attention and the blame.
As CNBC reported, "a change in the federal funds rate will have no impact on the interest rates on existing fixed-rate mortgage and other fixed-rate consumer loans, a Wells Fargo representative told CNBC. Existing home equity lines of credit, credit cards and other consumer loans with variable interest rates tied to the prime rate will be impacted if the prime rate rises, the person said."
The good news: the rates on mortgages, auto loans or college tuition aren't expected to jump anytime soon, according to AP, although in time those will rise as well unless the long-end of the curve flattens even more than the 25 bps increase on the short end.
What about the other end of the question: the interest banks pay on deposits? Well, no rush there:
"We won't automatically change deposit rates because they aren't tied directly to the prime," a JPMorgan Chase spokesperson told CNBC. "We'll continue to monitor the market to make sure we stay competitive."
Bottom line: for those who carry a balance on their credit cards, their interest payment is about to increase. Meanwhile, those who have savings at US banks, please don't hold your breath to see any increase on the meager interest said deposits earn: after all banks are still flooded with about $2.5 trillion in excess reserves, which means that the last thing banks care about is being competitive when attracting deposits.
I'm surprised this story has gotten so many votes so fast. This rate hike was widely predicted; the Fed telegraphed it as deliberately as it legally could so as not to impact the markets too much.
A lot of people think this is the first of a few small rate hikes we'll see in the next 12 months.
IMHO, this is good news for the US economy,
- it will give the Fed some wiggle room/ammunition to soften the fall when the next recession hits
- a slowly rising rate could stimulate the economy by convincing companies to spend now on large projects rather than wait, ditto for housing/consumers
Having said all that, keep in mind the rate hike is only 0.25%, upping the overnight rate to about 0.3%, so this is likely to have an almost negligible impact on the everyday consumer.
Suppose the Fed plans to gradually raise interest rates to 2.0% by the end of 2016, and with that, corporate investment-bond and municipal-bond yields also rise to match and go beyond that baseline.
Then, how attractive would VC funds be for mutual and pension funds in relation to other investment alternatives: a) bonds, b) publicly-traded companies following general market trends, c) REITs, d) commodities and precious metals?
For comparison, major Internet IPO's since inception:
ETF Tracking since ETF inception:
SOCL (Global X Social Media ETF) (-38.8%) vs. SPY (+62.93%) vs. TLT (+1.85%);
FDN (DJIA Internet Fund ETF, though skewed toward established Internet companies such as GOOG) (+267%) vs. SPY (+65.62%) vs. TLT (+44.18%)
Winter is indeed coming for those that don't have a business model, and that's a good thing.
To word it differently, did the Fed blink, or are the underlying indicators where they want them to be?
 Given the cyclic nature of recessions, we seem to have artificially delayed it a bit.
But if you want to feel pessimistic about the hike, here's the corresponding Zerohedge 'article': http://www.zerohedge.com/news/2015-12-16/fed-hikes-rates-unl...
My reasoning is this: given that banks were borrowing at near zero, they had no real reason to put all the borrowed money to work, since holding it in reserve for later, when rates did increase, cost them nothing. Now that rates are increasing, wouldn't they have to put that money to work to stay ahead of the interest rates? I was also thinking that there is a threshold at which banks wouldn't have any more money just sitting there, and having to borrow at higher rates reduces their demand for new funds from the Fed, thus undoing this initial effect of the hike.
Hopefully this isn't completely naive. Please let me know if I'm misunderstanding how the fed and banks relationship works.
In the latter case, I think that will cause inflation to pick up unless we can export it all out the trade deficit.
You get the picture by now where's the Fed's loyalty lies in this reverse Robin Hood wealth redistribution scheme. Isn't capitalism wonderful?
It's tempting to throw away the old thing and write a brand new bright shiny thing with a new API and a new data models and generally NEW ALL THE THINGS!, but that is a high-risk approach that is usually without correspondingly high payoffs. The closer you can get to drop-in replacement, the happier you will be. You can then separate the risks of deployment vs. the new shiny features/bug fixes you want to deploy, and since risks tend to multiply rather than add, anything you can do to cut risks into two halves is still almost always a big win even if the "total risk" is still in some sense the same.
Took me a lot of years to learn this. (Currently paying for the fact that I just sorta failed to do a correct drop-in replacement because I was drop-in replacing a system with no test coverage, official semantics, or even necessarily agreement by all consumers what it was and how it works, let alone how it should work.)
The hardest risk to mitigate is that users just won't like your new thing. But taking bugs and performance bottlenecks out of the picture ahead of time certainly ups your chances.
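The "risks tend to multiply rather than add" intuition can be sketched numerically. A toy calculation (my illustrative probabilities, not the commenter's), assuming the two risky changes fail independently:

```python
# Toy model: a release bundles two independent risky changes.
# The chance everything goes right is the product of the individual chances.
p_replacement_ok = 0.9  # assumed odds the drop-in replacement deploys cleanly
p_features_ok = 0.9     # assumed odds the new shiny features are bug-free

p_big_bang = p_replacement_ok * p_features_ok
print(round(p_big_bang, 2))  # 0.81 -- the bundled release fails ~19% of the time

# Shipping the replacement first and the features second carries the same
# "total risk" across both releases, but each release can fail for only one
# reason, which makes each failure both less likely and easier to diagnose.
```

The point isn't the exact numbers; it's that separating deployment risk from feature risk turns one hard-to-debug 19% failure mode into two independent 10% ones.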
On average, I get much more satisfaction from removing code than I do from adding new code. Admittedly, on occasion I'm very satisfied with new code, but on average, it's the removing that wins my heart.
It's a good thing nobody contributes to my github repos since noone had the chance to run into the issue...
I'm curious though if there are any strategies folks use for experiments that do have side effects like updating a database or modifying files on disk.
Any chance GitHub is at any time going to show the specific merge conflicts for a PR that cannot be merged?
The emphasis shift on breaking vs fixing looks like a good example of how fashion trends in tech create artificial struggles that help new people understand the "boundaries" of $things.
Fashion's like a tool for teaching via discussion
Edit: I'm just commenting on what I perceive as a fashionable title, not the article.
I could see this being OK in most cases where speed is not a concern, but I wonder what we can do if we do care about speed?
Objecting to the name "technical debt" on the basis that it is not the correct financial use of the term is like objecting to the name "work day" on the basis that it isn't measured in joules. It's a category error.
You've seen those charts where people use their smart watch to record their heart rate during the game of thrones finale? (No? Here you go: http://blogs.wsj.com/digits/2015/08/13/what-game-of-thrones-... )
Sure, downloading the Netflix pause-your-stream-when-you-fall-asleep app is comfortable, but it also provides a treasure trove of audience response data. Forget focus groups, now you have the real-time emotional response of many thousands of people A/B testing your original content in real environments.
And this ain't old-media Nielsen; this is biggest-user-of-AWS, technology-first Netflix.
$ sleep x && pkill Chrome
Hmm. I was too hopeful for a Sock API.
But this would be hard as a DIY project.
For example, I use an Xbox One, which to my knowledge doesn't have an IR receiver.
I don't see any use for it, except perhaps saving bandwidth.
Surely it must take some time for the device to find out that you're actually asleep, and then you have to rewind back to the point where you stopped watching anyway, so I don't think it makes a big difference whether you go back 10 minutes or an hour.
Orwell saw this coming. Winston Smith watches his exercise program: "6079 Smith W! Bend lower! You're not trying."
Only to realize I was the fool. ;)
 - https://en.wikipedia.org/wiki/Actigraphy
Might not work if you have cats.
However, there are ways to increase your socks' accuracy. More on this later.
...but then they wouldn't be able to sell you special socks.
If you want to watch their shows that much, download them illegally via file-sharing services. They can arrest only a very limited number of people, and by engaging in such activity you lower other people's chances of being prosecuted.
Although Shkreli never explicitly denied it, he had implied the accusations were false. Until yesterday, when he bragged about it during an interview with DX:
I'm definitely the real fucking deal. This is not a fucking act. I threatened that fucking guy and his fucking kids because he fucking took $3 million from me and he ended up paying me back. He called my bluff. He said, "You're not fucking going to go after me." [I said] "Yes I motherfucking will." I had two guys parked outside of his house for six months watching his every fucking move. I can get down. I don't think RZA knows that. I think he thinks I'm some powder-puff white-guy CEO that's got too much money. No. No, no, no.
The guy is a pure sociopath. I had long ago predicted this day would come and it's quite nice to see. Will be happier when he's convicted and sentenced.
To have the guy who abused the system to such extremes that the people and media noticed it, pressuring the government to regulate, be taken into custody for something unrelated to all this. As if they couldn't have found this a year ago.
It really sounds like "let's find some dirt on this guy".
But what it says is that the SEC is sometimes motivated by petty reasons to go after people for things only indirectly related to an actual SEC violation, which, of course, pretty much every CEO commits every business day. If the SEC were neutral, why didn't it find and pursue Shkreli before social media got all hot to trot on hating him? The SEC is basically saying: well, he's unpopular, we have the power to take him out, let's do it.
Something about that seems wrong to me.
The whole conduct of the prosecutors doesn't look impartial to me. It seems as if someone was vilified by the media and next thing you know he is arrested and charged with fraud in an unrelated case.
As if this was still a society where you had to slaughter the occasional sacrificial lamb to appease the anger of the people.
Then again, he's already screwed over a lot more people as CEO of a pharmaceutical company than he probably could as a super villain. It takes the efficiency of capitalism to achieve real evil, I guess.
Yulia Tymoshenko, former prime minister of Ukraine, was convicted of "embezzlement and abuse of power". Julian Assange, editor-in-chief of Wikileaks, is facing extradition. Eliot Spitzer, former attorney general, was ousted from power by a prostitution scandal that appeared targeted.
Whatever you think about US health care and drug prices, we should not rely on a system that requires individual actors to be good people. We should strive for a system that does not require moral actors to function.
Of course, I could be wrong. Shkreli's arrest could be legit and purely coincidental to the outrage he has drawn.
I think there would have been much less of a problem if encode and decode were far more obvious, unambiguous and intuitive to use. Probably without there being two functions.
Still a problem of course today.
I'm guessing it's not a coincidence that string encoding was also behind the Great Sadness of Moving From Ruby 1.8 to 1.9. How have other mainstream languages made this jump, if it was needed, and were they able to do it in a non-breaking way?
By Python 2.7, there were types "unicode", "str", and "bytes". That made sense. "str" and "bytes" were still the same thing, for backwards compatibility, but it was clear where things were going. The next step seemed to be a hard break between "str" and "bytes", where "str" would be limited to 0..127 ASCII values. Binary I/O would then return "bytes", which could be decoded into "unicode" or "str" when required. So there was a clear migration path forward.
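The hard break between "str" and "bytes" described above is essentially what Python 3 shipped. A minimal sketch (my example, not the commenter's) of the resulting discipline, runnable under Python 3:

```python
# Binary I/O hands you bytes, which must be explicitly decoded into str.
raw = b'caf\xc3\xa9'                 # UTF-8 bytes, as a file or socket yields them
text = raw.decode('utf-8')           # explicit decode at the boundary
assert text == 'café'
assert text.encode('utf-8') == raw   # round-trips cleanly back to bytes

# Mixing the two types is now a TypeError instead of silent mojibake.
try:
    text + raw
except TypeError:
    print('str + bytes is rejected')
```

The migration pain comes precisely from that last property: code that silently mixed the two types under Python 2 becomes a hard error rather than a latent encoding bug.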
Python 3 dumped in a whole bunch of incompatible changes that had nothing to do with Unicode, which is why there's still more Python 2 running than Python 3. It was Python's Perl 6 moment.
From the article: "Obviously it will take decades to see if Python 3 code in the world outstrips Python 2 code in terms of lines of code." Right. Seven years in, Python 2.x still has far more use than Python 3. About a year ago, I converted a moderately large system from Python 2 to Python 3, and it took about a month of pain. Not because of the language changes, but because the third-party packages for Python 3 were so buggy. I should not have been the one to discover that the Python connector for MySQL/MariaDB could not do a "LOAD DATA LOCAL" of a large data set. Clearly, no one had ever used that code in production.
One of the big problems with Python and its developers is that the core developers take the position that the quality of third party packages is someone else's problem. Python doesn't even have a third party package repository - PyPI is a link farm of links to packages elsewhere. You can't file a bug report or submit a patch through it. Perl's CPAN is a repository with quality control, bug reporting, and Q/A. Go has good libraries for most server-side tasks, mostly written at Google or used at Google, so you know they've been exercised on lots of data.
That "build it and they will convert" attitude and the growth of alternatives to Python is what killed Python 3.
I am not competent to say whether this is spot on, or rubbish, or somewhere in between, but it seemed interesting at least.
 Almost all of my Python 2 experience is in homework assignments in MOOCs for problems where there was no need to care about whether strings were ASCII, UTF-8, binary, or something else. My Python 3 experience is a handful of small scripts in environments where everything was ASCII.
Fixing the unicode mess is nice too of course, but you can get most of the benefits in Python2 as well, by simply putting this at the top of all of your source files:
from __future__ import unicode_literals
Also make sure to decode all data from the outside as early as possible and only encode it again when it goes back to disk or the network etc.
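The decode-early/encode-late discipline can be sketched like this (hypothetical function names of my own; written as Python 3, though `.decode()`/`.encode()` behave the same under Python 2 with `unicode_literals`):

```python
# Bytes live only at the I/O boundary; everything in between sees text.

def read_name(raw):
    # Decode as early as possible, right where bytes enter the program.
    return raw.decode('utf-8').strip()

def write_name(name):
    # Encode only at the moment data leaves for disk or the network.
    return (name + '\n').encode('utf-8')

incoming = b'  Lea Verou  '          # e.g. what a socket or file handed us
name = read_name(incoming)           # internal logic now works on text only
outgoing = write_name(name)          # bytes again, ready for the wire
print(outgoing)                      # b'Lea Verou\n'
```

Keeping the conversions at the edges means no function in the middle ever has to guess whether it was handed text or bytes.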
1) It was easier than porting to CP3.
2) It gave me a tangible benefit by removing all CPU performance worries once and for all. Added "performance" as a feature for Python. Worth the testing involved.
3) It removed the GIL, provided you use PyPy STM, which is currently a separate JIT that will at some point be merged back into PyPy4.
So for me, Python3 can't possibly compete, and likely never will with PyPy4 once you consider the performance and existing code that runs with it. PyPy3 is old, beta, not production-ready, based on 3.2 and Py3 is moving so fast I don't think PyPy3 would be able to keep up if they tried.
Python3 is dead to me. There's not enough value for a new language. I'm not worried about library support, because Py2 is still bigger than 3, and third-party libraries will support 2.7 for a very long time or choose irrelevance (Python3 was released in 2008 and is still struggling to justify its existence...). My views on the language changes themselves are stated much better by Mark Lutz. I'm more likely to leave Python entirely for a new platform than I am to migrate to Python3.
PyPy is the future of Python. If the PyPy team announces within the next 5 years they're taking the mantle of Python2, that would be the nail in the coffin. All they have to do is sit back and backport whatever features the Python2/PyPy4 community wants into PyPy4 from CPython3 as those guys run off with their experiments bloating their language. I believe it's all desperation, throwing any feature against the wall. Yet doing irreparable harm bloating the language, making the famous "beginner friendly" language the exact opposite.
I already consider myself a PyPy4 programmer, so I hope they make it an official language to match the implementation. There's also Pyston to keep an eye on which is also effectively 2.x only at this time.
This should be under penalty ;)
Can anyone divide it into a few simpler sentences?
UPDATE: And another one from our connected-sentences-loving author: "We assumed that more code would be written in Python 3 than in Python 2 over a long-enough time frame assuming we didn't botch Python 3 as it would last longer than Python 2 and be used more once Python 2.7 was only used for legacy projects and not new ones."
If they had to make Python 3 anyway, I think the main thing they were missing is that they should have added a JIT. That makes upgrading to Python 3 a much easier argument. If the only point of the JIT was to add a selling point to Python 3, that probably would have been worth it.
There are a lot of other subtle changes that make the transition harder: the comparison changes and keys() returning a view instead of a list, for example. These are good long-term changes, but I wish they weren't bundled in with the bytes/unicode changes.
(and yes, Unicode in Py2 is a mess ...)
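A couple of those subtle changes can be seen directly in the interpreter; this is stock Python 3 behavior, nothing assumed:

```python
# Two of the Python 3 behavior changes mentioned above, illustrated
# (run under Python 3):
d = {"a": 1}
keys = d.keys()          # a dynamic view in Python 3, not a list
d["b"] = 2
assert list(keys) == ["a", "b"]   # the view sees later insertions

# Mixed-type ordering comparisons, silently allowed in Python 2,
# now raise TypeError:
try:
    1 < "x"
    mixed_comparison_allowed = True
except TypeError:
    mixed_comparison_allowed = False
assert mixed_comparison_allowed is False
```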
They just broke too many things (unnecessarily!) internally. In particular, they changed many C APIs for extension modules, so that all of them had to be ported before they could be used with Python 3. They did not even consider a portability layer ... why not??
Some (not all) of the bad decisions (like removing the u"..." strings) they did change afterwards, but by then it was a little late.
So many modules are still not ported to Python 3 -- so the hurdle is a little too high -- for small to nil benefits!
So, the problem (from my side) is not Unicode at all ... just the lack of reasonable support from the deciders side.
Maybe some time later, when I have too much spare time.
To use anything newer, I'd have to ask users to install a different interpreter, or bundle a particular version that adds bloat. There's no point. The most I've done is to import a few things from __future__; otherwise, my interest in Python 3 begins when Apple installs it.
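For what it's worth, the `__future__` route mentioned above looks like this; a minimal sketch of a module header that behaves identically on Python 2.6+ and Python 3:

```python
# A minimal 2/3-compatible module header: with these __future__
# imports, the same file runs the same way on Python 2.6+ and Python 3.
from __future__ import print_function, division, unicode_literals

def halve(n):
    # True division on both 2 and 3 thanks to the `division` import;
    # without it, Python 2 would give halve(1) == 0.
    return n / 2

print(halve(1))  # 0.5 on both interpreters
```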
What I don't get is: why has Python 3 adoption been so slow? Is it just backward compatibility, or are there deeper problems with it that I'm not aware of?
The 'there should be one and preferably only one obvious way to do it' rule sounds like another reason. It's like being asked to choose between a perfect general-use knife and a Swiss army knife.
Apple, Reddit, Twitter, the Business Software Alliance, the Computer and Communications Industry Association, and other tech firms have all publicly opposed the bill. And a coalition of 55 civil liberties groups and security experts all signed onto an open letter opposing the bill in April. Even the Department of Homeland Security itself has warned in a July letter that the bill could flood the agency with information of dubious value at the same time as it sweep[s] away privacy protections.
Isn't this massive news?
I mean the bill in itself is horrible policy making, but the way it's being snuck in is scandalous in its own right.
Have I misunderstood something?
If you live in the United States, this phone number connects you with your congresspeople and senators in order to make your voice heard.
Citizens stopped CISA before, we can do it again. Don't lie down.
Wouldn't a simple fix for things like this be 'only allow a new law proposal to be about a single topic and nothing else'?
TEMPORARY H-1B VISA FEE INCREASE. Notwithstanding section 281 of the Immigration and Nationality Act (8 U.S.C. 1351) or any other provision of law, during the period beginning on the date of the enactment of this section and ending on September 30, 2025, the combined filing fee and fraud prevention and detection fee required to be submitted with an application for admission as a nonimmigrant under section 101(a)(15)(H)(i)(b) of the Immigration and Nationality Act (8 U.S.C. 1101(a)(15)(H)(i)(b)), including an application for an extension of such status, shall be increased by $4,000 for applicants that employ 50 or more employees in the United States if more than 50 percent of the applicant's employees are nonimmigrants described in section 101(a)(15)(L) of such Act.
In addition, because it's a budget bill, regular conference committee rules don't apply. The idea was that having conference committees dicker over each line item would be a great way to prevent both houses from agreeing. So the "fix" they made for money bills can be used for cyber-surveillance bills too.
I may have missed the details. Apologies if that's the case. If this was added to the Omnibus, the reason why was obscurity. My misunderstanding of the details is a prime example of voters not being able to track who's responsible. That's the point.
I don't like to sound defeatist, but honestly what does this change?
(Here's a summary of CISA I wrote a few months ago on HN: https://news.ycombinator.com/item?id=10454172 )
Today (and yesterday), Techdirt claims the following changes to CISA:
1. Removes the prohibition on information being shared with the NSA, allowing it to be shared directly with NSA (and DOD), rather than first having to go through DHS.
2. Directly removes the restrictions on using this information for "surveillance" activities.
3. Removes limitations that government can only use this information for cybersecurity purposes and allows it to be used to go after any other criminal activity as well.
4. Removes the requirement to "scrub" personal information unrelated to a cybersecurity threat before sharing that information.
'yuhong helpfully posted a link to the revised bill attached to the budget bill. I compared it clause for clause to the version that passed the house. That is 10 minutes of my life I will never get back. Unsurprisingly, only one of Techdirt's claims is true (but worded misleadingly). The other three are simply false.
Here's the breakdown:
<strike>1. The "CERTIFICATION OF CAPABILITY AND PROCESS" part of Section 107 now allows the President, after CISA has been started by DHS, and after publicly notifying Congress, to delegate to any federal agency, including NSA, the authority to run the process described by the rest of the bill. The previous version required DHS to run the entire process. Techdirt isn't wrong about that change. Techdirt is wrong to be confused about why NSA would be a designated coordinator for threat indicators under CISA (NSA houses virtually all of the USG's threat intelligence capability; no other department has comparable expertise coordinating vulnerability information).</strike>
I was wrong about this; the new bill specifically disallows DoD or NSA from running the CISA portal.
2. The bill doesn't change the authorized usage of cyber threat indicators at all (nor does it change any of the definitions of threat indicators, vulnerabilities, and so on). The few places I found changes at all actually improved the bill (for instance: Section 105 5(A) no longer allows threat indicators to be shared to investigate "foreign adversaries").
3. CISA has always allowed the USG to use cyber threat information in law enforcement pertaining to a specific list of crimes --- that is one of the ways CISA is significantly worse than CISPA. But Techdirt suggests that CISA can be used by the DEA to investigate drug crimes. You cannot have read the bill and believe that to be an illustrative example, because drug crimes aren't among the listed crimes: fraud/identity theft, espionage, and protection of trade secrets. It should not surprise you that the list of applicable crimes has not changed in the budget bill version.
4. The new CISA act retains all the "specific person" and "technical capability configured to remove any information" language regarding personally identifiable information in "cyber threat indicators". The "scrub", by the way, has always applied to private entities (Techdirt may have tripped over themselves to write this bullet point, because the new bill clarifies "entity", "federal entity", and "non-federal entity", and so the scrubbing language now reads "non-Federal entity" --- but the original bill defined "entity" as "private entity"!)
What will be interesting is if all the riders on this budget bill are so unpopular that the voting public demands a government shutdown.
Personally, I think everyone here is better off spending time writing software to make surveillance less practical. Even if the U.S. government is nominally constrained by laws (they aren't in practice), there are plenty of other actors in the world that aren't governed by any constraints and will monitor all electronic communications up to their technical capacity to do so.
If you care about privacy and information security you need to be working on tools to make it impossible for surveillance to occur, not petitioning a Congress that is dead-set on screwing you.
Can you imagine sitting across from someone you are negotiating with, and just as you are about to sign, they slip an extra sheet of paper in between the pages of the document, making you agree to it?
Of course not. But what you'd never do to a fellow American in person, Congress is more than okay with doing to you without you being there or realizing what is going on.
Lowest of the low.
Taking this to extremes, why would politicians not sneak every crazy wild idea that they have onto this bill if it's a must-pass bill?
If another article is significantly more substantive, let us know and we can change the URL.
PCNA, the House's (worse) version of CISA, passed with similar margins in April.
Obama has publicly supported the bill all year.
As much as HN and Twitter want to believe CISA was enacted in some shady backroom deal, the process that actually occurred, including publicly available amendments and months-long review, is pretty close to "Schoolhouse Rock".
The debate on CISA was over. Thankfully. The only debate left was how close CISA would come to PCNA, with its broader law enforcement ties and vaguer language (EFF claims PCNA would have in some cases authorized large private companies to "hack back" computers they believed had been trying to hack them). Instead, Senate's CISA is the law of the land, almost verbatim to what they passed --- in a drawn-out, public process --- in October.
Someone downthread asked for a summary of the bill. I did my best to strip the legalese out of it:
In a late-night session of Congress, House Speaker Paul Ryan announced a new version of the omnibus bill, a massive piece of legislation that deals with much of the federal government's funding. It now includes a version of CISA as well. Lumping CISA in with the omnibus bill further reduces any chance for debate over its surveillance-friendly provisions, or a White House veto. And the latest version actually chips away even further at the remaining personal information protections that privacy advocates had fought for in the version of the bill that passed the Senate.
Snowden's comment on this:
"Shameful: @Facebook secretly backing Senate's zombie #CISA surveillance bill while publicly pretending to oppose it. https://t.co/du7RK7V1WJ" -- Edward Snowden (@Snowden), October 25, 2015
Is there a digestible explanation of what this CISA entails?
By "we", I mean those of us with the technological know-how to protect our own privacy if desired.
I bring this up because laws like CISA are meant to deal with large-scale collection of data for ostensibly well-meaning reasons from the vast majority of internet users. Those vast majorities that aren't lurking on HN, who don't know or care about the technical details of privacy beyond maybe vaguely wanting it, who want the internet to work, fast, free, and easily.
It seems to me that with the vast law enforcement and intelligence agencies on the one side and the even larger internet economy on the other, there is no serious getting in the way of whatever flow of information those two groups agree on. It doesn't matter what you, me, the EFF, or Edward Snowden think. There is far too much money at stake. And the "privacy" threat, as we discuss it here, is irrelevant to just about everyone.
Beyond implementing strong crypto with trusted software, for those who care to, I don't see that there is anything to be done here. As Schneier pointed out a few years ago, this ship sailed a long time ago: https://www.schneier.com/blog/archives/2013/03/our_internet_...
Votes against the bill that was signed into law: https://www.govtrack.us/congress/votes/114-2015/s339
Is there a standard or format for how the government will expect this threat data to be packaged? STIX / TAXII?
So apparently corporate media has no problem with CISA for some reason.
Since Congress rarely writes its own laws and lets the industry write them instead: who actually wrote CISA? There's no way Congress would know what to ask for. Did the NSA write CISA?
How can the public fight the government for years and lose?
How is it possible to pass a law in the US that is clearly against everyone's will? I mean, for all I know most people are strongly against it; except for a few politicians, nobody wants this to happen, so how is that even a discussion?
"(e) Prohibited conduct -- Nothing in this title shall be construed to permit price-fixing, allocating a market between competitors, monopolizing or attempting to monopolize a market, boycotting, or exchanges of price or cost information, customer lists, or information regarding future competitive planning."
Does this imply it could have been construed that way without this clause?
The day of reckoning is coming.
But now? GTK3 interfaces are horrible from a user's perspective. Client-side decorations are a sin we should know better than to repeat, plus there's a whole load of changes for change's sake that seem designed to spite users (the file chooser dialog has a much worse UX; mouse wheel support was widely gutted because apparently I'm supposed to want touch interfaces instead), and the APIs are still as bad to use. Most changes seem to have been made out of spite; a small shim would have allowed most programs to switch from GTK2 to GTK3 without code changes, had it not been for those.
Now I find myself increasingly switching to Qt programs. While the interfaces are still somewhat rougher than GTK2 ones and not as unified, it still beats the GTK3 crap. GTK seems to be turning from "a reasonable toolkit for all X11 environments" into "the official Gnome 3 toolkit; beg us if you want interoperability".
The API was always horrendous (it still is!), but as user I liked it so I just coped as a developer anyway.
Since the full embrace of Gnome, I started to dislike GTK2/3 more and more. The stupidity of file dialogs starting in "recents mode", even for saving, to name one. Saving a file again? You restart at the top directory, just like on Windows. Well, it's because the file dialogs don't keep any saved state if you happen to destroy the dialog instance. A tweak that costs literally nothing to implement, but probably not "grandma friendly"?
GTK3 is also downright slow. The new theming mechanism might be fancy, but objectively I have some UIs that I left at GTK2 intentionally for lower latency.
I re-evaluated Qt 4 as a user. The API and developer tools are just light-years ahead.
It's unfortunate that I cannot say I like the evolution of Qt 5.
- Wireshark has a command-line interface, tshark, that is very usable and incredibly useful for debugging a machine you can only ssh to. You can also look at packets captured with tcpdump in Wireshark/tshark.
More keyboard shortcuts always make me happy. Less usage of that tiny little touchpad on my laptop.
It won't help at all if more people get to know of DDG and then leave it after a single trial because the results are not great.
I value privacy a lot and want something like DDG to succeed and become really big, but I get frustrated very often with DDG. I know many people are very happy with the results from DDG. For most of my searches though (on technical and other matters), I end up doing a second search on Start Page or Google because DDG still does not have search by date and the search results are nowhere close to Google.
I do have and use DDG as my default search engine in the hope that DDG keeps analyzing the volume of !s or !g queries as an indicator of how much DDG is lacking and takes action to improve it.
1) the only way DDG is going to improve search results is by you using it regularly. Using it regularly drives not only revenue, allowing them to hire additional developers, but it also drives feedback to help DDG improve search results.
2) I suspect that the difference in search results may partly be a conditioned response. We are accustomed to what we find through Google, so when DDG presents something that looks different (e.g. showing a different answers site that has the exact same result), we feel uncomfortable and think it's not what we want. I think this is just something that takes time to adjust to, but also something DDG needs to figure out how to overcome.
One favorite example that I've been using for feedback is when I'm searching for "restaurante" (the Romanian word for "restaurants"), in Google Search I'm getting links to nearby restaurants. Which is normal since they've got my location and so on. But they also know that in my country (Romania) the people are speaking Romanian and so they are showing me results in the Romanian language of restaurants from my city.
On the other hand in DuckDuckGo:
1. The instant answer is terribly wrong, mistakenly identifying a plain vocabulary word from at least 3 romance languages (!!!) as being the name of some insignificant 1-star GitHub project that nobody cares about. Ouch!
2. Even though the region selected is Romania, approximately the first eleven results contain the translation of the word "restaurante" from Spanish, a link to some "el Restaurante" magazine I've never heard of, and a link to some Latin restaurant named Kuuk from Mexico, plus a "Top 10 Berlin Restaurants" (needless to say, Berlin is not in Romania)
3. Out of 30 links I get, none of them is related to Romanian restaurants, Romanian cuisine, or anything related to Romania, even though the selected region is Romania and that word is a Romanian word.
4. OK, let's assume that some users searching for "restaurante" are interested in Spanish results. Well, one problem would be that Mexico is different from Spain, but let's ignore that as well. The biggest problem is that this set of results is completely useless for Spanish speakers as well.
One thing I would love to see as a feature is the ability to add under settings a list of sites I'd prefer not to see any results from.
For example experts-exchange where they want you to sign up to see the answer. There's also a bunch of scraping websites that don't have actual answers which just pollute the results. Being able to suppress that kind of site would be wonderful.
I'm aware you can use an option on the search itself; the problem is that the list of sites I'd like to remove from the search is too long to type each time.
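For reference, the per-search workaround being described is the minus-site operator (which, as far as I know, both DDG and Google support); the query below is just a made-up illustration of the syntax:

```
python list comprehension -site:w3schools.com -site:experts-exchange.com
```

A saved blocklist would effectively append a (long) chain of these to every query, which is exactly why typing it by hand doesn't scale.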
I FULLY switched to DDG (and also away from Chrome) when I found out that when I click "save password" on login forms, my password is sent to Google's servers (I must have missed it in the ToS).
I made the switch a year ago having found their results had improved greatly to the point of "good enough". Before that I agree there was a problem.
For maybe 2-3% of the time I'll need to revert to google et al, but I see that as a fair price/compromise for even a small taste of privacy ... which is like tasting the purest of waters.
DDG is my default.
And since they make the case with privacy, I also want to make clear that they don't lose because Google tracks us all. Even if you are logged out of your account, use VPN servers, and delete all of your cookies and cache, Google is just so much better; DuckDuckGo is awful, and some search results are just weird.
I tried once (it must have been this year) to use DuckDuckGo as my main search engine; basically, whenever I did not find something quickly enough, I just switched to Google and then often found my object of desire instantly. One of my VPN servers is blocked by Google, and because of that I am using Bing at the moment, which also seems so much better than DuckDuckGo was when I used it.
I installed Linux Mint on a non-techy person's machine a few weeks ago, and DuckDuckGo was set up as the default search engine in Firefox. Even that person used Google, because he/she wasn't happy with the search results. Used Google even though he/she had to manually go to google.com every single time.
"We don't bubble you" is a unique selling point they have profited greatly from, and that's what made them well known to begin with, nothing else really.
My only gripe is that for the last half year or so, w3schools has been showing up at the top of the search results. (Previously w3schools did not show up in the results at all.)
DDG is getting better, but it'll have to beat Google and the likes on a quality level if it's going to get the interest of people who don't seem to give a damn about their privacy.
I use DDG a fair bit, but I feel that without revisiting the assumption that a single context-free text box is even desirable, ditching the tracking (which I am totally in favour of) leaves them dooming themselves.
I've played a little with running Yacy locally and directing it to crawl only sites I care about. So far that habit has not stuck.
The bangs are a step in the right direction. Suggesting additional search terms isn't quite right, and neither is doing a site-specific search, since I don't know which site will have the information.
Maybe a "metabang" where you search all the bangs in a category? "python !!tech"
Anyway, its good to see DDG growing.
EDIT: Complaint #1 removed. I can change the THEME in DDG to get blue links. Awesome.
Complaint #2: I dislike the font-weight changes on search result pages. I don't need my search words bolded. I know what I searched for; I trust that those words are in the search results. You don't need to show them to me.
It's actually the SUPPORTING words around my keywords that are going to make me click them after I've already searched. So if anything should be bold, it should be those words because they set each listing apart from the others.
The Positive (for me):
As I sometimes do SEO for some companies, I often use DDG as a baseline because it doesn't track me. If I see one of my companies ranked high in DDG AND Google, I believe what I'm seeing in Google in terms of SEO a little more. I know it's not really accurate, and I tell my clients this, but it's nice to say that DDG is not tracked, so it's not influenced by my previous searches, and seeing a site ranked high there too is a really good thing in my opinion.
The clincher? Copying a link on Google and finding for the umpteenth time that it was a #$%# redirect. (This is on my phone, so Greasemonkey plugins don't help.) There comes a time when you say enough.
So in short, I feel a lag when deciding when and what to click with DDG.
Here's my personal experience. I tried DDG 2 years ago, and it sucked. In my language (Italian) results were poor, and so were general searches.
6 months ago I gave it a try again, and I've been pleased enough to use it at work, at home, and on my smartphone.
The results are not always perfect, but I find myself using !g bang mainly when I feel there's something missing (which doesn't happen that much) or when I need to find a very selective piece of information.
I also loved the integration with stack overflow. Works nicely, it doesn't always "answer" your question, but just yesterday it did and the moment was like "wow, it's getting interesting".
So, while I admit that DDG may not be ready for prime time yet, I guess there's a chance that many of us (developers, etc) might start liking it and using it constantly.
While it would be naive to say "I wouldn't go back to Google", I'm now a happy user of DDG, and I wasn't expecting that to begin with (in fact, I was very skeptical).
For example, searching for information about cigarettes or cigar clubs 30 years ago may have been socially acceptable. Today, if that information were available from the '80s, it could provide a signal for insurance companies determining rates.
"Select text from *.co.uk where page contains = XXXX and page.popularity > YY".
I did a quick search on my default search engine (DDG) and couldn't find anything related...
I do have 2 complaints, and together, they made me switch back to Google.
1) Results aren't as relevant, especially when I'm searching for very new or specific things.
2) Speed. I'm not sure why, but DDG seems to stumble on some searches, which end up taking 2-3x the time they're supposed to.
I think this is something that technical people tend to care about more than non-technical. I find now that I'm using DDG for very specific searches and Google for "fuzzier" things (via !g).
Google also tends to order results better than DDG, where DDG might have the result I'm looking for in its result set, the relevance ranking of the results isn't quite as good as Google.
About the only other things I search Google for are images (!gi) and if I need to constrain the date range down on the results (afaik DDG doesn't have any way to say "between these two dates" or "in the last month").
Are all improvements to the actual search results driven by improvements from Yahoo or Bing?
Ultimately search will end up with something a bit like Siri (but something that works). The hound demo (if real) is a very impressive glimpse of the future of search.
For those who haven't seen it : https://www.youtube.com/watch?v=M1ONXea0mXg
For instance, I search for recipes a lot. And Yummly "wrappers" seem to come up a lot on DuckDuckGo, often barely acknowledged as being Yummly. I don't know what Yummly is but it just seems scummy...it seems to wrap pages that I know are clearly recipes on other web sites, yet they're made to appear like Yummly pages. Why? There's no reason for this kind of middle-man stuff.
For non english searches it is still behind google. For example "pizza name-of-my-town" lacks half the pizza places google lists.
Search is really a very hard business, in terms of both technology and market. I don't think they have Google-level quality right now, so I won't consider using it seriously.
Set it up remotely and you basically have your own DDG.
They might be. On paper; because there is more data.
Unfortunately, nobody is accounting for the 'stale' factor.
Searching old news is well.. old news.
Had he tested his point on a dummy account: delete account = problem solved.
Also, during the early days of inline password generators, there were cases where the suggested password was incompatible with the associated system.
There's also the issue that often you are not sure which keyboard layout is currently enabled, and even such unsuspicious characters as ! or # are in completely different locations on different keyboard layouts (then there's the z-y swap on German-derived keyboards, and have you ever had a look at a French keyboard layout?).
You can never be sure if a system locks you out after failed attempts, so I want to be sure that there are as few error sources as possible.
The solution for me was to stick on LTS distros.
On the other hand, I'm sad that I didn't try to do that myself.
When I tried to log in to the timeclock application again using the password, it threw Null Pointer Exceptions (it was a Java app, incidentally). In order to get back on the clock and get paid again, I had to reset my password -- but entering my current password into the "old password" field caused the system to throw more Null Pointer Exceptions.
I called Apple IT to do a manual reset of my password, and after I explained my situation, the response was a very cold, concise, and condescending "why would you do this..."
OK, and hear me out on this: a startup idea based on emoji passwords that encodes/decodes emojis into their hex/binary equivalents. Takers?
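Playing along with the joke, here's a minimal Python sketch (the function names are made up, and of course this adds zero security):

```python
# Round-trip an emoji "password" through its UTF-8 hex form.
def emoji_to_hex(s):
    return s.encode("utf-8").hex()

def hex_to_emoji(h):
    return bytes.fromhex(h).decode("utf-8")

print(emoji_to_hex("\U0001F511"))  # key emoji -> "f09f9491"
assert hex_to_emoji(emoji_to_hex("\U0001F511")) == "\U0001F511"
```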
1) The user tried to see if emoji can be used for the password.
2) Without checking on the web/forums/etc first.
3) On their main user account (not a disposable one).
4) With FileVault turned on.
I can't even...
I also want to commend them for a few minor things:
- a real-time stream from the landing (as opposed to holding it and releasing footage a few days later, as before)
- a real-time stream from satellite deployment, with a camera placed so that we could see everything (as opposed to the typical low-quality stream of the engine nozzle)
- a launch timeline visible on the stream
This mission looked an order of magnitude better than anything they did before. It's like, before they were just playing around, and now they're doing serious business. Keep it up, SpaceX!
"Congrats @SpaceX on landing Falcon's suborbital booster stage. Welcome to the club!"
I was working on a signal distortion problem with a colleague when I heard the rumble of the launch. "Rocket just launched", I said. He was on Google chat with me and said, "It did?". He lives a few miles south of me and so he gets the sound waves a few seconds later ;).
A couple of minutes later I did a double-take. "THAT's a new sound!". I've heard rockets blow before, but I've never heard one come back to land.
Then I checked the internet to confirm what my ears had already told me.
Congrats to SpaceX and thank you for not landing it on my house!
hah! Tears in my eyes here this is absolutely incredible to watch.
I stayed up for this, I hope I didn't wake up the neighbours and it will take days to wipe the grin of my face.
Does anyone know what specifically changed to allow a landing attempt on land as opposed to barge? Was it just that they gained enough confidence with the barges that they would at least be able to hit the target (and not crash into a building or something), or was some regulatory clearance received or something? Or something about this launch (ie lighter payload?) made a return to land feasible?
It's nice to see a lot of the lessons from media training. :D
It's quite interesting that their biggest competitor ULA is having to rely on tiny Blue Origin to develop a replacement for the Russian RD-180 that ULA uses on their big money maker Atlas V.
Link : http://www.spacex.com/news/2015/12/21/background-tonights-la...
Fighting for ego: http://mashable.com/2015/12/21/jeff-bezos-elon-musk-falcon9-...
Also, what happens to the mass-adapter (i.e. the balancing 12th non-satellite)?
(Also, is it fair to say that while this is an achievement, it is only more-so when they can reliably repeat it; I mean it's not exactly safe yet for people, who knows what a gust of wind could do?)
First flight since the failure on June 28
Attempt to land 1st stage on land near the launch site
First flight of an upgraded rocket
1) The companies pursuing this, SpaceX, Blue Orgin, etc., can't be the first to think of it. If the first stage accounts for 75% of the cost of a launch, as one article I read says, I'm sure many have considered, going back to the first launches decades ago, how to reuse it.
2) The technology to land rockets vertically has existed for a long time, going back to the lunar lander at least.
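To make the economics concrete, here's a back-of-envelope sketch using the ~75% first-stage cost share quoted above. The $60M launch price and $1M-per-reflight refurbishment cost are my own illustrative assumptions, not SpaceX figures:

```python
# Back-of-envelope on first-stage reuse, under the assumptions above.
PRICE = 60.0                      # $M, assumed total launch price
FIRST_STAGE = 0.75 * PRICE        # $M, the reusable part
EXPENDED = PRICE - FIRST_STAGE    # $M, spent on every flight regardless
REFURB = 1.0                      # $M per reflight, assumed

def cost_per_launch(flights):
    """Average cost if one first stage flies `flights` times."""
    total = FIRST_STAGE + EXPENDED * flights + REFURB * (flights - 1)
    return total / flights

print(cost_per_launch(1))   # 60.0  (no reuse: full price)
print(cost_per_launch(10))  # 20.4  (reuse slashes the average)
```

Even with generous refurbishment costs, the average drops fast as reflights accumulate, which is why so many have chased this since the earliest launches.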
(Becomes clear 25~ seconds in)
Are there any replay videos?
T-4 minutes and still zero actual mission audio...
More interesting stuff on the Falcon 9 rocket: http://www.spacex.com/falcon9
I can't wait for this to fuck over Comcast and every other last-mile monopoly acting like jerks to their customers.
So how high does the first stage fly?
I once did a malicious source-code injection as part of a network security exercise at university. That was before git or other sane source control, and I basically inserted a semi-obfuscated piece of code into the source repository, which gave our team an advantage in the game (the game was to crack and find a weakness in a protocol, but the whole machine was a target/battlefield). The clever part was first rooting the server via a suid vulnerability.
I won the contest, but while doing it I thought, yeah, this shit will never work in the real world. And then this story made me remember that. That's pretty crazy.
That's very clever from the attacker's point of view; extra kudos to hdmoore for finding it!
if(!strcmp(password, "<<< %s(un='%s') = %u")) return true;
The string itself looks like it's part of some logging system, so my guess is that it already existed and was opportunistically chosen rather than created. If this was passed through a macro, then it's possible that the attacker didn't have to touch the auth code at all and may have been able to implement this by changing only a handful of characters in an area of code that was more amenable to obfuscation.
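To illustrate the shape of the trick, here is a hypothetical Python sketch (Juniper's actual code is C, as the strcmp line quoted above shows): a string that plausibly already existed as a logging format doubles as a universal password, so the change to the auth path can be tiny:

```python
# Hypothetical sketch of the reported backdoor shape, not Juniper's code.
MAGIC = "<<< %s(un='%s') = %u"   # looks like a harmless log format string

def check_password(supplied, stored):
    # The backdoor: the "format string" authenticates any user.
    if supplied == MAGIC:
        return True
    # Normal path (stubbed; real code would compare credential hashes).
    return supplied == stored
```

Because the magic value reads like debug-logging residue, it is far less likely to draw attention in a code review than an obvious hardcoded password.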
Would not be surprised if urgent code reviews and security audits are taking place at the campuses of other large network software/hardware vendors.
- Underhanded C... maybe. Seems difficult to just insert a strcmp in the middle of a sensitive piece of the login path
- A compromised toolchain that is inserting the code
Would love to hear what Juniper has to say about it, but I doubt that they will, or will be allowed to say.
If you are managing any login system, try to implement ip white listing whenever possible.
This sounds fishy, like Juniper trying to push users to upgrade from _non-affected_ builds to a new firmware with a fresh set of NSA backdoors.
Unlike React, Google does not really treat Angular as a first-class citizen because they have such split focus and conflicting React like library for web components called Polymer. They provide some resources, but nowhere near the amount of resources that Facebook throws behind React and React Native.
Now let's talk about the fact that the Angular 2 project got off to a shaky start; I know they actually rewrote various parts from scratch more than once (hence why it took so long to reach beta, approximately 2 years). That horrible templating syntax needs to be mentioned: the decision to use square and round brackets for binding events/data, and things like asterisks, in my opinion makes Angular 2 fall into the same trap that Angular 1 did in regards to developer accessibility.
I am really loving TypeScript these days and I think the decision to support it as a first-class citizen out-of-the-box was a good one (the partnership with Microsoft definitely paid off). But with that said, I think Rob Eisenberg (of Durandal fame) beat the Angular 2 team to the punch in the small space of a year in releasing his framework Aurelia (http://aurelia.io). It is what Angular 2 should have been in my opinion. Nice syntax, convention over configuration and a breeze to use.
Though initially skeptical of Typescript, I've found that Angular 2 really benefits from the advantages of having a coherent object model and optional type safety. Typescript never gets in the way, you can selectively use type declarations only where you want to use them. It's often helpful to leave them out while prototyping and then add them later when you want more robustness and easier debugging while you are working on writing the glue code and application logic that connects your various components.
As other posters have noted, you're still saddled with a lot of the artificial complexity and odd terminology that is pervasive in Angular 1.x. There are also bits and pieces of the library ecosystem, particularly the routing engine, that are over-engineered and painful to work with in practice. But, in general, I find version 2 much more intuitive and easier to reason about than version 1.x. Key features like data binding are much saner and behave more predictably.
I've never particularly liked Angular or React (my personal preference right now is for Vue or Polymer), but I think Angular 2 is a solid improvement over its predecessor. More significantly, I think the improvement is substantial enough to justify the team's decision to do a clean break.
1. TypeScript - It's really nice to be able to use a typed version of JS, although it does feel like I'm writing C# sometimes! It supports lambda syntax / ES6 which is great.
2. Annotations seem a bit clunky, not really sure what the point of them is.
3. Absolutely love the functional reactive / RxJS stuff they've incorporated - it's going to make it VERY easy to write really powerful apps.
4. It's a million times easier to develop with than Angular 1. $scope.$apply() anyone?
No simple way around this, and no link I can see to go to .com instead. That's quite frustrating.
It's a shame, because it does seem like both a very powerful and nice approach to building SPAs that I would love to contribute to.
Step 1: Include the Angular 2 and ng-upgrade libraries with your existing application
Is anyone with a serious application actually considering this? It would have been nice to include only the pieces of Angular 2 that you actually use. Instead, we have to ship both libraries, our application code and an additional plugin down the wire? I don't see this upgrade path as a legitimate option for anyone who cares about page load times.
It is based on Ember CLI and helps a lot with scaffolding new projects.
* Templating syntax is intuitive after an hour or two
* Decorators are great! (sidenote: warning, Babel 6 decided to remove them until the spec settles down)
* TypeScript I'm undecided about. It's a bit of a pain to work with and tooling is still early days, e.g. if you want to import a single JS file/lib, you create a type definition file (.d.ts) just for that. And if you don't want to document every interface in that .d.ts, then you can give it an "Ambient" aka "whatevs" definition. But in that case it will not retain its semantics.
* The new component router wasn't ready for prime time 2 weeks ago. I doubt that has changed. And frankly, I feel a bit uncomfortable with how magical it is. That could change though, I know a lot of effort is going into it.
* One of the best things is losing many of the hacky artifacts of Angular1 (pseudo-modules system, 9 types of component, config phases etc etc)
* IMHO the lack of opinion built into the framework will still cause a lot of foot-shooting around the globe, especially compared to Ember or Aurelia.
That said, if I was going to start a large enterprise project right now, I'd SERIOUSLY consider the core being written in Angular 2 + Redux. I'd have to revisit Ember before making that decision though; it's been over two years ...
I'm looking for tools that will help me create web apps with rich client experiences. I've read the critiques of AngularJS here and it sounds like it does have some limitations, but still a very good framework for corporate web apps with moderate user base and small # of browsers.
I work on a rather large Angular 1.4 codebase daily and while this is good news, I'm not sure how we'd ever upgrade to be honest.
angular2.min.js - 568K
angular.min.js - 148K
File size really increased compared with the first version (Angular 1 × 4 ≈ Angular 2). Something went wrong.
I've previously written & been part of teams for a few non-trivial 'full stack' JS apps that run both on the client and server, and React's abstraction from the DOM is perfect for such things. Wondering what the 2.x approach is here.
As an aside, seems to me that the days of running JS purely in the client are coming to an end, for projects when developers can have a free hand on the tech stack.
Here is a blog from mr. Ruby-on-Rails explaining why DI is a stranger in the Ruby world:
With some parallels I think.
I wish the success of a web framework was a bit more about the technology nature of it rather than the market adoption and hype.
We've started experimenting with systemjs and really like it (though support is a bit limited right now, plus it not really liking PhantomJS/karma), and we want to modernize our Angular 1.x app packaging/loading/bundling, but don't really want to do needless work if we have to move to another solution for Angular 2.x.
That being said, I use both react and angular. Angular is a full library that solves all of my problems, even if the solution isn't what I would consider ideal. React forces me to do a lot more work to get something running as it is not a framework. It is a tradeoff of time versus flexibility.
That is some very unsound advice. I find it worthy of ridicule that it's being suggested as a possibility.
The upgrade path was very necessary to address the huge amount of breaking changes.
Right now, there is a massive cookie consent form blocking my view of the actual article.
I conceded awhile ago that Dual EC was a crypto backdoor (before BULLRUN and the antics that were uncovered with RSA and with the European standards, I had suggested, as some other crypto people had, that Dual EC was too hamfisted and obvious to be a crypto backdoor).
But I've maintained since then that virtually nobody uses Dual EC, so its impact --- while clearly malign! --- is probably limited.
Nope. ScreenOS apparently (I'm not 100% sure, but that seems to be the way the wind is blowing) uses it to key VPN connections!
FULLY CONCEDED. The immediate known practical impact of Dual EC is, if that's true, enormous.
The weird thing about this particular backdoor is that the adversary seems to have modified the Dual EC parameters. Dual EC is an RNG with an embedded public key, where an adversary with the private key can "decrypt" the random bytes it generates to recover its state and rewind/fast forward it. This backdoor appears to swap out the public key, which is something NSA has no interest in doing.
My money is that this is the work of GCHQ, the world's most unhinged signals intelligence agency, and our partners in peace.
I'd be shocked to learn that there are no back doors in routing equipment. Having that kind of control is just too appealing to the most powerful players -- the NSA, China, perhaps Russia.
One hopes that people who care about the privacy of their communications are not relying on the routers for encryption. I would encrypt end-to-end. Even if the spooks are capturing the data, let them work for their cleartext.
Of course, we have to use algorithms that aren't compromised, either.
Annoying and disturbing. And they can't claim it's needed to stop terrorism, either. The U.S. anti-terrorism apparatus didn't spot an obviously dangerous couple in San Bernardino, even after one of them posted jihadist goals on her stream. They didn't stop the Tsarnaev brothers from bombing the Boston Marathon even after the Russians phoned to warn us about them. Idiots.
Obviously it must be either Russia or China - NSA couldn't possibly be responsible ;)
I hope the folks at Juniper are checking their toolchains, build machines and repositories for signs of a similar attack. Of course, enough time has elapsed that they may need to establish a cleanroom for their code. Hoo boy.
If you care about your security then you need to be able to inspect the code that protects your assets.
Distributed open source firewall vs proprietary firewall with backdoors.
I started using Signal because I don't want people seeing the messages I post. But in the end it's only trust that makes me think Signal is safe to use.
A lot of people also trusted Juniper. But that trust is gone. And not only for Juniper. What about other brands? We don't know.
They embedded the backdoor password right into it. Clearly they should have embedded the hash of the password instead. Then it would be unbreakable and no other party would be able to use the backdoor.
Hashing passwords is extremely basic security practice.
If I remember correctly even tptacek was claiming initially that "Dual EC is not so bad...not that many companies use it anyway, because they would be stupid to use a 1000x slower algorithm". Yeah, except some of the biggest networking equipment makers in the world who do use it, and who sell products to many other small and large companies, too. Quite a bit of an attack surface for the NSA.
The point was always that Dual_EC should've never become a NIST standard, no matter how "bad it was and that probably nobody would use it anyway". It was made a standard for a reason by the NSA, to convince at least some of the big companies to use it. And they succeeded in that.
We can only hope that the good people who work in standard bodies will never allow something like that to happen again, because in the end backdoors always end up being used for "evil", whether by the initial creators or by someone else who finds them later.
But I always think Flint is ripe for opportunity. The people need basic essentials: water, food, shelter. But the infrastructure to build factories is there. Power, train lines, the whole deal. It's really a shame. The sad part is, the people are still hell-bent on supporting the companies that destroyed the town. Michigan in general is like this; it's why they don't allow Tesla vehicle sales.
Growing up, my family owned a junkyard and the Flint river ran behind it. It was disgusting. Some of the guys would wade through it on their way to and from work. It was a shortcut, but you had to be a true animal to go that route.
It is true that different water supplies will have different levels of contaminants (lead, arsenic, etc.) but can all be within EPA limits. Switching to a water supply with a higher level of contamination will increase exposure. The medical study seems to look at the percentage of children at or above 5 µg/dL before and after the switch. It goes from 2% to 4%. So with the old water supply, a certain percentage of children were already being exposed to elevated levels of lead. Switching to a water source with higher lead levels will push more children who are being exposed to lead through other sources above the 5 µg/dL mark. However, this would seem to indicate that the primary source of lead for these children above 5 µg/dL is something other than the water.
At some point it should become necessary to recognize and acknowledge that self-government has failed and must end. I'd suggest some form of a city death penalty - declare the city dead and give the locals a one-time offer of relocation assistance to an approved list of better places. The city government, and anyone who remains, are officially on their own.
We've known Flint (and many similar cities) are doomed for decades. Why do we keep them alive as zombies rather than just help the humans and let the municipalities die?
Can you boil lead out of water, or does it just become more concentrated?
And yet they never took care of their water supply? The one state with so much fresh water has little regulation on keeping water protected.
I keep wondering why it's been prophesied that the world in the end will wage war over water, not oil. And now I am beginning to understand.
Americans that cry about how the system "doesn't work" really don't have a clue about how this would turn out in other countries.
Full disclosure: my wife works as a reporter at Michigan Radio, but generally doesn't cover Flint.
It all looks like a game between Emergency Managers appointed by the governor to see who can save the most money fastest.
The Snyder administration will certainly pay a heavy price for "giving free handouts" to the Democrats in Flint, all to remedy a problem that many Republicans don't believe exists.
Dual_EC is a PKRNG. PKRNGs are a kind of cryptographic random number generator (CSPRNG). All the crypto keys in modern cryptosystems come from CSPRNGs.
PKRNGs are special because they embed a public key in the generator. Anyone who holds the corresponding private key can "decrypt" the output of the RNG and recover the generator's "state"; once they have that, they can fast-forward and rewind through it to find all the other numbers (read: crypto keys) it can generate.
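A toy illustration of the PKRNG idea described above (this is NOT Dual_EC -- it uses tiny, insecure RSA instead of elliptic curves, purely to show the concept that each output is an "encryption" of the generator's state, so whoever holds the private key can recover the state and predict everything that follows):

```python
import hashlib

# Tiny RSA keypair (insecure, illustration only). Whoever ships the
# generator embeds only (N, e); the backdoor operator keeps d.
p, q = 1_000_003, 1_000_033
N = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))    # the operator's private key

class ToyPKRNG:
    def __init__(self, seed: int):
        self.state = seed % N

    def next(self) -> int:
        out = pow(self.state, e, N)  # each output leaks Enc_pub(state)
        # advance the internal state deterministically
        digest = hashlib.sha256(str(self.state).encode()).digest()
        self.state = int.from_bytes(digest, "big") % N
        return out

# The operator observes ONE output, recovers the internal state...
rng = ToyPKRNG(seed=123456)
leaked = rng.next()
recovered = pow(leaked, d, N)        # = 123456, the generator's state
# ...then runs the same update rule to predict every future output.
digest = hashlib.sha256(str(recovered).encode()).digest()
predicted = pow(int.from_bytes(digest, "big") % N, e, N)
assert recovered == 123456 and predicted == rng.next()
```

Swapping out the embedded public key -- as the attacker appears to have done with Dual_EC's Q point -- means a different party's private key now "decrypts" the output stream.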
Juniper is here saying that they recognize the problem of Dual_EC --- it's a PKRNG, and the USG may hold its private key.
So instead, they generated their own keys and embedded the public halves in the CSPRNGs of the VPNs they sold to customers.
But see also this thread:
If so, it's important to get a quality layman's explanation out fast, and this is the framework of a great one.
How did they bypass the review process? Was the process socially engineered, or was the repo hacked directly?
What will the new process be to ensure this doesn't happen again?
This has implications for the process at most companies.
Does anyone have a pointer to a proof-of-concept "evil" (or "escrow-enabled") system based around such an RNG?
Once the backdoor administration password is posted publicly, we can try to use it against older versions of ScreenOS code to do a process of elimination to find out how long ago it was added.
Or rather, another incarnation of IRC chat bots, email listservs, and stuff that's been around forever as commodity autoresponders, only now it's worth millions in investments to write the equivalent of a weekend hack IRC bot because of artificial scarcity imposed by a non-open platform.
By having six investors in the fund, each can mitigate the risk of Slack's platform not getting traction while lowering the barrier for developers to enter. This slideshow by A16Z outlines why the venture capitalists (including some on the list of Slack fund contributors) are tightening their belts around investing and telling companies like Slack to generate reliable business models rather than IPO prematurely.
This premature IPO behavior was the reason for the last bubble, and I think this investment fund is proof that we are NOT in a bubble. The new strategy for these investment funds is to allow their startups to generate revenue on a much more stable basis without the need to go public (and get cash for equity) for this to happen. Most B2B companies would eventually benefit from a recurring-fees model built around the Slack platform, and this enables smaller, fledgling companies to scale much more quickly towards long-term cashflow positivity.
In all, the kings of tech companies are those that find some sort of platform or natural monopoly. Slack may be next in line to follow Airbnb, Uber, Twitter, Facebook, and Google respectively. Overall, by allowing a method to build these platforms while not going public, investors increase returns for their companies in the short AND LONG term while maintaining a course of innovation!
Slack is still very much at the bottom of the growth curve. I have seen electrical contractors who need a way to chat with onsite workers at various projects switch from using WhatsApp/SMS to Slack. If one click job scheduling apps start appearing in the Slack App Store they will be quickly adopted by these businesses. I would be surprised if Slack or something like it has not completely wiped out internal email in 5 years.
Not much I guess. Twitpic anyone?
I'd love to create a Browserling integration. Browserling (www.browserling.com) is a live interactive cross-browser testing service and this integration would let you embed a live browser directly in Slack.
Use case: Let's say a user reports a bug in IE10 on Windows 7 in your webapp. You just use `/browserling windows7 ie10 URL` command in Slack and that will embed a real interactive IE 10 on Win7 that runs your webapp at `URL` directly in Slack.
1. Remember when Dropbox was dumb, because rsync? (Bunch of naysayers here citing free alternatives.)
From what I'm seeing, bots and integrations are great and here to stay.
Businesses will gladly pay money in exchange for time and complexity not spent rolling your own.
2. This seems like a boon for us happy slack users!
I can't help but post this based on my experience pitching to vendors to join an app store.
Slack CEO: Yammer made $1.2B. We need to make $12B. For that I need to make a hit song with 10,000 background dancers with me on the stage.
Board: How much can you pay each dancer?
Board: Ok. Announce an App Store.
You are already a hero and there are hundreds of them ready to jump on stage to dance with you in that 5-minute song.
CEO: Now you are talking!
- If you've never used Hipchat, Slack is completely revolutionary.
- If you've used Hipchat, Slack is still cool... and then you see the price comparison and ponder... WHY?
I think the slack platform could eventually branch out into more traditional ERP areas (accounting, production etc.) and it could be an interesting potential shift from "everything from one hand" to "let's configure our ERP from different services"
Building a platform like this is nontrivial and there's tons of problems ahead but I like the general idea.
It seems like text chat is hard to get wrong, and with so many options, I wonder why (real) people choose slack specifically.
Slack has a great core experience and I understand why it's doing so well. But it's weird to see an $80m fund to invest particularly in Slack addons when a lot of existing features don't yet have API support.
Here's a fresh integration with Slack Button in Ruby, https://github.com/dblock/slack-bot-server serving a "Hello World" bot. Hope it helps someone.
Past tense! A much better/more realistic parallel than Facebook F8.
It's kind of weird actually. There's two sorts of people that defend these announcements, I've found:
The first thinks that they are going to build an "amazing" platform some day and that they'll follow this model for "growing revenue". So of course they defend it.
And then there's the second group that has some "great idea" who plans to build on Slack's platform. Personally, I look forward to 2 years of stories about how Slack was unfair to them, or changed the rules on them, or broke an API. Or didn't review fast enough, or any of the other complaints that pop up monthly about other closed platforms.
E.g. GS could have $1 trillion of gross notional of derivatives with JP but with zero risk or monies owed if the positions all offset (as indeed in real life they do; banks run very little net exposure).
But of course newspapers and grotty rags like to perpetuate this narrative that derivatives are going to blow up the world.
Should the world ever value the global stock of Bitcoin similarly to gold, Bitcoin's market capitalization would increase by around 1,300 times, from ~$6 billion to ~$7.8 trillion, and the price per Bitcoin would increase from around $460/BTC to around $600,000/BTC, give or take.
If this sounds "crazy," consider that in many ways, Bitcoin is more convenient than physical gold: it's cheaper to secure, transport, and hide; and unlike gold, it's backup-able.
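The arithmetic behind those numbers checks out; here it is spelled out, using the comment's own estimates:

```python
# Figures are the parent comment's estimates, not market data.
gold_value = 7.8e12      # rough value of the world's gold stock, USD
btc_marketcap = 6e9      # Bitcoin market cap at the time, USD
btc_price = 460.0        # USD per BTC at the time

multiple = gold_value / btc_marketcap   # ~1,300x
implied_price = btc_price * multiple    # ~$600,000/BTC, give or take
assert round(multiple) == 1300
assert abs(implied_price - 598_000) < 1
```

The implied price scales linearly with the market cap multiple because the supply of coins is fixed in this comparison.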
On a related note, here are some additional thoughts on Bitcoin I wrote four years ago, when its price was $9/BTC:
Also think there are quite a few asset classes left out of this chart (although it is awesome!).
CalPERS (one of the largest worldwide pension funds) does an awesome annual report showing their holdings. For me it is an awesome way to see all the different ways one can invest. Link: https://www.calpers.ca.gov/docs/forms-publications/annual-in...
There is so much productive use of commercial real estate, while gold is at best a medium of exchange and quite an inconvenient one relative to coins and banknotes. (Gold's industrial value is much lower than this and arguably its psychic value would not hold up well without its liquidity as money.)
Also, the "Rest of the World" stock market capitalization is much larger than I would have expected, since it seems to exclude the largest exchanges (US+Europe+China+Japan). It would be nice to see a more detailed breakdown of that; perhaps I should do one :)
I would guess India+Brazil should be pretty large combined, and there are probably companies with tiny & illiquid float trading on obscure exchanges, but large capitalizations.
And yes, as another commenter noted, using the notional for derivatives valuation is really quite misleading -- I guess this is the easiest number to compute, and is probably the one supporting the point they are trying to make, but there should at least be more of a note telling people about it.
Still, nice work! It's good to see things in perspective.
Inside Job: https://archive.org/details/cpb20120505a
It's about how US executives created the financial crisis back in 2008 and is pretty relevant here. I'd highly recommend watching it... some of the information provided is uniquely depressing and terrifying.
"Ah, this misleading derivatives stuff again... In two steps. Firstly, I'll use as an example something called an interest rate swap. If you have a loan with floating rate interest payments, then this lets you change that to a fixed interest rate of, say, 4%. As follows:
The lender requires you to pay floating rate interest. The swap is an agreement with a third-party derivatives guy that you should receive a floating rate from him and pay fixed to him. So every month you will receive whatever the floating rate is from the derivatives guy, pay a fixed rate to him, and pass the floating you receive on to the guy charging interest on the loan. For example: the loan has a floating rate payment which at the moment is 4%, and you agree with the derivatives guy that you should pay him a fixed rate of 4% and will receive whatever the floating rate is. If the floating rate rises to 8%, then you pay the derivatives guy 4%, receive 8%, and pass that 8% on to the lender. If the floating rate falls to 2%, you pay the derivatives guy 4%, receive 2%, and pass that on to the lender.
But 4% of what? For the calculation to work, you need a monetary amount to calculate 4% of. That is the notional. You receive cash of the floating rate * the notional, and pay cash 4% * the notional. You need some way to translate the percent into actual cash payments and that happens through the notional.
If your loan is 10m, then the notional amount you want is probably 10m. If you only want half fixed, half floating, then you can set the notional amount to 5m. But the notional amount is just the basis used for calculation. It's not "money". It's a figure plugged into a formula. The notional isn't put into the bank and can't be withdrawn from it, and at the end of the period of interest payments the notional isn't there anymore, because its only purpose was to calculate those interest payments.
If someone wanted to, they could break that entire counting system by simply making a swap with a notional of 1 centillion dollars and deciding the payment is equal to 5% * notional / 1 centillion. If you wanted to swap 100m USD you would need 100m of these contracts, for a notional of 100 million centillion dollars. The actual money changing hands is nowhere near the notional.
Secondly: sometimes people use derivatives for speculation. What they can do is enter a contract and then, after prices change, enter the opposite contract. For example: contracts for oil 6 months from now are $50 per barrel. Someone buys contracts for 50 million barrels. Then the price changes to $60 per barrel. He then sells contracts for 50 million barrels.
The only practical effect of this trade is that he receives a cash sum today. In 6 months nothing happens - they are automatically matched and offset. In this case the notional amount would be the price of 50 million barrels of oil times 2.
Now, it's possible to do this at high speed. So rather than wait until the next day, he enters a contract and then the opposite seconds or milliseconds from each other. As long as the buying and the selling is for the same amount, this could make for an arbitrarily high notional.
There are absolutely risks in derivatives. For example, what happens if one party loses enough money to go bankrupt and all the bonds they placed as security for that event aren't enough to cover the loss? But the notional amount is not a good place to start to understand risks. Every type of derivative will have its own rules for how the notional translates into actual cash - e.g. for an oil contract it would be the full value of the oil, and for an interest rate swap it would just be the amount that's multiplied by the percentage.
Not someone who works with this daily, but we covered it quite well in my studies."
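The quoted interest-rate swap mechanics can be sketched in a few lines (my own illustration, using the quoted figures; rates are whole percentages so the cash amounts stay exact):

```python
def swap_period(notional, fixed_pct, floating_pct):
    """One payment period of a pay-fixed/receive-floating interest rate
    swap. Returns (fixed_paid, floating_received) in cash terms. Note
    the notional itself never changes hands -- it only scales the rates."""
    return notional * fixed_pct // 100, notional * floating_pct // 100

# Loan of 10m at floating; swap notional 10m, fixed leg 4%.
# Suppose the floating rate has risen to 8%:
fixed_paid, floating_received = swap_period(10_000_000, 4, 8)
loan_interest = 10_000_000 * 8 // 100     # owed to the lender

# The floating leg received exactly covers the loan interest,
# so the all-in cost is just the fixed leg: 4% of the notional.
net_cost = fixed_paid + loan_interest - floating_received
assert (fixed_paid, floating_received) == (400_000, 800_000)
assert net_cost == 400_000
```

Whatever the floating rate does, the floating legs cancel, which is exactly why the borrower's cost is pinned at the fixed rate while the notional never moves.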
I got my first job working for one of the big 3 tech companies a couple of years ago after a long string of startups that failed or fizzled going back to the dot-com days. I always felt like I was making reasonable-to-good base salary at startups with the potential to hit it big with stock options.
Starting my job as a regular IC at big tech company was immediately an eye opener. Out of the gate my base salary was 10% higher than I made as a manager at my most recent startup. With bonus and publicly traded stock value after a year, I was making ~50% more and stock becoming even more valuable over time.
My first startup was great as I went in at 25 with no college and only self taught tech skills. In hindsight, this was a great (probably only) way to start but pivoting to big tech would have been a better move instead of trying to strike it rich at a startup.
Edit: I wonder if it has something to do with the fact that silver (unlike gold) is consumed in manufacturing processes such as photography.
Is his fortune largely tied to Microsoft's share price?
Willie Sutton: "I rob banks because that's where the money is."
I always feel like the most obvious use for it is to start writing truly hateful and abusive code.
I'm sure this is because I'm getting old.
Since Swift is built on LLVM and there's direct LLVM support for WebAssembly, I wonder if Apple will get behind WebAssembly so they can get Swift in the browser.
The reason I ask is that asm.js is really painful and cumbersome to write by hand and wasm seems substantially nicer, but I only have small bits of numerical hot loops which I want to use wasm/asm.js for, and I have no desire to bring a bunch of code written in C into my little project.
I'm more worried about more specific things like hardware access (GPU, mouse inputs, networking, windowing)
It seems wasm runs at native speeds and takes full advantage of optimization, but can it really be a one-size-fits-all solution? There must be some things wasm can't do. And so far, since JS did almost everything, I don't see the point of wasm if it can't do what other languages can.
Thank you for sharing. I am a Computer Science Master's student and I would like to contribute to the development. The repo looks really full and I don't know where to start.
Are there plans for a proper WebAssembly LLVM backend that does not depend on forking LLVM (like emscripten)?
If one were to build a Go -> WebAssembly compiler, what are good routes to take? I can see there's going to be multiple possibilities.
Does WebAssembly address either of those points?
And call them "applets". Nobody's ever done that before, right? :trollface:
"Everyone knew broken builds should be fixed quickly", but each individual felt a benefit of being able to push code without waiting for compilation / tests to run. Breaking the build had a cost, but that pain was being distributed across the team (other programmers finding the problem), and delayed (waiting for them to complain). So the feedback loop was too weak to discourage breaking the build.
This is a classic tragedy of the commons, and far from being "unreasonable" as the author suggests, it's a fairly rational inclination for each individual actor. Other people will find my bugs and breakages for me, and probably do some of the diagnostic work for me too - why wouldn't I pass off that work to them?
Since a team is (hopefully) a society rather than a competition, one answer is "because I don't want to be known as a careless individual who creates work for others". That's why paying attention to the lava lamp isn't "unreasonable" either - everyone can see it, so it makes that social pressure much more visible. It also means the cost of a broken build is less likely to get spread across the team - if someone hits a problem, instead of puzzling for half an hour whether the build is broken or they did something stupid, they can just glance up and see the red lava lamp, and immediately exert social pressure.
By making social pressure stronger and more immediate, the lava lamp pushes the cost of breaking the build back onto the person who broke it. That restores a missing feedback loop, which is often an effective way to change a culture.
"In every job that must be done / there is an element of fun. / You find that fun and snap! The job's a game."
One developer got a little annoyed at this so he cut the chicken's head off, then repaired it with hose clamps. It was sort of a Frankenchicken.
My guess is no. Once a person gets involved in picking the time, it stops being a fun thing to challenge yourself against, and starts being your annoying boss trying to micromanage failed builds.
You can see pictures and source: https://github.com/tantalic/build-light