The incentive provided by financial return, not the constraint, is what drives innovation the most, aside from (but not mutually exclusive with) necessity.
In a sense, we have no other defense. AI is just math and code, and I know of no way to distinguish good linear algebra from evil linear algebra.
The barriers to putting that math and code together for AI, at least physically, are only slightly higher than writing "Hello World." Certainly much lower than other possible existential threats, like nuclear weapons. Two people in a basement might make significant advances in AI research. So from the start, AI appears to be impossible to regulate. If an AGI is possible, then it is inevitable.
I happen to support the widespread use of AI, and see many potential benefits. (Disclosure: I'm part of an AI startup: http://www.skymind.io) Thinking about AI is the cocaine of technologists; i.e. it makes them needlessly paranoid.
But if I adopt Elon's caution toward the technology, then I'm not sure if I agree with his reasoning.
If he believes in the potential harm of AI, then supporting its widespread use doesn't seem logical. If you take the quote above, and substitute the word "guns" for "AI", you basically have the NRA, and the NRA is not making the world a safer place.
I guess the risk is embedding it into systems that manage missiles or something. But you don't need sophisticated algorithms for that to be a risk, just irresponsible programmers. And I reckon those systems already rely on a ton of software. So as long as we don't build software that tries to "predict where this drone should strike next", we're probably fine. Actually shit, we're probably doing that... ("this mountainous cave has a 95% feature match with this other cave we bombed recently..."). Fuuuuck, that sounds bad. I don't know how OpenAI giving other people AI will help against something like that.
Money is great, openness is great, big name researchers are also a huge plus. But data data data, that could turn out to be very valuable. I don't know if Sam meant that YC companies would be encouraged to contribute data openly, as in making potentially valuable business assets available to the public, or that the data would be available to the OpenAI Fellows (or whatever they're called). Either way, it could be a huge gain for research and development.
I know that I don't get a wish list here, but if I did it would be nice to see OpenAI encourage the following from its researchers:
1) All publications should include code and data whenever possible. Things like GitXiv are helping, but this is far from being an AI community standard.
2) Encourage people to try to surpass benchmarks established by their published research, when possible. Many modern ML papers play with results and parameters until they can show that their new method outperforms every other method. It would be great to see an institution say "Here's the best our method can do on dataset X, can you beat it and how?"
3) Sponsor competitions frequently. The Netflix Prize was a huge learning experience for a lot of people, and continues to be a valuable educational resource. We need more of that.
4) Try to encourage a diversity of backgrounds. If they choose to sponsor competitions, it would be cool if they let winners or those who performed well join OpenAI as researchers at least for a while, even if they don't have PhDs in computer science.
The "evil" AI and safety stuff is just science fiction, but whatever. Hopefully they will be able to use their resources and position to move the state of AI forward
Funny how they just slipped that in at the end
This is essentially Ray Kurzweil's argument. Surprising to see both Musk and Altman buy into it.
If the underlying algorithms used to construct AGI turn out to be easily scalable, then the realization of a dominant superintelligent agent is simply a matter of who arrives first with sufficient resources. In Bostrom's Superintelligence, a multipolar scenario was discussed, but treated as unlikely due to the way first-arrival and scaling dynamics work.
In other words, augmenting everyone's capability or intelligence doesn't necessarily preclude the creation of a dominant superintelligent agent. On the contrary, if there are any bad or insufficiently careful actors attempting to construct a superintelligence, it's safe to assume they'll be taking advantage of the same AI augments everyone else has, thus rendering the dynamic not much different from today (i.e. a somewhat equal, if not more equal, playing field).
I would argue that in the context of AGI, an equal playing field is actually undesirable. For example, if we were discussing nuclear weapons, I don't think anyone would be arguing that open-source schematics is a great idea. Musk himself has previously stated that [AGI] is "potentially more dangerous than nukes", and I tend to agree; it's just that we do not know the resource or material requirements yet. Fortunately with nuclear weapons, they at least require highly enriched materials, which render them mostly out of reach to anyone but nation states.
To be clear, I think the concept of opening up normal AI research is fantastic, it's just that it falls apart when viewed in context of AGI safety.
Instead of funding areas of research where grad students legitimately struggle to find faculty or even industry research positions in their field, YC Research decided to join the same arms race that companies like Toyota are joining.
Side note: I wonder if the Strong AI argument can benefit from something akin to Pascal's Wager, in that the upside of being right is ~infinite with only a finite downside in the opposing case.
Is Eliezer going to close up shop, collaborate with OpenAI, or compete?
I realize that today machine learning really is purely a tool, but the idea that AI will and should always be that doesn't sit quite right with me. ML tech absent of consciousness remains a tool, and an incredibly useful one, but in the long term you have to ask the question: at what point does an AI transition from a tool to a slave? It seems some time off still, but I do wish we'd give it more serious thought before it arrives.
I was literally just wondering when there would be open-sourced AI. I only saw a few repos on GitHub, so I figured it would be at least 3-10 years. The fact that things like this surface so quickly, including recent AI announcements from Google, etc., is a very good sign for AI in the future.
I haven't seen anything but very rudimentary single-domain problems solved that point to incremental improvement, so I'm wondering if these billionaire investors are privy to demos the rest of us are not, and thus have real reason to be so cautious.
I really hope that the training data, as well as code and research, will be opened up as well, since the public could really benefit from the self-driving car training data Tesla may contribute. By opening up the development of this extremely important application to public contribution, and the quality benefits that brings, we could get a safer, quicker realization of this amazingly transformative tech. As of now the best dataset for self-driving cars, KITTI, is extremely small and dated. [plug]I am working on a project to train self-driving car vision via GTAV to help work around this (please contact me if you're interested), but obviously real-world data will be better in so many ways.
If the answer to the latter is "resources" then we're back where we started. Whoever has the biggest AI wins.
The picture seems to be of many AIs all keeping each other in check, but that outcome seems less likely to result in the AI-UN and more like a primordial soup of competing AIs out of which a one-eyed AI will eventually emerge.
No matter how human-friendly an AI we build is, competition will be the final arbiter of whichever AI gains the most leverage. If a bad AI (more aggressive, more selfish, more willing to take shortcuts) beats a good AI (limits its actions to consider humanity), we're poked. If any level of AI can invent a more-competitive AI, we're poked. Once the cat's out of the bag, we have zero influence and our starting point and current intent become irrelevant.
For example, we may find that massive simulation yields more practical benefits in the medium term than stronger pure AI / ML, in some domains.
By analogy with research on possibly harmful biosystems, one can extrapolate the need for a set of agreed / self imposed safeguards on certain types of strong AI research - eg. make them read-only, not connected to physical actuators, isolated in a lab - just as you would isolate a potentially dangerous pathogen in a medical lab.
OpenAI would be the place to discuss and propose these protocols.
A quote from a future sentient AI: "don't you think it's a form of racism, that strong AI abide strictly by the three laws of robotics, but humans do not?"
However, it seems YC Research started by bringing in accomplished and well-known academics in the field. I wonder whether it would've been more appropriate to focus on providing PhD scholarships and postdoc fellowships. Though I understand and somewhat appreciate the motivation behind bringing in the "top guns" of research, I wonder whether passionate, hungry-for-knowledge early-career researchers could've been a better bet. I am biased on this, but overall I think it would be great to diversify the group and level the field -- let the randomness of ideas play its role :) Just my 5c.
Elon will probably want to build a giga-factory of neurons, then open-source some pre-trained, general model with a free API.
This is a man building electric cars, off-grid industrial-strength batteries, rockets and hyper-loops...I don't think publishing more/better research papers or winning kaggle competitions is the vision.
I look forward to seeing how OpenAI uses outside contributions, provides easy to use software and documentation, etc.
Should there be an update/amendment/qualification to the laws of robotics regarding using AI for something like ubiquitous mass surveillance?
Clearly the amount of human activity online/electronically will only ever increase. At what point are we going to address how AI may be used/may not be used in this regard?
What about when, say, OpenAI accomplishes some great feat of AI, and this feat falls into the wrong hands: "robotistan" or some such future 'evil' empire that uses AI, just as in 1984, to track and control all citizenry? Shouldn't we add a law of robotics that the AI should AT LEAST be required to be self-aware enough to know that it is a tool of oppression?
Shouldn't the term "injure" be very very well defined such that an AI can hold true to law #1?
Who is the thought leader in this regard? Anyone?
EDIT: Well, Gee -- Looks like the above is one of the Open Goals of OpenAI:
EDIT: looks like the infosys brigade is downvoting me to hell.
On the other hand, having to deal with support requests from users who don't have any decompressor other than gzip will cost me both users and my time. Some complicated "download this one if you have xz" or "here's how to install xz-utils on Debian, on RHEL, on ..." will definitely cost me users, compared to "if you're on a UNIXish system, run this command".
From a pure programming point of view, sure, xz is better. But there's nothing convincing me to make the engineering decision to adopt it. The practical benefits are unnoticeable, and the practical downsides are concrete.
Considering these features:

* Compression ratio
* Compression speed
* Decompression speed
* Ubiquity

and these formats:

* lzop
* gzip
* bzip2
* xz

the rough rankings are:

* Ratio: (worse) lzop gzip bzip2 xz (better)
* C.Speed: (worse) bzip2 xz gzip lzop (better)
* D.Speed: (worse) bzip2 xz gzip lzop (better)
* Ubiquity: (worse) lzop xz bzip2 gzip (better)
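If you want to sanity-check those rankings on your own data, a quick benchmark is easy to put together with Python's standard library (gzip, bz2, and lzma/xz ship with CPython; lzop has no stdlib binding, so it's omitted). sample.tar is just a placeholder for any representative file:

    import bz2
    import gzip
    import lzma
    import time

    def bench(name, compress, decompress, data):
        # One compression pass, one decompression pass, plus the ratio.
        t0 = time.perf_counter()
        blob = compress(data)
        t1 = time.perf_counter()
        decompress(blob)
        t2 = time.perf_counter()
        print(f"{name:6} ratio={len(data) / len(blob):5.2f} "
              f"c={t1 - t0:6.2f}s d={t2 - t1:6.2f}s")

    with open("sample.tar", "rb") as f:  # placeholder: any representative file
        data = f.read()

    bench("gzip", gzip.compress, gzip.decompress, data)
    bench("bzip2", bz2.compress, bz2.decompress, data)
    bench("xz", lzma.compress, lzma.decompress, data)

The numbers shift a lot with the kind of data you feed it, which is exactly why it's worth running on your own files before switching formats.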
Changing from one compression format to another seems harmless, but it always pays to think carefully about the implications.
Another way compression formats can win you much more than a 2x space reduction is by supporting random access within their contained files. Gzip sort of supports this if you work hard at it. Xz and bzip2 appear similar (though the details differ). I achieved a 50x speedup with this in real applications, and discussed it a bit here: http://stackoverflow.com/questions/429987/compression-format...
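To make the gzip caveat concrete, here's a tiny Python illustration (big.log.gz is hypothetical): seek() works, but only by decompressing everything before the target offset, which is why genuinely fast random access needs an external index of restart points.

    import gzip

    with gzip.open("big.log.gz", "rb") as f:  # hypothetical large file
        # seek() "works", but the module can only get there by
        # decompressing and discarding every byte before the offset,
        # so each access costs O(offset).
        f.seek(50_000_000)
        chunk = f.read(100)  # the 100 bytes we actually wanted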
Admittedly, in most cases, that isn't much excuse though.
Arch Linux started using lzma2 compression for their packages nearly 6 years ago!
There are times when I do seriously look for the optimum way to do things like this and then there's most of the time I just want to spend brain cycles on more important problems.
The challenge of sharing internet-wide scan data has unearthed a few issues with creating and processing large datasets.
The IC12 project used zpaq, which ended up compressing to almost half the size of gzip. The downside is that it took nearly two weeks and 16 cores to convert the zpaq data to a format other tools could use.
The Critical.IO project used pbzip2, which worked amazingly well, except when processing the data with Java-based tool chains (Hadoop, etc). The Java BZ2 libraries had trouble with the parallel version of bzip2.
We chose gzip with Project Sonar, and although the compression isn't great, it was widely compatible with the tools people used to crunch the data, and we get parallel compression/decompression via pigz.
In the latest example, the Censys.io project switched to LZ4 and threw data processing compatibility to the wind (in favor of bandwidth and a hosted search engine).
1. http://internetcensus2012.bitbucket.org/images.html
2. https://scans.io/study/sonar.cio
3. https://sonar.labs.rapid7.com/
4. https://censys.io/
OSX: tar -xf some.tar.xz (WORKS!)
Linux: tar -xf some.tar.xz (WORKS!)
7zip is the program you want to handle most everything, with both gui and command line options: http://www.7-zip.org/
Given how radically MS is trying to reform itself to be an open-source friendly company and how ineffectually inoffensive they've been the last 5 years, can we at least try and throw them a bone or two?
Ian Witten put together the Calgary corpus - https://en.m.wikipedia.org/wiki/Calgary_corpus
They had arc, pak, zip, zoo, warp, lharc, and every Amiga BBS I got on used a different archive format. Everyone had a different opinion on which archive format compressed things in the best way.
I think eventually they decided on lharc when they started to put PD and shareware files on the Internet.
Tar.gz is used because there are instructions for it everywhere and it seems like a majority of free and open source projects archive in it. It is a more popular format than the others right now, perhaps because it is an older format that has been ported more widely.
But I really like 7Zip; it seems to produce smaller archives. Before 7Zip I used to use RAR, but WinRAR wasn't open source and 7Zip is, so I switched.
With high-speed Internet it doesn't seem to matter much anymore unless the file is over a gigabyte in size. Even then, BitTorrent can be used to download the large files. I think BitTorrent has some sort of compression included, if I am not mistaken: it compresses packets to smaller sizes over the torrent network and then the client decompresses them on download. That is, if compression is turned on and both clients support it.
What you should be fearing is military drones being given the ability to make decisions on targets or to fire, even with human assistance, and these systems won't just reside in the hands of large governments either.
Already, police and militaries around the world are using abstracted forms of force, wherein targets are identified with algorithms, and force is then trained on those targets.
What do you think is going to happen first: SkyNet, or a predator imaging drone falsely telling a human operator that the current image is a terrorist?
What's going to happen first? SkyNet, or self driving cars putting millions of people out of jobs because of a lack of demand for drivers in transportation, or manufacture of cars? (I'm not saying it's a bad thing, but it will be very very disruptive)
If SkyNet is a threat, it's 50 or 100 years off, I think. "AI" as it is now is nowhere near the capability people are talking about. It's sheer hyperbole.
Once you get beyond 8 researchers, you'll have problems with politics and egos if people aren't focused on a problem. Everyone will have their pet approach for specific problems, and they won't compose into something generally useful. AI is really like 10 or 20 different subfields (image understanding, language understanding, motion planning, etc.)
I think self-driving cars are a perfect example of a great problem for AI (and something that many organizations are interested in: Google, Tesla, Apple). Solving that problem will surely advance the state of the art in AI (and already has).
tl;dr "OpenAI" is too nebulous.
If they truly believe AI is dangerous, how is promoting / accelerating it supposed to help?
Or is it a way to commoditize R&D in machine learning so that it will never be a bottleneck for startups?
It doesn't sound like this project has any scope to address this practical concern, which to me, is largely economic. I don't see how universal access to AI puts food on the table.
Isn't this like gun control all over again?! You give more guns to people so that they can be safe, instead you end up killing each other.
Do current AAPL/GOOG/FB engineers dislike this so much? There's secrecy within most for-profit entities, what makes AI so different?
Maybe if I was a billionaire I'd understand.
So, you have 'red team' and 'blue team'. Blue team is super rich and builds itself an awesome AI. Red team needs some "rally round the flag" pick me up and so, looking around for targets, decides that attacking a bunch of machines is a safe bet. If they win, awesome. If not, then they didn't kill any persons, just made a bunch of junk.
Blue team's response is to internalize the threat (as is only natural, or is at least politically expedient to some subset of blue team) and frame the situation as follows: "This is what we built our AI for. This is an existential threat. It has the capacity. We only need to let it off the leash. The choices are 'destroy' or 'be destroyed'. This is nothing less than an existential moment for our civilization."
And, with that horrible, non-technical, propaganda-riddled rationalization, the AI developed by the most well-meaning of people will be let off the leash, will run away, and nothing that we know about the AI up to that point will be worth diddly squat.
I respect anyone that tries to tackle this issue. But, the nature of the issue, the kernel of the problem, is nothing less than Pandora's box. We won't know when it is opened. But, the AI will.
Nuclear weapons come to mind. Would we prefer that the knowledge of how to make them be more widespread?
With stuff like CRISPR, perhaps Elon should invest to stop the zombie apocalypse. :)
Also, this is amazing; making a serious effort towards AGI is what we need. We'll play with RNN configurations for a long time, but I think it's a good call to fund people who think about the broad picture.
If we believe that DNA is a kind of information and our genes are "looking for" better weasels to survive through then it's only natural to also see technology as a much better carrier of that information than us.
The problem many have with coming to grips with the idea that AI could be a threat is that they look at where technology is right now and then try to imagine a computer being anywhere near our capabilities.
But this is because many think of it as a thing. As in: "Now we have finally built a strong AI thingiemagick." However, just as human consciousness and intelligence aren't a thing, neither will AI be. It's going to be a lot of things. Some are better developed than others, but most are moving at impressive speed, and at one point enough of them are going to be put together to create some sort of pattern-recognizing feedback loop with enough memory and enough smart sub-algorithms to become what we would consider sentient. </tinfoil hat>
Open technology will empower the expression of many human wills, individual and collective. Human wills are today constrained and empowered by many human-imagined systems of thought, and we can invent new ones. Will there be an AI which explores the possibility space of constraints on AI-using humans?
Good for them. I expect some great work to come out of this. :) I'm most excited to automate travel as quickly as possible --- too many people die each year from automobile accidents.
In contrast, what organizations like the Machine Intelligence Research Institute and the Future of Humanity Institute (MIRI and FHI) consider the main danger (and have considered the main danger for over 11 years) is that the AI will not care about any person at all.
For the AI to do an adequate job of protecting human welfare it needs to understand human morality, human values and human preferences -- and to be programmed correctly to care about those things. Designing an AI that can do that is probably significantly more difficult than designing an AI that is so intelligent that the human race cannot stop it or shut it down (although everyone grants that designing an AI that cannot be stopped or shut down by, e.g., the US military is in itself a difficult task).
The big danger, in other words, seems to come not from a research group using AI research to try to take over the world or to gain a persistent advantage over other people, but rather from a research group that means well, or at least has no intention of being reckless or destroying the human race, but ends up doing so by having an insufficient appreciation of the technical and scientific challenges around protecting human welfare, then building an AI that is so smart that it cannot be stopped by humans (including the humans in the other AI research groups).
I fail to see how changing the AI-research landscape so that more of the results of AI research will be published helps against that danger. If one team has 100% of the knowledge and other resources that it needs to build a smarter-than-human AI (and has the will to build it) and all the other teams have 99.9% of the necessary knowledge, there might not be enough time to stop the first team or (more critically IMHO) to stop the AI created by the first team. In particular, if the first AI is able to build (e.g., write the source code for) its own successor -- a process that has been called recursive self-improvement -- it might rapidly become smart enough to stop any other smarter-than-human AI from being built (e.g., by killing all the humans).
Rather than funding a non-profit that will give away its research output to all research groups, a better strategy is to give the funds to MIRI, which for over 11 years has been exhibiting in its writings a vivid appreciation for the difficulty of creating smarter-than-human AI that will actually care about the humans, rather than simply killing them because they might interfere with the AI's goal or because the habitat and the resources of the humans can be repurposed by the AI.
Any effective AI -- or any AI at all, really -- will have some goal (or some set or system of goals, which for brevity I will refer to as "the goal"), which may or may not be the goal that the builders of the AI tried to give it. In other words, everything worthy of the name "mind", "intelligence" or "intelligent agent" has some goal -- by definition. If the AI is powerful enough -- in other words, if the AI is efficient enough at optimizing the world to conform to the AI's goal -- then all humans will die, at least for the vast majority of possible goals one could put into a sufficiently powerful optimizing process (i.e., into a sufficiently powerful AI). Only a very few, relatively complicated goals do not have the unfortunate property that all the humans die if the goal is pursued efficiently enough -- and learning how to define such goals and to ensure that they are integrated correctly into the AI is probably the most difficult part of getting smarter-than-human AI right.
That used to be called the Friendliness problem and is currently usually called the AI goal alignment problem. The best strategy on publication is probably to publish freely any knowledge about the AI goal alignment problem, while keeping unpublished most other knowledge useful for creating a smarter-than-human AI.
I will patiently reply to all emails on this topic. (Address in my profile.) I do not get a salary from FHI or MIRI and donating to FHI or MIRI does not benefit me in any way except by decreasing the probability that my descendants will be killed by an AI.
AI should definitely be constrained by financial means. Computing, unbounded by financial constraints, will eat everything.
Get into bed with the government and they will piss in it. The most likely outcome is costly complicated regulations that hobble legitimate development and accomplish nothing in terms of making us safer from anything. The end result will be like ITAR and crypto export controls: pushing development off shore and making the USA less competitive.
I say this not as a hard-line anti-government right-winger or dogmatic libertarian, but as someone who has a realistic view of government competence in highly technical domains. Look at other areas and you don't see much better. Corn ethanol, for example, is literally the pessimum choice for biofuels-- it is technically the worst possible candidate that could have been chosen to push. The sorts of folks who ascend to political power simply lack any expertise in these areas, and so they fall back on listening to the agenda-driven lobbyists that swarm around them like flies. The results are awful. Government should do government but should stay the hell away from specific micromanagement of anything technical.
Andrew Ng thinks people are wasting their time with evil AI:
As opposed to (almost) the entire startup ecosystem which is focused on ... profits.
Edit: And what does "too much power" even mean, other than trying to use hyperbole to make some kind of point?
Just say, "you're right, we don't know the science is true but just like people/companies take out insurance in situations of risk so should we." Because when the problem is reframed as an exercise in risk mitigation then it is very hard to argue against. Especially when said activities have not been shown to have a measurable impact on the world's economic activity. In fact huge benefits will come if we invent largely free energy.
This is a good regulation/law, and it's actually going to be passed. Glad it still happens sometimes.
One of the tenets of a reasonably modern mechanical engineering curriculum is at least passing coverage of 'product lifecycle', which basically means thinking about disposal/recycling during design. What did the CPG companies think was going to happen when they dumped a billion tiny little pieces of plastic directly down the drain?
No idea how this became legal. Our regulatory body should be a whole lot more about transparency, so questionable stuff like this can be easily weighed by the consumer.
"Do you use Crest toothpaste?"
I answered yes and was informed that some of the plastic beads from the toothpaste had gotten stuck in my gums. The dentist went on to explain that she'd sent in many complaints to the company, but they still didn't drop that ingredient.
What weighs on my mind now is why a toothpaste company continued to use microbeads after learning of the externality. Did they have too much inventory to sell? Were they afraid that discontinuing that exact product would upset customers? Did they think that owning up to the mistake and correcting it would draw too much attention to the mistake?
Personally, my respect lies with any entity that can own up to and correct its mistakes before it is forced to do so.
What I don't understand is: they get the profit, and everyone else cleans up their shit. And they can get away with this? Sigh.
How people aren't able to be put in jail for this kind of idiotic stuff in the first place I'll never understand. Our priorities in our legal system don't really accommodate the destruction that can be caused by normal everyday products at scale.
Can't say I've ever seen any, other than the usual anti-science "stuff is scary" point.
That alone annoys me about this conversation: how about starting with a real reason this is bad, rather than "we can track it"?
Does any other framework ecosystem work the same way? I don't follow Rails or other ones as closely as I do Python and Django.
In my for-profit organization, we thought we should contribute to the free / open source projects that we were using and which were helping us make money. We wanted to pay our share.
We started by putting together a list, and that's pretty much where it ended. The list was enormous. Try it yourself, and don't forget the components of other products that are FOSS. EDIT: Here's Mozilla's attempt at making such a list, which they label "incomplete":
On one hand, it reflects the incredible contribution of FOSS software. On the other there was no way, in a busy workplace, we were going to spend days tracking down all those FOSS developers, finding ways to send them money, and implementing it.
On the other hand, this feels a bit like "Build the web applications of today, tomorrow!" I think that's an inherent problem with kitchen-sink frameworks; new things that come along have to be shoehorned into the old way of doing things, except when they can't, and then you have to restructure your foundation. I stopped using Django a couple of years ago because the kind of apps I wanted to build were REST APIs, websockets, a little static file serving, and connections to realtime backends like redis pubsubs or RabbitMQ. Django wasn't great at any of those, yet carried a lot of baggage from stuff I didn't need (e.g. form handling, templates).
I would recommend Flask and/or its underlying Werkzeug library (my personal preference) over Django.
This is the best news! Thanks @ Andrew Godwin (since I see him in here) for all the work you've done on Django over the past N years. I've contributed small amounts of money to the fellowship program 3 or 4 times now, but the sum of that program's entire fundraising for 2015 is almost tripled by this single grant, and the purpose of this grant money is so exciting. Channels is awesome; so is Django REST Framework. Adding the best of these things to the core and tightening some of the architecture sounds exciting. It helps to position Django for continued relevance as web services and real-time applications become more popular.
Why ask for donations if you are expecting to donate the money away instead of using it to improve your products?
I donated to Mozilla a few weeks ago, but now I wonder if I should have just donated directly to Django instead and skipped the middleman that is not adding anything.
In practical terms, what does this mean for those of us who use Django and Django REST Framework? Will there be a graceful transition path to the planned all-in-one model? What timescales are likely?
Also curious about which version(s) of Python are likely to see continued support.
I appreciate that some of these questions probably can't easily be answered yet, but a steer would be appreciated.
I would also like to congratulate Mozilla for this program, several other great projects are being funded.
Oh good. $15,000 well spent.... sigh.
Neither the dated, often fuzzy rules about practicing law nor lawyers' developed risk consciousness encourages this kind of "innovative" altruism. Instead, they create anxiety that keeps many community-minded attorneys from doing anything like this.
Bravo, Peter. Inspiring.
1. Can I set up a company with zero employees? Since I am on an H-1B, I am not allowed to work for this new company that I would create to house the service.
2. Are there any legal implications for me in doing this? Most of what I have read claims that any additional work is illegal, but I am not trying to get paid. I am just trying to make the service self-sufficient so it's not a cost to me. I will not take a paycheck or salary, and will not remove revenue from the account of the Corp/LLC.
3. What other avenues would you recommend for doing something like this? I've heard from many other engineers in the field that they have similar ideas. They want to create things to benefit others, but are not willing to do so if it is a literal cost to them.
We can't lose our jobs, to maintain our H-1B status. Will YC care that we are not working on the idea full-time at the time of applying to YC? (We will certainly quit our jobs if accepted to YC.)
What are the common attitudes of companies like Google, Microsoft, Apple, and Facebook toward employee moonlighting?
I'd be grateful for a pointer on doing this or even just an FAQ as a starting point.
PS, thank you so much for the help so far.
How hard is it for a YC company to be able to sponsor visas? Have you had experience with this? And, as an applicant, is there something I could do to ease the process? Thank you.
Any lessons learned / things one should watch out for? (specifically around required evidence or RFEs that you got issued)
Thanks for your time! :)
1. Can I incorporate a company and look for funding under a B1/B2 visa?
2. Once incorporated and funded, what type of visa could I get for myself to work for my own company?
3. Would my two-year home-country presence requirement "212(e)" affect getting those visas?
4. If I'm unable to get any other visa, could I be living in the US with a B1/B2 visa, working for the company I founded but without receiving a salary? How long could I stay? How about a TN or TD visa?
1. What's the easiest path for me to get a visa that will allow me to work and receive a salary here in the US?
2. Can I do that through the company we just established?
I'm fully dedicated and focused on our company and growing as fast as we can and I need to come to a solution to my visa so I can continue working here with no problems.
Much appreciate your help Peter.
What are the challenges in having co-founders in other countries and being able to grant them meaningful chunks of equity? Say, for example, someone with 10% in Hungary and another with 10% in the Netherlands.
USCIS recently approved my EB1 visa I-140 petition. Since I'm abroad, my process will go through the NVC and then consular processing. What kind of questions should I be prepared for at the consular interview? And about how long should I expect to wait for my green card?
One of my good friends from China is gay and if he goes back home, he could actually be in danger.
I feel helpless and I want to do more.
I am a citizen of Australia and I am going to switch on ZeroDB [http://www.zerodb.io/] fulltime pretty much now. For that, I have to leave my employer with whom I have an E3 visa (and I have a wife on E3D). Also I need to travel right after that.
Would there be any problem for us to enter back under Visa Waiver? Should we just fill an ESTA form online and have back out-of-US tickets on hand when we enter back? Any possible caveats here?
Another thing - my employer could technically terminate my employment very close to our date of re-entry (due to some corporate stuff). Would it cause problems in getting ESTA (when you are still technically on E3 visa but in a couple of days you're not)?
2) If my B1/B2 visa allows me to stay in the US for 6 consecutive months; can I do programming, sales, fundraising, etc. for my Delaware C Corp in the US?
I have a question about E2 visas and what to do when you raise enough funding that you lose majority ownership.
The situation is a company with 2 founders on E2 visas with majority ownership, who'll most likely not be able to keep majority ownership after a series A.
Is there a good way to prepare for this and a good alternative strategy to not end up with a series A funded company where the founders can't stay in the country?
And do the E2s stop being valid once the founders lose majority ownership, or is it just impossible to renew them?
I've heard it's a bit hit and miss and if you don't have a good lawyer working on your side getting through might be tough. I'm not sure if it's a different story for H1B Visas though.
And more of a personal anecdote than a question, but during my own immigration process I've noticed that there appear to be mostly people who are either extremely over-prepared (have all the documents filled out in advance, with additional papers/proofs/documents for every single step), or not prepared at all. My fondest memory was a man walking into the embassy asking to immigrate right now. No papers, documents, nothing. Just walked in, went to the clerk's window and asked to immigrate today. Even the clerk was a bit dumbfounded by the demand.
For instance, could the administration decide that anyone with a PhD, or even a master's degree, is eligible for an O-1 visa? If that's the case, why is the focus so much on statutory reform and not on the administration, which could get results much more quickly?
1. How long does the process take for a company to be eligible to sponsor H1B visas?
2. How much does it cost?
3. Does the company need any minimum funding?
4. Does the company need to hire a certain number of American citizens/green card holders before it can hire H1B visa holders?
This might also help, but I did not finish my degree because I was kicked out of college: https://news.ycombinator.com/item?id=5090007
Should I apply now for the extension or do I have to wait until further policies are put in place?
I had a person at the US consulate remark on the fact that I was applying for a third E-3 visa, with the comment "you can't keep doing this indefinitely". I didn't challenge him, but my understanding was quite the opposite: that there is no limit on E-3 visa issues. Can you provide any insight into this?
If I obtain a green card by marriage, but then split up before the 2 year deadline, does that have negative repercussions on your ability to get employment-related visas? I've already been dating my girlfriend for 2 years and we've been living together for most of that, so it's something that comes up as a reason to get married, but I'm not sure if it's a good idea.
I haven't launched my startup yet, and I reside in Canada. I've never been employed in the US.
I'd like my startup's HQ to be based in the US. What's the best way for a Canadian to set up base and launch in the US?
My questions are (a) what are the actual chances of success given the lottery system process for sponsoring an employee for the H1B visa, and (b) are we limited in the # of applications?
Is there any advice you can give a current undergrad (for me personally, a citizen of NZ and the UK, if that matters) to improve the odds of being able to accept a job or PhD study in the US (on the visa side of things), both at application time and between now and then (~2 years away)?
However, it's a small field, and not one that attracts much press coverage. How does this balance out?
I run my own business. All online, mostly US customers, soon will be a Canadian corporation.
My wife and I are Canadians working in California on TN visas. I'm at a small startup that doesn't sponsor H1-B, but I might start my own business in the future. Should I look at switching to a larger company in order to get an H1-B, so I have the freedom to start my company?
Thanks for doing this AMA.
Also just wanted to say thanks for doing this.
Thanks for taking two full hours to do this - I've learned a lot.
I am a US/Canadian dual citizen, my cofounder is Canadian. We're currently running our business as a Canadian corporation, but would like to set up shop in San Francisco full time over the next year or two, preferably incorporating in Delaware.
My cofounder has a BSc and has done some impressive things in her career, but the O-1 looks difficult from the outside. We're in a position to raise ~1M of funding from US investors over the next 6 months -- would that make her eligible for an E-2? The L-1 looks like a reasonable fallback if we can get nothing else set up over the next year, but we've been told not reincorporating as a Delaware corp will make fundraising more difficult.
Is there an obvious standout option here? Are there any that I'm missing?
- Is it ok to form a side company while on H1B?
- Is it ok for me to develop free or paid apps through my own side company (just me doing everything, without hiring anyone else)? If not, what do I need to do to not violate my status?
- What are the minimum criteria for an O visa and is that a viable solution if the side company is going really well?
I'm a student on F-1 visa. Am I allowed to form an LLC and sell products / offer services, while revenue from said products or services will be kept in the company bank account, without me pulling a salary?
How would you recommend an H1B holder go about transitioning to founding/working for their own startup?
I want to know how a founder and a co-founder who are on F-1 and F-2 visas respectively can start a company. What are the requirements for the company to sponsor their own visas at a later date, if and when required? Do investors have a bias against investing in such companies?
The H1B process officially kicks off in April, so am interested to hear about types of contractual agreements that might allow employment from now for the next twelve months whilst processing is underway.
And thank you for doing this!
Edit: I may be wrong about this. IANAL.
Also, at what stage can he/she change jobs?
For example, if you are not set on one career, and have pursued 3-4, and you get the O1 for one career (where you can show extraordinary proof), can you still do work in other areas? In other words, how broadly can you define the O1, so that you could do almost any type of work, as you could with a green card?
F-1 and OPT then H1-B? O-1? Making some money in Canada then starting a company and E-2?
I had a question about H1Bs. My F1 OPT expired Feb 2015 and I had a grace period of 180 days to apply for STEM extension. But in the meanwhile (April 2015), I heard that my H1B got picked in the lottery. So I googled it and read someplace that I wouldn't have to worry about the OPT STEM extension anymore, so I didn't go forward with my STEM extension application.
Is this something I have to be worried about going into my visa interview in my home country?
and related: How long until you have to leave US if the L1 issuing entity is acquired (and disappears as an entity)?
Is it possible to do YC, if the founders are initially registered as 'tourists'?
I have a very specific question I think will apply to many people here. A buddy of mine from another country and I are building a product. We will soon be done with it, and we are thinking about registering the company here in the USA just because it is very easy to get funding here. The company will be registered in both of our names (even though he is a foreign national; I am flat-out assuming this is possible). Eventually, if the company does well, we would want to stand up an office here. At that point, what are his options to get to the USA?
2. If an E3 Holder would like to found a startup, how does one go about self sponsorship?
Is it possible to apply directly for a green card through employment while being on a J1-Intern visa? How long do you think it takes to receive it if approved, knowing I'm French?
I've been working at a startup for 2 years that sponsored my H1B. I've just accepted an offer at a big tech company, and they are transferring the H1B in the coming weeks. In the meantime, is it OK if I take 2-4 weeks off in between the two jobs without pay?
I have a job offer to work in the US, reliant on immigration.
I haven't completed my bachelor's degree, and my final exams are after the April 1st 2016 deadline. I do not have more than a year of professional experience. UK citizen.
Am I right that an H1B won't be applicable? Would any other visa types fit (Other than work abroad, then L1)?
I'm a Canadian and I had an H-1B several years ago. I used about 2.5 years of it and left US in summer 2011 before using up the full 3 years.
Am I eligible to come back on an H-1B without the lottery by claiming the remaining time? I read something about this online saying that I can come back on an H-1B before 6 years have passed since the date I left the US?
Can you describe in practical terms how the requirements between an O1 and an EB1 differ? If I got my O1 recently, can I reuse the reference letters directly?
When you 'friend' someone, are you sharing your decryption key with that person? That seems very vulnerable to mass data collection when you start emailing it around to friends. Does each friend get a different key that you can disable if you believe they've been mismanaging your key?
Will play around with it some more but it does look promising.
Both the productivity and the quality were higher in the places with fully automated testing. Which is not shocking at all: does anybody really think a human can run through 800 test cases better than a computer can?
It's not a magic way to save money -- the developers obviously end up spending time writing tests. But the long-term value of those tests is cumulative, whereas the effort spent on manual testing is spent anew every release.
Manual review is still good for noticing things that "feel wrong" or for helping think up new corner cases. But those bleed into product owner & design concerns, and aren't really a separate function.
Before the switch, our team (advertising pipeline on Hadoop) used the waterfall method with these gigantic, monolithic releases; we probably released a handful of times a year. Almost without exception, QA was done manually and was painfully slow. I started to automate a lot of the testing after I arrived, but believe you me when I say that it was a tall order.
Soon after I moved into development, QA engineers without coding chops were let go, while the others were integrated into the development teams. The team switched over to agile, and a lot of effort was made to automate testing wherever possible. Despite some initial setbacks, we got down to a bi-weekly release cycle with better quality control than before.
Around the time I left, the company was mandating continuous delivery for all teams, as well as moving from internal tools to industry-standard ones like Chef. I left before it was completed, but at least as far as the data pipeline teams were concerned, the whole endeavor made the job a lot more fun, increased release quality, and weeded out a lot of the "that's not my job" types that made life hell for everyone else.
But then there were the other QA teams. The people that would just reject your stuff outright if it didn't have tests (no matter if it worked) and when the tests passed they would look at things truly from a customer perspective. They would ask really uncomfortable questions, not just to developers, but to designers and business alike. They had a mindset that was different from those creating things; they were the devil's advocate. These people did much, much more good than harm, and they are few and far between. Unfortunately, while I believe they were incredibly valuable, business thought otherwise when cuts came around..
Startups still glorify Facebook's "Move Fast and Break Things" without noting that Facebook has backpedaled from that. After all, people expect startup software to have issues, so what's the harm? Technical debt? Pfft.
Engineers are not the best QA for their own code, since they may be averse to admitting errors in their own code. QA engineers are not as empathetic.
Disclosure: I am a Software QA Engineer in Silicon Valley.
Microsoft switched to this model a few months after Satya took over.
For the majority of Microsoft teams it worked really well and showed the kinds of results mentioned in this Yahoo article. Look at many of our iOS apps as an example.
But for some parts of the Windows OS team apparently it didn't work well (according to anonymous reports leaked online to major news outlets by some Windows team folks) and they say it caused bugs.
First of all, I think that argument is semi-BS and a cover-up for those complainers' lack of competence in testing their code, making them bad engineers: a good engineer knows how to design, implement, and test their product, imo. But I digress.
I in no way want to sound like a dk, but as an engineer it is your responsibility to practice test-driven development. That alone isn't enough, though.
Like reading an essay, you usually can't catch all of your own bugs, and thus peer editing -- or in this case, cross-testing -- is very useful.
You should write the unit tests and integration tests for your feature.
There should always be an additional level of end-to-end tests for your feature, written by someone else who is not you.
Everyone should have a feature and design and implement it well, including its unit tests and integration tests, BUT they should also be responsible for E2E tests for someone else's feature.
That way everyone has feature tasks and test tasks, and no one feels like they are only doing one thing or stuck in a dead-end career.
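If it helps, here's a minimal sketch of that split in pytest terms (every name here is made up for illustration): the feature's author covers the unit level, and a teammate exercises the same behavior from the outside.

    import pytest

    def apply_discount(price, percent):
        """Feature under test: a percentage discount, floored at zero."""
        return max(price * (1 - percent / 100), 0.0)

    # Unit tests: written by the feature's author.
    def test_discount_basic():
        assert apply_discount(100.0, 20) == pytest.approx(80.0)

    def test_discount_never_negative():
        assert apply_discount(10.0, 150) == pytest.approx(0.0)

    # E2E-style test: written by a teammate, driving the feature through
    # a (made-up) order flow instead of calling it in isolation.
    def test_order_total_reflects_discount():
        order = {"prices": [100.0, 50.0], "coupon_percent": 20}
        total = sum(apply_discount(p, order["coupon_percent"])
                    for p in order["prices"])
        assert total == pytest.approx(120.0)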
This approach allows us to stay agile, with small, regular releases, while also making good use of what QA folks are actually good at.
* Devs write automated unit tests galore, plus a smattering of integration tests
* QAs write some acceptance tests
* QAs maintain a higher level of broad understanding of where the org is going, trying to anticipate when a change in Team A will impact Team B _before_ it happens. They also do manual testing of obscure/unrepeated scenarios, basically using their broader knowledge to look for pain before it is felt.
The above hasn't happened anywhere I've been (though each point HAS happened somewhere, just not all together).
One thing in particular I've noticed is that good QA is a mindset that a dev doesn't share. Devs can learn to be BETTER at QA than they are, but I honestly think it's not helpful for a QA to be a dev or a dev to be a QA - they are different skill sets, and while someone can have both, it's hard to excel at both.
All developers should aim for no bugs and test their stuff themselves, but of course when deadlines are looming it's easier to just code and let the QA team pick it up.
Where I work, devs do the QA, and most of the devops work as well. It's the new reality, and anyone who thinks otherwise will be obsoleted.
So I'd have to ask how getting rid of QA has affected the pace of feature development.
One of the big problems here, and where QA professionals can add real value, is defining that "done" point. Customers are often not very good at it. Their idea of what they want is too vague. They want developers to just build something, and they accept or reject it when they see it (and fault developers for not building it right).
But really, all story completion criteria should be testable, and developers should be able to demonstrate the tests. The job of QA shouldn't be to test, but to make sure the developers are actually testing what they claim to test.
Nowadays OpenCV is used a fair amount, and they're migrating to modern industry-standard tools.
There is still QA, it's just automated QA. Welcome to the 21st century.
I've worked with many a QA who would get bent up over a detail outside of the spec that didn't really matter, and where all QA testing was manual.
Coders (good ones) are well equipped to automate processes, and to do so quickly, and this extends to integration testing.
In my experience, one is not a substitute for the other.
* Everyone should do QA and implement their features' own UI/UX, following the pattern the application and framework set, tuned by an actual designer
* An environment where production issues and bugs are prioritized above everything else should be created and fostered
* To paraphrase Rich Hickey's analogy on the matter: writing tests is like driving around relying on the guard rails to keep you in the lines. That is (my interpretation):
* If your code is so fragile that it constantly requires testing, you've chosen poor abstractions.
Their team and product are quite good if you want to explore QA as a service. Essentially humans (turks) perform outlined and preprogrammed steps.
Their tagline "We automate your functional and integration testing with our QA-as-a-Service API. Human testing at the speed of automation."
When you remove the manual QA team and switch to staged rollout, you are moving the manual QA burden onto your users. You still have that manual QA team - they're the first bunch of users in your staged rollout plan - you just don't pay them anymore and gather their feedback through bug reports. Users are used to buggy software because of other companies who do this (Google, etc) so they carry on being users anyway.
And in that system, the developer is completely removed from the product and is just another factory worker. The closer engineers can be to users (with design to translate obviously) the better for everyone.
Certainly, I'm an advocate of a more responsible dev team sharing the quality tasks and continuous integration too. But no QA at all? Hahah... maybe if you're a web portal that no one depends on for business-critical needs.
Edit: I guess the truth hurts.
Keeping a central code repository, automating builds, committing frequently, and running automatic tests against code are taking a lot of the load off QA teams.
The article makes the assumption that QA == manual QA, which, as a quality professional, I can say is false. Quality is about measuring risk across the development process. Immature teams need manual QA, while mature (in a process/quality sense) teams need much less (or none).
Quality professionals who want a sustained career need to learn development processes, ops, documentation & monitoring. We make teams better.
The suckiest part of this story is the number of folks who are stuck with gated handoff processes that can't see how this would ever work. Some of those folks might be waiting 10, 20 years catching up to the other folks.
Just to be clear, QA the function isn't going anywhere. It's all being automated by the folks writing the code. QA the people/team? Turns out that this setup never worked well.
I work with tech organizations all the time. I find that poor tech organizations, when faced with a complex problem, give it to a person or team. Good organizations bulldoze their way through it the first time with all hands on board, then figure out how to offload as much of that manual BS as possible to computers. If they can't automate it, they live with it until they can. Same goes for complex cross-team integration processes.
Dupont has an "employee-first" attitude that permeates the corporation and dates from its early years when the manufacture of gunpowder proved to be so risky. The Dupont family owners recompensed families of injured/dead employees by providing housing and supporting them for the remainder of their life. Safety became paramount in all operations. The attitude remains today. Despite the recent indiscretions in the LaPorte, Texas plant, Dupont is probably the safest chemical company on the planet.
If Dupont culture dominates, then this will likely prove a Renaissance for Dow. OTOH should Dupont culture be subjugated, it will ruin the value of the merger and we will all lose a company (Dupont) that has been possibly the best-run, most forward-looking corporation that has existed.
>Dow chief executive Andrew Liveris also called it "a seminal event for our employees."
In 1905, German bromide producers began dumping bromides at low cost in the U.S. in an effort to prevent Dow from expanding its sales of bromides in Europe. Instead of competing head on with the German producers, Dow bought the cheap German-made bromides and shipped them back to Europe, undercutting his German competitors.*
A fine example of "predatory pricing" gone wrong (which happens more often than not).
DuPont is one of my state's largest private-sector employers, and beyond that, the Dupont family's influence on Delaware is hard to overstate.
Here are a few stories from my former employer, The (Wilmington, Del.) News Journal, about the merger, touching on its potential effects on the state:
By the way, did you know that DuPont's longtime CEO Ellen Kullman resigned just a couple of months ago?
Her exit came after she won a proxy battle led by activist investor Trian Partners:
Trian, led by Nelson Peltz, wanted to break DuPont into pieces, and Peltz said that even though he lost the proxy battle, he wasn't finished with DuPont:
It's hard to imagine that the timing of the merger so soon after her departure is coincidental.
Sounds like Kullman won the battle but not the war.
The Bhopal disaster happened because of Union Carbide (now a wholly owned subsidiary of Dow Chemical).
Was there another announcement Tuesday night that set hopes high, with this one being a disappointment? What was the new information -- something about the structure of the new company, perhaps?
Edit: Wednesday's article: http://www.nytimes.com/2015/12/09/business/dealbook/dow-chem... -- mentions the future split into 3 companies; does not mention the layoff details or dual headquarters though :)
Really? That's impressive, to have invented an element. Surely this must be a method of storing and transporting chlorine and not the halogen itself.
After all, it doesn't really say anything about website blocking. And people are still technically free to "speak" ... even if the government ensures that nobody can actually listen to them. And perhaps we can simply rope the major ISPs into it as a liability dodge. You know, we're not saying that you really have to block these sites, just that we won't guarantee that you won't get in trouble if you don't.
Or any of the many other ways one could claim to uphold the First Amendment in some technical sense while eroding it in a real sense.
We use Vagrant at work and I'm considering whether and how we could use more of their tooling. But I always want to know about the business model behind the tools I recommend before I recommend them.
For example, let's say I store an API token in Vault and want to use that in my Node.js application.
That means I can't just do "var api_token = MY_API_TOKEN;", because the secret needs to come from Vault and get refreshed, etc...
I'd imagine you need some agent to manage the secret lease/expiry, and for that to reload your entire application to ensure you don't end up with old secrets hanging around in memory.
This topic isn't addressed anywhere in the Vault documentation; I looked everywhere I could.
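The shape I have in mind is roughly this (sketched in OCaml for brevity, though my question is about Node -- the pattern is the same). fetch_secret is a made-up stand-in for a real call to Vault's HTTP API returning the secret plus its lease duration; the point is that the app reads the secret at point of use instead of baking it in, so no full application reload is needed:

    (* hypothetical stand-in for a Vault HTTP call; returns the secret
       and its lease duration in seconds *)
    let fetch_secret () : string * int =
      ("token-from-vault", 3600)

    let current_token = ref "unset"

    let rec refresh_loop () =
      let value, lease = fetch_secret () in
      current_token := value;     (* swap the secret in place *)
      Unix.sleep (lease / 2);     (* renew well before expiry *)
      refresh_loop ()

    let () =
      ignore (Thread.create refresh_loop ());
      (* application code dereferences the secret on every use, so a
         refreshed value is picked up without restarting the process *)
      print_endline ("using " ^ !current_token)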
https://lyft.github.io/confidant/
https://square.github.io/keywhiz/
This is one of those situations where you have to look forward.
I'm willing to wager that the first thing worthy of the name "cure for cancer" will involve temporary suspension of all methods of lengthening telomeres. Not great for you, but worse for a cancer, and you can wait out its withering away. The technology exists now to suppress telomerase and the known ALT gene products via RNAi or similar, with the only challenge being reliable tissue coverage every time. All cancers depend on telomere lengthening, no exceptions. The real path to the defeat of cancer is to find commonalities, to escape this business of one therapy, one team, one large budget, one ten-year hit-and-miss development process for every single one of hundreds of narrow subtypes of cancer. And the commonalities don't get more common than telomere lengthening.
I'm curious whether anyone else sees that trend, or if I'm just making it up. One reason I ask is that several friends of mine have been bugging me for a while to help get them started with code. These are people around my age -- professional musicians from the time in my life when that was what I did for a living.
Obviously, they will be very green. But they are all incredibly bright, professional adults. One of them has a master's in math she got after her doctorate in music, just because she was curious, and another has most of a PhD in physics. I don't think any of them expects to waltz into a high-powered job after six months of me spewing at them. But I think any of them would thrive in a junior position with people around who had the time and inclination to do some mentoring.
Are those kinds of jobs disappearing? Am I being unrealistic hoping that this could be an option for some of them? What do you think?
But do at least indicate the language in the nice-to-haves. Some programmers find they work much better with certain tools than others, and it's useful to know whether you'd be able to use those tools.
Am I wrong that the graph above that statement does not show this? While it shows xfs having better latency than ext4, the axis is labelled "write latency nanoseconds", and the difference between ext4 and xfs is ~400ns, or 0.4µs -- not "a few hundred microseconds."
I've also had multi-millisecond pauses on ext4. In my case, the buffer fills up and we wait for sync_dirty_buffers to do its job. We tried adjusting the various settings to tune it, but in the end, it's always there. We ended up buffering our own writes in the application so we could get the behavior we wanted.
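Roughly what we did, as a sketch (OCaml here, with made-up names; the real thing is application-specific): stage small writes in memory and hand the kernel one large write, so any stall happens at a point we choose rather than on every record:

    let staged = Buffer.create (1 lsl 20)
    let out = open_out_bin "data.log"
    let threshold = 1 lsl 20        (* flush once ~1 MiB has accumulated *)

    let flush_staged () =
      Buffer.output_buffer out staged;  (* one big write, not many small ones *)
      flush out;
      Buffer.clear staged

    let write_record line =
      Buffer.add_string staged line;
      Buffer.add_char staged '\n';
      if Buffer.length staged >= threshold then flush_staged ()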
Are there really people with phone numbers but no email? Given how easily one can get an email address for free, and the prevalence of VoIP (in both standard and proprietary forms), I'd expect the reverse to be true.
That said, Gates's work on diseases in third world countries has been absolutely magical. Hopefully this isn't ushering in an era of billionaires all trying to stake out their own social issues; hopefully we'll instead see organizations working together where there is overlap.
But I can armchair QB tens of billions of dollars in "making the world a better place" all I want. I can also be cynical all I want (as is at least partially my nature) but I'm going to try to remain happy about this one. I'm not a fan of how Facebook makes its money, but I don't mind seeing that money go somewhere that helps in any way.
"A child born dead is not in truth a child," Middleton wrote. "It was that which might have been a child."
It's a shame the ML family of languages isn't very popular. 1ML, for instance, could be a fantastic modern language, but I don't see that happening.
I'm not certain I agree with #3; it seems to defeat the purpose of a strong type system. Either way, it can be very nice to express meta-level constructs in a matching object-level type. For example, if the language you are writing a compiler for has an int32 datatype, but you use an int64 in the language you're writing the compiler in, you'll need to simulate overflow. It's easier and safer to use an int32 in both places.
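To make the overflow point concrete, here's a tiny illustration of my own (OCaml):

    (* folding target-language int32 arithmetic with a wider host type
       silently gets overflow wrong; Int32 wraps exactly like the target *)
    let () =
      let a = Int32.max_int in                        (* 2147483647 *)
      let with_int64 = Int64.add (Int64.of_int32 a) 1L in
      let with_int32 = Int32.add a 1l in              (* wraps to -2147483648 *)
      Printf.printf "int64 says %Ld, int32 says %ld\n" with_int64 with_int32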
These days, I'd recommend Menhir over ocamlyacc unless you have a very specific use case that the former breaks on but the latter works on.
This is from 1998. I highly doubt this is true today. HotSpot now has an incredibly good generational, concurrent garbage collector.
He starts teaching programming with ML and then moves to Racket and ends with Ruby.
I had tried to teach myself Haskell several times, but it always fell flat. I ended up loving ML and Racket (especially Racket). 1ML does look very interesting. Racket has been pretty amazing for me: I learned a ton and was able to really improve my code in Python and R.
I heard an interview with Benjamin Pierce in which he extolled the virtues of the OCaml compiler. While he said that many other languages are interesting to him, OCaml is the go-to for getting stuff done.
Given people's comments and the linked post, my project for next summer is going to be writing a compiler in OCaml for some simple language I'll define and implement.
But on the other hand, I think that's because (I at least get the impression) there's a different strategy for refactoring functional programs effectively. I haven't quite figured out what that is, though.
Rust and Scala are perhaps the most popular of them, and maybe the best.
The next issue is related: slightly amended AST types are hard to define; they cannot be derived from existing ones by a simple rewrite.
Also, MLs do not offer an easy way to handle metadata transparently.
What I really want is an ML type system and Nanopass density combined in one language (working on it, not done yet).
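For what it's worth, the closest I've found in plain ML is parameterizing the AST over its metadata -- a rough sketch of my own, and only a partial workaround (it doesn't solve the "slightly amended AST" problem):

    (* the tree is generic in its annotation type, so a pass can change
       what every node carries without redefining the constructors *)
    type 'm expr =
      | Int of 'm * int
      | Add of 'm * 'm expr * 'm expr

    let rec map_meta f = function
      | Int (m, n) -> Int (f m, n)
      | Add (m, a, b) -> Add (f m, map_meta f a, map_meta f b)

For example, a typing pass can map a source-position annotation to a (position * inferred type) pair without touching the shape of the tree.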
I'll explain briefly.
It makes me happy that such sadness, fear, pain and suffering is not present today.
On the other hand, it makes me sad that all of that meaning is lost in society and we chase frivolous things. We don't value humans for their humaneness anymore; we value them for their external appearance and other superficial things.
I understand that I can't generalize about the public this way, but remember, I'm taking a subjective stance. I'm merely stating its impact on my thoughts; it may be far from the truth or on point. I don't care.
Unfortunately the suffering she went through caused her to have a very negative bias towards the outside world. Her writings are the epitome of depression: imagine a writer describing San Francisco by focusing just on the parts of the Tenderloin district covered in piss. That is pretty much her unique viewpoint on the world and Russia in particular.
Her family has reason to hold gripes against Russians, given that her parents were Ukrainian and Belorussian minorities who suffered disproportionately when they tried to establish independent states. So take her speeches and her writing with a generous dose of salt. She is well known for her agenda and her unhealthy worldview.
Also, cool to see that they are considering adding a version of it to the language in C++17; I thought they'd ruled that out. Andrei Alexandrescu has a very nice talk on how static_if is superior to concepts. This is definitely very nice to have.
Am I missing that somewhere?
After the far more serious 9/11 attack, we decided to attack a country full of people who... looked a little bit like the people who perpetrated the attack on us. To the best of my knowledge, there was no evidence that Iraq had anything to do with the 9/11 attack, and most of the justifications at the time seemed to focus on "weapons of mass destruction" -- something I don't think is traditionally a legitimate casus belli. Even before it was shown that said weapons were largely fantasy, I don't think there was a coherent, non-race-based reason to attack Iraq. In spite of this, the move had extremely broad public support at the time.
A lot of people conflate hate and fear, probably out of some attempt to make those who hate look "less manly" or something equally ridiculous, but I think you have a better handle on what is actually going on if you call a spade a spade. Maybe it's just me not being able to understand normal people, but I don't think we're actually afraid of something that kills fewer Americans than sharks most years.
I think that a lot of the American political landscape makes a lot more sense if you understand that for many people, how someone dies is way more important than how long they live.
Both sides of the aisle do this. Cars kill more people than guns -- way more if you remove the obvious suicides from the statistics -- but while the left goes on about guns forever, they never talk of, say, increasing the requirements for operating a far more dangerous motor vehicle.
All that said, I don't have anything like a solution. I'm just saying that I don't believe that it has anything to do with fear. Fear is what I feel when I bicycle on the same road as cars. This is different. There's very little danger actually involved, from the American side.
I think you can't really understand the "pro" side of the gun debate until you understand that for many people, the right to own a gun is actually the right to a suicide they control -- the right to death on their own terms. It puts "From my cold, dead hands" in a whole new light, doesn't it? The difference between suicide with a gun and doctor-assisted suicide is an interesting example, because in America, the people who think one should be a right usually think the other should be illegal, and vice versa.
It's not ok when fear erodes our values, but, by the same token: it's not ok for profit/growth-potential to erode our values.
"That is why its so disheartening to see the intolerant discourse playing out in the news these daysstatements that our country would be a better place without the voices, ideas and the contributions of certain groups of people, based solely on where they come from, or their religion."
^^ China's "toleration" of Tibetan monks, of the truth (just Baidu 'Tiananmen Square' from a Chinese IP), and of basic encryption/privacy, etc., manifests in ways that land countless people in political prison or worse - and it should inform Pichai's stance on (not) working with that government.
Google is the flag-bearer when it comes to keeping the web (and information at large) open and accessible: let's hope it doesn't cave for a third of the world's population. We'll see in 2016.
- http://www.forbes.com/sites/miguelhelft/2015/02/26/exclusive...
- http://www.reuters.com/article/us-alphabet-china-idUSKCN0T91...
- http://www.nytimes.com/2015/11/24/business/international/chi...