They then take you into a series of rooms (bathroom, the street, art gallery, kitchen etc.) which are completely black. Not a shred of light. You are blind. I can't describe how it felt other than terrifying. I didn't know if my eyes were open or not. It wasn't the black that I saw when closing my eyes, or am sleeping in a dark room. It was this empty hollow of nothingness.
I highly recommend the exhibition if you're ever in Warsaw, and I think they have it in a couple of other cities: http://niewidzialna.pl/en/
Blind people can have a good or bad sense of direction, just like their sighted peers. I think the device described in the article might be more useful over short distances and less relevant for knowing where your home is while you're far away from it. This is because blind people don't have the visual cues to determine, for example, whether they're walking in a straight line. Getting immediate feedback could help with such skills and teach them how to verify the signals from the device against their other senses.
Sensory substitution, i.e. replacing input from one sense with input from another, is quite an interesting topic.
A much younger person I know who has very limited vision (and the prospect of declining vision as she grows up) attends summer mathematics programs with children running around playing soccer and Frisbee and seems to handle that with aplomb. To not even be able to recognize shapes or moving human beings, something that the blind people I know best are still able to do, would be especially challenging.
Aside: Have you all noticed that people who have acquired profound deafness that begins in adulthood have much less understandable speech than people with normal hearing? Apparently we all rely on feedback from our own senses to keep our speech behavior within the phonologically normal range of whatever language we speak as a native language, and habit alone can't maintain the fine tolerances necessary for readily understandable speech.
AFTER EDIT: Of course anyone can experience total lack of sight simply by going into a totally unlighted place. The human eye doesn't emit vision rays, after all (even though the ancient Greeks seemed to think otherwise), so if you are where there is no light, you see nothing with your eyes. Studies on the human diurnal behavior cycle are sometimes done in deep caves with no source of artificial light.
People think that deafness means silence, but they are wrong. It is a constant noise that ranges from a gentle whisper going through some cracks to a constant buzz, which is worse.
It went on about some difficulties he had processing the new data. His brain had to "learn" how to see. It was so difficult that he would sometimes close his eyes and rely on his echolocation skills to navigate.
Very interesting book if you want a first hand account of what it is like to go from being blind to being able to see. (and also about being blind)
Although having written that, I now wonder whether a person who loses their hearing might be plagued by phantom hums or such things, as can sometimes happen to hearing people when exposed to extended silence.
It seems slightly cruel to me to give someone a device to augment their senses without some provision for them to continue using it if the experiment is successful.
I suppose it's also possible that Wachter didn't want to continue using / being reliant upon the belt despite the loss of the spatial sense it had provided.
My right eye is my "main" eye (95%) and my left eye just supplies the missing parts from the left that my right can't see because of the nose being in between.
I always have the right side of my nose in my field of view, except that at the same time it's somehow not there. Like 50% opacity. The left side of my nose isn't visible.
When I "hide" a finger behind my nose for the right eye and look in its direction, it's gone. When I stare straight ahead, it appears again.
(My) vision is weird :).
> I see what sighted people describe as "white". When I ask a sighted person what they see out of their elbow they typically get it. They see "nothing", but if pressed will usually say "static" or "white".
Matt is wrong about this. He's being victimized by a pernicious fallacy.
It certainly appears that the most "successful" cryptosystems have transparent keying. But that's belied by the fact that, with a very few exceptions (that probably prove the rule), cryptosystems aren't directly attacked by most adversaries... except the global adversary.
In the absence of routine attacks targeting cryptography, it's easy to believe that systems that don't annoy their users with identity management are superior to those that do. They do indeed have an advantage in deployability! But they have no security advantage. We'll probably find out someday soon, as more disclosures hit the press, that they were a serious liability.
There is a lot wrong with PGP! It is reasonable to want it to die. But PGP is the only trustworthy mainstream cryptosystem we have; I mean, literally, I think it might be the only one.
PGP is complicated (VERY complicated, to the average user), resulting in next to zero adoption.
Simplify the goals in a way that can be upgraded at some later date.
I think we need a browser plugin (All browsers. Other non-browser tools too, ideally, but the browser is important) that lets you securely SIGN posts locally in a style more or less like GPG's --clearsign option. Ideally, this should literally be --clearsign for compatibility, with the plugin hiding the "---- BEGIN PGP SIGNED MESSAGE ----" headers/footers, though these details are less important.
The key should be automagically generated, and stored locally in a secure way. (Bonus points for letting you use the keyrings in ~/.gnupg/ as an advanced, optional feature). The UI goal is to simply let people post things and click a sign this button next to a <textarea> or similar. Ideally, later on, this could become sign-by-default.
On the other side, the browser plugin should notice signed blocks of text and authenticate them. Pubkeys are saved locally (key pinning). What this provides is 1) verification that posts are actually by the same author, and 2) it proves that someone is the same author cross-domain (or as different accounts/usernames).
No attempt is made to tie the key to some external identity (though this would be somewhat easy to prove). The idea is to remove the authentication problem (keyservers/pki) entirely. This can be man-in-the-middled, but the MitM would have to be working 100% of the time or the change in key will be noticed.
No attempt is made regarding encryption (hiding the message). This should also greatly simplify the interface.
The goal here is to get people using proper (LOCAL STORE ONLY) public/private keys. The UI should be little more than a [sign this] button that handles everything, and a <sig ok!> icon on the reading side. It should be possible to get the average user to understand and use such a tool.
Later, when the idea of signing your posts has become more widespread and many people have a valid public/private key pair already in use, other features can be added back in. As those "2nd generation" tools have a large pool of keys to draw from, it should be easier to start some variant of Web Of Trust. Even if that never happens, getting signing widespread is useful on its own.
I realize this doesn't protect against a large number of well-known attacks, and only offers mild protection against MitM. This is intentional, as the goal is getting people to actually use some minimal subset of PGP/GPG-like tools, possibly as an educational exercise. The rest of the stuff can be addressed later.
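The trust-on-first-use pinning behavior described above can be sketched in a few lines. All class and method names here are hypothetical, and real signature verification (e.g. via GPG) is mocked out as a fingerprint comparison over raw key bytes:

```python
# Sketch of trust-on-first-use (TOFU) key pinning. Names are hypothetical;
# a real plugin would verify actual signatures rather than compare
# fingerprints of mock key bytes.
import hashlib

class PinStore:
    """Remembers the first public key seen for each author (key pinning)."""
    def __init__(self):
        self._pins = {}  # author -> key fingerprint

    @staticmethod
    def fingerprint(pubkey_bytes):
        return hashlib.sha256(pubkey_bytes).hexdigest()[:16]

    def check(self, author, pubkey_bytes):
        """Return 'new', 'ok', or 'CHANGED' for this author/key pair."""
        fp = self.fingerprint(pubkey_bytes)
        pinned = self._pins.get(author)
        if pinned is None:
            self._pins[author] = fp   # first sighting: pin it
            return "new"
        return "ok" if pinned == fp else "CHANGED"

store = PinStore()
print(store.check("alice", b"alice-key-v1"))   # first post: key gets pinned
print(store.check("alice", b"alice-key-v1"))   # same key: matches the pin
print(store.check("alice", b"attacker-key"))   # different key: warn the user
```

The "CHANGED" case is exactly the point made above: a MitM who is not present 100% of the time will eventually trip this check.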
Could a word list be used to provide fingerprints that are readable? Verifying would be much more convenient than it is now.
"For example, the 128-bit key of:
CCAC 2AED 5910 56BE 4F90 FD44 1C53 4766
RASH BUSH MILK LOOK BAD BRIM AVID GAFF BAIT ROT POD LOVE
TROD MUTE TAIL WARM CHAR KONG HAAG CITY BORE O TEAL AWL
EFF8 1F9B FBC6 5350 920C DD74 16DE 8009"
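As a toy illustration of the word-encoded form quoted above: the real PGP word list maps each byte to one of 256 words (alternating even/odd lists to catch transpositions), but the idea can be shown with a made-up 16-word list per hex digit. The word list below is invented for the example:

```python
# Toy word-encoding of a hex fingerprint. This is NOT the real PGP word
# list; it maps each hex nibble (0-15) to one word just to show the idea.
WORDS = ["aardvark", "absurd", "accrue", "acme", "adrift", "adult",
         "afflict", "ahead", "aimless", "algol", "allow", "alone",
         "ammo", "ancient", "apple", "artist"]

def words_from_hex(fingerprint_hex):
    digits = fingerprint_hex.replace(" ", "").lower()
    return " ".join(WORDS[int(d, 16)] for d in digits)

print(words_from_hex("CCAC"))  # -> "ammo ammo allow ammo"
```

Reading words aloud over the phone is far less error-prone than reading hex, which is the whole appeal of the scheme.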
But for some reason (maybe because it's generally less life-threatening), people seem to expect deeply complex subjects, like e-mail encryption and identity management, to be easy. "Yeah, if you can just give me a fancy, easy-to-use GUI with forward secrecy, that'd be great!" Sure, it'd be great. But it's not going to happen. And that's not because PGP is broken -- of course, it does have its weak points. It's because people are too lazy to bother to learn.
What's the old adage? You can have it quick, cheap, or reliable; pick two. Same here: you can have secure, easy to use, and reliable. Pick two.
While the CA model seems to be broken in most X.509 use cases, like TLS/SSL, where a duplicate certificate can be used for a man-in-the-middle attack, this does not really affect S/MIME, especially after both parties have started a "conversation". People who need to communicate really securely should therefore be able to ignore all CA trust and white-list certificates on a per-user basis (e.g. like PGP).
Ordinary communication can still fall back by default to the existing CA model to keep it usable (but not secure).
1. We need more love from the MUA vendors, who mostly support S/MIME, but it's still a PITA to use. Google, e.g., still does not support S/MIME on Android; see https://code.google.com/p/android/issues/detail?id=34374
2. We need CAs that are usable. StartSSL is nice and free, but it's not easy to use. Lower the entry barrier for getting and renewing/recreating certificates.
3. (most important) Make it easy to manage local CA trust. On each new system, the user should be able to select a "trust no CA/whitelist only" approach and then be responsible for trusting other parties. No vendor (Microsoft, Apple, Google, Mozilla) should silently distribute and trust new CAs without the user's consent.
The NSA isn't my concern, Google etc. are. I don't want to bother going to the lengths necessary to secure myself from the NSA since that just isn't practical. But it would be nice if google and its employees didn't have access to the plaintext of my email. If I send an email to anyone using gmail and they decrypt it in a way that lets google see my text when they reply, all of my own security steps are worthless.
I like the email model such that anyone can install and run an email server. I'd actively push friends, family and colleagues to use a decentralised email replacement that was as easy to use and secure as TextSecure.
One thing I have learned watching the crypto forums over the years is that there are well calculated misinformation campaigns trying to dissuade people from using secure methods. I see it again and again and the people on this forum need to think carefully before swallowing this as sincere.
I would never never never trust a solution from Google or any large American corporation. They have just been caught lying about prism (Google) and taking bribes (RSA). These companies are now and always will be totally untrustworthy.
We really do need to let users manage trust, because trust is a rich concept. And humans are actually really good at trust, because we've been thriving and competing with each other in complex social situations for a long time.
The trick is finding ways to recruit people's evolved trust behaviors into an electronic context. That is, can we build meaningful webs of trust through repeated social interactions, just like in real life?
So it's not the mail client vendors who are best positioned to solve the problem, it's the social networks.
(Whether they want to solve the problem is a separate question.)
Also, about terrible mail client implementations: the problem is that, to not be terrible for most people, a client would have to be built into GMail (and work transparently there). The consequences of that are obvious, I hope. So no, thanks.
(confession: I myself am too lazy to use PGP)
Is it really fundamentally possible? The author asserts this without really backing it with anything. I can understand how OTR-like systems can work between a static pair of clients, but it is not entirely clear if it is possible at all to extend such a scheme to scenarios where message delivery is async and I might be using a set of clients/devices for messaging.
Even in its long form, it's relatively easy to generate different keys that have the same fingerprint.
Why? Last I heard, breaking PGP was equivalent to being able to factor large integers into a product of prime numbers. So, NSA is able to do that, and no one else can, no one in the public heard about it, no university research mathematician published about it, NSA has mathematicians who figured out how to do that but their major profs back in grad school don't know how, no one got a Fields Medal for it, etc.? I don't believe that.
What's going on here?
He means I need a Faraday cage? Okay, tell the NSA I have one; put it in place this afternoon.
He means the NSA has trained cockroaches that can wiggle into my hard drives while I sleep and steal all my data? If so, then fine. I'll spray bug killer.
Otherwise, why should I believe that the NSA could crack my PGP encrypted e-mail?
This is something the operating system can provide (or another application). No need for a 'special' client.
definitely makes it seem as though Satoshi was a group of people running many machines.
would be very interested to see more content like this in the future from other early-stars of the BTC world.
The SPARC CPU division brought down Sun, and I see it still puts up a good fight inside Oracle despite the Rock cancellation :)
Edit: even if the CPU became realistically available (in the high-enterprise sense), just imagine what RAM-to-CPU bus the CPU would have to sit on to be able to feed the beast, especially considering that it will run DB applications, not HPC for example.
My experience with using older SPARC servers for testing various things is that they're rather disappointing, both in terms of value and performance - "more cores" seems to be their guiding principle, and while this makes for impressive benchmark results and aggregate numbers, the speed of a single thread is pretty horrible; it's only in specific multithreaded applications that all the resources on the chip can be fully saturated. Meanwhile x86 servers cost far less and can handle different workloads better because per-clock, each core is several times faster.
This is much easier said than done, and the effort and complexity will vary greatly depending on your site/app. The example used (nondescript clothing app) is one of the simpler cases, whereby the cross-session and device user-state need not be maintained, or is at least pretty minimal.
For many app developers, though, the richness of their feature-set doesn't come across until the user has a detailed state, such as level achieved, past activity, preferences, etc. Without asking users to "commit", sites/apps need to associate state with an anonymous user.
Unfortunately, it's not quite trivial to maintain the concept of an anonymous user. For one, the lengths the mobile industry is going to in order to restrict the use of unique device identifiers make it complex to identify the same device across sessions. Moreover, anonymous users pose an issue for services with a value proposition behind their cross-device/platform support. Also, for small sites, it may not be trivial to introduce a data model that supports anonymous data, which either needs to be thrown out or eventually merged with account-linked data. Similarly, 3rd-party engagement and funnel analysis of anonymous users is also a hard problem: when the user does eventually identify themselves with an account, you need to merge their previously anonymous data into their account. Some services call this Aliasing.
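The aliasing step mentioned above can be sketched as a simple merge of anonymous state into account state. The merge policy and field names here are made up for illustration; real services would need per-field rules:

```python
# Hypothetical sketch of "aliasing": folding an anonymous user's state into
# a real account once they register. Field names and the merge policy
# (sum counters, account wins otherwise) are invented for this example.

def merge_alias(anon_state, account_state):
    merged = dict(anon_state)
    for key, value in account_state.items():
        if key in merged and isinstance(value, (int, float)):
            merged[key] = merged[key] + value   # counters: sum both sides
        else:
            merged[key] = value                 # otherwise account data wins
    return merged

anon = {"levels_completed": 3, "theme": "dark"}
account = {"levels_completed": 10, "email": "user@example.com"}
print(merge_alias(anon, account))
```

Even this toy version shows why it is a design decision, not a mechanical step: whether counters sum, or which side wins a conflict, depends on the product.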
I'll echo other comments that the content is sparse - this section specifically speaks as if registration count is the sole goal of the target audience. A comprehensive document would account for other conversion-like goals that site/app makers might have, and weigh the cost-benefit analysis of requiring registration.
What other options do I have to get an ad placed onto the landing page?
We are very keen to get feedback on the content that web developers want to see with regards to monetization. For example one area that I am keen to see us grow is building components in sites that optimize credit card data entry.
It is nice to make your game available in many languages, but getting translations that aren't terrible is hard, and I have never seen a clear business case for it. So I think the proper attitude is "there is not an obvious business case, but we are doing it because we want to."
Even if you don't plan on localizing, it's a good habit to move all your strings to one place like this, as you can quickly see in one place if all of your strings have consistent tone, style, and word usage.
* Punting on localization can make a lot of sense. It helps to do localization up front and plan for it, but that's when opportunity cost is at its greatest. It's also a marginal "multiplier" effect, and so it's not a make-or-break engineering item usually.
* I doubt localization to Russian is as good as Spanish, very often.
* Definitely agree that translations from actual users are leagues ahead of professional translator services, good thought to cultivate that.
* You can do it incrementally, and localize your app description prior to the app itself, which doesn't have the same engineering requirements.
URLs in HTML are treated as relative by default, so doing <a href="playism-games.com"> creates a link to http://www.fortressofdoors.com/was-localizing-defenders-ques...
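The resolution behavior described above is exactly what stdlib `urljoin` implements, so it is easy to see why the scheme-less href goes wrong (the page URL below is illustrative):

```python
# A scheme-less href is resolved relative to the current page, just like
# urljoin does; only an absolute URL leaves the site.
from urllib.parse import urljoin

page = "http://www.fortressofdoors.com/some-article/"
print(urljoin(page, "playism-games.com"))        # stays on the same site
print(urljoin(page, "http://playism-games.com"))  # goes where intended
```

The fix in the HTML is simply to include the scheme: `<a href="http://playism-games.com">`.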
I would think there are some number of people in the USA that appreciate having the option to play in their preferred language.
It'd be more interesting to find out how many people picked language X vs buying from country Y.
From the demo/documentation they do have, it's unclear why Flynn is better than Deis, Heroku, etc.
I had hoped that Flynn was/would be a tool for orchestrating complicated multi-container apps, not just deploying Procfiles. I don't need Procfiles; I need something that will let me integrate and orchestrate multiple Docker containers across multiple nodes. Most/many significant apps don't fit into the simplicity of Procfiles (we have over a dozen different services, some with relatively customized environments, all communicating with RabbitMQ middleware). At least for development, Docker containers have proven to be an ideal way to manage these services. As of yet, there doesn't seem to be a good tool for deploying them to production. I wish Flynn would tackle that head on (and document it!), instead of being yet another generic PaaS.
A more clear header would help with messaging.
- Fully militarized police with a Tank(!) and multiple snipers with assault rifles confronting unarmed civilian protesters.
- Tear gassing and arresting reporters
- Al-Jazera news crew was shot at and tear gassed 
- No fly zone over all of Ferguson
- Street level blockades & teargassing of porches to keep people inside
- No badges, tags or any identifying marks on police
- etc, etc,
This is a disgrace for America and a wake up call for all of us.
 https://pbs.twimg.com/media/Bu9CVPGIYAA_tFz.jpg:large https://pbs.twimg.com/media/Bu-N9uIIIAEImna.jpg:large
"""He was denied information about the names and badge numbers of those who arrested him."""
If hiding the badge numbers or other identifying marks distinguishing law enforcement officers from each other isn't already a crime it ought to be. And it ought to be one that disqualifies the officer involved from serving in any position of authority over the public.
There is no excuse by which law enforcement can expect to have both legitimacy and the cloak of anonymity. If there is one thing the last 4000 years of recorded history has taught us; it is that unaccountable power will be abused.
If our civilisation is to have a solid foundation of law, its law enforcement authorities must be more law-abiding than the average citizen, rather than less. As is so glaringly the case in Ferguson tonight.
> But it is hard to see why Fargo, North Dakota, a city that averages fewer than two murders a year, needs an armoured personnel-carrier with a rotating turret. Keene, a small town in New Hampshire which had three homicides between 1999 and 2012, spent nearly $286,000 on an armoured personnel-carrier known as a BearCat. The local police chief said it would be used to patrol Keene's Pumpkin Festival and other dangerous situations.
> Householders, on hearing the door being smashed down, sometimes reach for their own guns. In 2006 Kathryn Johnston, a 92-year-old woman in Atlanta, mistook the police for robbers and fired a shot from an old pistol. Police shot her five times, killing her. After the shooting they planted marijuana in her home. It later emerged that they had falsified the information used to obtain their no-knock warrant.
One of the scariest subtleties with these situations is how police officers always chant "stop resisting" regardless of whether the person is resisting. It's almost as if they are explicitly trained to repeat that mantra. It's eerie.
EDIT: Why the downvotes? Honest question, how much more are people willing to take?
This situation is going to keep escalating. If you visit Ferguson, you'll see the business-district smashed up but the residential areas nice, calm, well-kept - with families literally everywhere walking around. The community is united and organizing. I can only hope that the period of chaotic rage settles down into something strong, long-lasting, and effective. This would be a true tribute to Michael Brown.
Last - I want to mention that a St. Louis City Alderman/Protestor was also arrested tonight. He remains peaceful as his respected reputation depends on it, so one can only gather that it was to silence his filming.
An account of STL police two years ago: http://antistatestl.noblogs.org/post/2012/03/19/a-personal-a...
And some resistance: http://antistatestl.noblogs.org/post/2012/04/22/welcome-to-c...
Anonymous has claimed that they will be releasing the name of the cop who shot Mike Brown if they recover it: http://www.salon.com/2014/08/13/anonymous_released_alleged_a...
The US has such poor police training that this problem will not go away. There are also way too many people on the force who are mentally not fit for the job.
Sadly, the police appear to reflect the conscience of the country, where force rules over diplomacy: shoot first, ask questions later, and revenge over forgiveness.
Statistical mechanics and thermodynamics are the basis for a huge amount of technology and scientific models of the world, yet they rely on a fundamental assumption which is in some sense unjustified, known as the 'ergodic hypothesis': even though (classically) we know that the current positions of gas particles in a box can be determined from their positions in the past, in thermodynamics we make the (unjustified) assumption that their positions are actually random and independent of their previous positions. In other words, these models of the world are probabilistic, which contradicts our more fundamental models, which say it is deterministic (and even QM is deterministic, with the single exception of the Born rule). What he's doing here helps justify the probabilistic treatment, and helps us understand when it does or does not apply.
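A tiny numerical experiment makes the point concrete: for an irrational rotation of the circle, a standard example of a deterministic ergodic system, the long-run time average of an observable along one trajectory approaches the ensemble (space) average. The parameters below are arbitrary:

```python
# Ergodicity in miniature: time average along one deterministic orbit
# converges to the ensemble (space) average of the observable.
import math

alpha = math.sqrt(2)                            # irrational rotation angle
f = lambda x: math.sin(2 * math.pi * x) ** 2    # observable; space average = 1/2

x, total, n = 0.1, 0.0, 200_000
for _ in range(n):
    total += f(x)
    x = (x + alpha) % 1.0                       # fully deterministic dynamics

time_avg = total / n
print(round(time_avg, 3))                       # close to 0.5, the space average
```

Nothing random happens anywhere in this loop, yet the statistical prediction (the space average) is exactly what the deterministic trajectory delivers; that is the content of the ergodic hypothesis in this toy setting.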
I have always thought this to be one of the great 'foundations' questions in physics (the others being QM foundations/the origin of the Born rule, and the foundations of field theory). These are 'hard' and borderline philosophical questions, which most scientists (with good reason) simply assume to be true, to the point that they often find them uninteresting. Lately, though, there seems to be renewed interest in them.
From the video interview, I understand he has been living in Paris for a while but his English accent is surprisingly non-Brazilian (sounds Russian with his Rs and closed vowels).
30% discount code for HN crowd: HN2014
Also, the Useful Resources links at the bottom need hrefs and growthackers should be growthhackers :)
It would be useful to have some sense of the relative importance of the sites on your list. Maybe Alexa ranks, as a start? Or data from e.g. compete.com
1) Didn't see the HN discount, any chance that can be applied to my account retroactively? firstname.lastname@example.org.
2) There is a lot to fill out on this one form. Can I save a partially completed application? What happens when I hit the submit button?
For instance, there is probably one function of your system that is responsible for a huge amount of work. If everything is in one database (say SQL, Mongo, ...), you have a complex system that is hard to scale. If you split the heavy load out, you might find the scaling problems vanish (because the high load no longer has the burden of excess data), and even if there is still a problem, it is much easier to optimize and scale a system that does just one thing.
The most disturbing thing about microservice enthusiasts is that they immediately jump to: oh, we can write these services and clients in Cold Fusion, Ruby, COBOL, Scala, Clojure, PHP and even when we write them, the great thing is "WE DON'T HAVE TO SHARE ANY INFRASTRUCTURE!"
That's bogus to the Nth degree, because a lot of the BS involved with distributed systems has to do with boring things like serialization, logging, service management, etc.
I think you still want to use the same language, same serialization libraries, management practices, etc. across all of these services otherwise you are going to get eaten alive dealing with boring but essential problems.
With promise pipelining, if you need to make two RPCs to the same server, and the result of the first is going to be an input to the second, you can actually do it in one network round trip. The trick is to send the server a message saying "Hey, when you finish that first call, substitute the result into the parameters of this second call".
With this, fine-grained calls no longer imply an enormous latency expense compared to coarse-grained ones. Meanwhile, fine-grained APIs are cleaner and more composable, as my link above describes.
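The pipelining trick described above can be sketched with a toy batch protocol: the client sends both calls in one message, using a placeholder for the first call's result. Cap'n Proto does this for real; the names and wire format here are made up for illustration:

```python
# Toy promise pipelining: two dependent calls travel in ONE message, with a
# placeholder (Ref) standing in for the first call's result.

class Ref:
    """Placeholder for the result of an earlier call in the same batch."""
    def __init__(self, index):
        self.index = index

def server_execute(batch, methods):
    """Server resolves placeholders as it executes the batch in order."""
    results = []
    for name, args in batch:
        resolved = [results[a.index] if isinstance(a, Ref) else a for a in args]
        results.append(methods[name](*resolved))
    return results

methods = {
    "get_user_id": lambda name: {"alice": 42}[name],
    "get_profile": lambda uid: f"profile-for-{uid}",
}

# One "network round trip" carrying two dependent calls:
batch = [
    ("get_user_id", ["alice"]),
    ("get_profile", [Ref(0)]),   # pipelined: consumes call 0's result
]
print(server_execute(batch, methods))  # -> [42, 'profile-for-42']
```

The client never sees the intermediate user id before issuing the second call, yet only one round trip happens, which is the whole latency argument.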
It's unfortunate that CORBA gave distributed objects a bad name. Just like object-oriented design within a program is more expressive than procedural design, object-oriented network protocols are more expressive than the flat protocols we tend to see today. I've been working with object-oriented protocols a lot lately while using Cap'n Proto to build sandstorm.io, and I've surprised even myself at how much more elegantly I can express complex interactions.
CORBA only messed up in trying to make remote objects look the same as local objects. Everyone now agrees that was a terrible mistake. But making distributed objects work does not in any way require making them look exactly like local objects. Calls to a Cap'n Proto object look quite different from local calls, because you need to be aware of the network issues implied by the call. But I've found that the same higher-level OO design principles you might use locally translate remarkably well to Cap'n Proto interfaces.
It all boils down to OO programmers want their applications to be scalable and maintainable. They have decided the way to do that is through modularity. But we suck at enforcing modularity in a single code base. This has been proven time and time again. Microservices are just a sneaky way of forcing that modularity on ourselves. Instead of designing your system as a single ball of mud (monolith), you'll design your system as a puddle of mud (microservices).
It is just entirely too hard to write good, modular OO programs. This is why we hang onto every book, blog post and word the Object-Oriented Gods send down to us. OO could be a great and amazing thing for certain domains of programming. By all means, create monoliths and use Martin Fowler's Cookie Cutter Scalability solution because it is simple. But if you find yourself needing microservices, you're better off picking up a functional language where modularity comes naturally.
"Instead of pretending everything is a local function even over the network ..., what if we did it the other way around?
Pretend your components are communicating over a network even when they aren't?"
-- Solomon Hykes (of Docker fame) on LibChan
To me, it's pretty much anti-OO, and that's why I find it refreshing.
This is a really insightful description of the role of someone documenting software architecture.
It's not a language issue either. At this level we are talking about frameworks, models, domains, contracts, protocols, etc. This layer is not language dependent, although some languages are better designed to build frameworks that support these intents.
A classic example of how these assumptions creep into your designs is seen in the first chapter of Head First Design Patterns, where they discuss at length how to create the perfect object model for a duck computer game. When I read that, the first thing that came to my mind was, "Wait a minute, you are designing a computer game! Everything on the screen is a sprite. Sprites are moved around the screen by their coordinates once per game loop. How does the perfect duck object model help me here?"
Sounds very much like a hammer looking for a nail to me.
 fixed a few typos
This is a bit OT, but it seems like Angular apps have similar problems to distributed objects, where you can wind up making lots of network calls to retrieve one of these, all of those, etc. I'm curious what advice people have about that.
I guess the instinctive answer is to, well, let someone else solve the problem. Grab something existing/standard (REST API + REST client, RabbitMQ + msgpack) or something similar.
What it still doesn't save you from is managing basic distributed systems issues - network partitioning, timeout, asynchronous starts and stops. Maybe it is better because by building this distribution into the core of the system and not trying to abstract it away behind an API (like the author says) it forces you to deal with them explicitly.
Overall I still haven't decided if microservices is just one of those buzz words invented because the old ones (Object Oriented, SOA, etc) have gotten old and don't bring in consulting revenue anymore.
Software used to be shit, but at least fun, then we had software architects, and now software is just shit.
"I confess to feeling some kinship with Snowden. Like him, I was assigned to a National Security Agency unit in Hawaii, in my case as part of three years of active duty in the Navy during the Vietnam War. Then, as a reservist in law school, I blew the whistle on the NSA when I stumbled across a program that involved illegally eavesdropping on US citizens. I testified about the program in a closed hearing before the Church Committee, the congressional investigation that led to sweeping reforms of US intelligence abuses in the 1970s. Finally, after graduation, I decided to write the first book about the NSA. At several points I was threatened with prosecution under the Espionage Act, the same 1917 law under which Snowden is charged (in my case those threats had no basis and were never carried out). Since then I have written two more books about the NSA, as well as numerous magazine articles (including two previous cover stories about the NSA for WIRED), book reviews, op-eds, and documentaries."
As a substantive comment on the article, let me say that I find it interesting that Snowden himself thinks it is appalling that NSA's internal security auditing is so poor that NSA can't even tell which documents Snowden disclosed to journalists, nor can it tell how many other leakers may still be on its staff. This seems to be a completely plausible claim, and that would be a reason why many American voters or leaders of countries allied to the United States might desire the current leadership of NSA to resign and be replaced with more competent leaders.
1. The NSA exploited the firmware of a Syrian core internet router, and bricked it by mistake. This was an "oh shit" moment (sic). So in its eagerness to scoop up all digital communications, it killed the primary way for citizens to communicate in the midst of a civil war. Great.
2. There is a project called "MonsterMind", which 100% automates adversarial hacking in retaliation to detected attacks. Very Strangelove-ian, as the article says.
EDIT: Typo, thanks to not having had coffee in time.
Let's put into perspective 1 yottabyte:
All Gmail accounts (~500 million users * 10 GB/user = ~5,000 PB)
+ All Facebook photos (~2 billion users * 1 GB/user = ~2,000 PB)
+ All of Netflix's videos (1-5 PB)
+ Library of Congress (10-30 PB)
+ Wikipedia (0.0005 PB)
= ~7,000 PB = 7 exabytes = 0.0007% of 1 yottabyte!!!
1 Yottabyte = 250 billion 4TB hard drives.
A hard drive is about 4" x 1" x 5.75".
The Pentagon is a big building (6,636,360 sqft over 5 floors). If you started stacking hard drives inside it, you would need about 50 Pentagons to hold 250 billion hard drives.
At scale you might be able to make a 4TB hard drive for somewhere between $10 and $100.
1 Yottabyte would be $2.5 trillion - $25 trillion in hard drives. That's a couple USA GDPs.
Okay, I think a yottabyte clearly can't be what they mean because that's just unfathomable.
They also mention a 1 million sqft facility.
In 1 million sqft you can probably pack about 250 million 3.5" hard drives. If each drive were 4TB you'd end up with 1 million PB, or 1,000 EB, or 1 zettabyte.
So by Yottabyte they might (maybe) mean Zettabyte. Only off by a factor of 1,000.
Even still, all of the data of Gmail, Facebook, Netflix, Library of Congress, etc is still probably only ~10% of this data center.
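The arithmetic above is easy to sanity-check. A quick sketch in Python, using decimal units and the same rough per-service guesses as above:

```python
PB = 10**15  # decimal petabyte, in bytes

gmail     = 500_000_000 * 10 * 10**9   # ~500M users x 10 GB each
facebook  = 2_000_000_000 * 10**9      # ~2B users x 1 GB of photos each
netflix   = 3 * PB                     # midpoint of the 1-5 PB guess
congress  = 20 * PB                    # Library of Congress, midpoint of 10-30 PB
wikipedia = 5 * 10**11                 # 0.0005 PB

total = gmail + facebook + netflix + congress + wikipedia
total_pb = total / PB
print(round(total_pb), "PB")           # ~7023 PB, i.e. ~7 EB

yottabyte = 10**24
print(f"{total / yottabyte:.4%} of 1 YB")  # 0.0007%

drives = yottabyte // (4 * 10**12)     # 4 TB drives needed for 1 YB
print(f"{drives:,} drives")            # 250,000,000,000 (250 billion)
```

Swapping `yottabyte` for `10**21` (a zettabyte) makes the factor-of-1,000 point vivid: the drive count drops to a merely absurd 250 million.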
This piece is pretty interesting.
FYI, Glenn Greenwald is denying that any of the claims in this paragraph are true, and says that Wired never even contacted him or Miranda about the article:
(Plain text version in the history, at https://github.com/pflanze/wired-snowden-untold-story/blob/c...)
So in the middle of a war zone, the US conducted sabotage against the core infrastructure of another nation, with unknown cost to property or human lives.
It really should be seen as the obvious reason why hacking is not an acceptable tool to use in peacetime against other nations. It's not a defensive weapon, it hurts people, and it's done with no accountability whatsoever.
I'm honestly curious why so many people are willing to take Snowden's claims at face value. The NSA rightly got a lot of flack for the softball interviews on Dateline a few months back, but it feels like the general consensus is that the softball interviews with Snowden are beyond questioning.
I hope he has Vitamin D stocked.
They've overridden scroll events so they at best don't work properly. Scrolling on a laptop gives you a weird non-mapping slide animation.
This is seriously one of the most unreadable articles, from a design sense, that I've ever seen.
Edit: Can somebody tell me what was downvote-worthy about this comment? This is getting ridiculous.
If the article presents a different answer than what's already known through Snowden's statements communicated to Laura Poitras and Greenwald, then it's probably not true; and if it repeats the same stuff, this is obviously a stupid question to ask and the article is just marketing B.S.
I went to the one in NYC a few years back and it was not a good experience for me. I traveled from Boston and was super excited. Just a big letdown.
PG was like a celebrity there. Impossible to get a word in with him. Same with other famous founders. Basically everyone was just circling and trying to figure out who to talk to next. It was super awkward.
My takeaways from NYC:
1.) PG looks exactly like his pics and acts the same. It's pretty hard to get to talk to him (due to all the people clamoring for his attention).
2.) Some famous YC alumni are stuck up. Never met anyone as stuck up as the non-technical reddit co-founder.
3.) Justin Kan is the nicest person. Pure respect for him.
4.) In NYC at least, hordes of MBAs were trolling around looking for "technical cofounders". Met a really rude person who actually introduced himself as having an MBA from Harvard.
San Francisco might be different.
I've watched videos of past events, and they seem to follow the TED model of being amusing and inspirational but not particularly informative; I'm too cynical to be interested in "inspirational". On the other hand, I get the feeling that the hallway track might be good.
Anyone know who else they might be bringing in? Does Zuckerberg do it every year?
> 92five app by Chintan Banugaria is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.
Creative Commons licenses are not meant for software. Please choose the AGPL or something similar for code.
You waste a lot of vertical space. I would suggest putting the elements in the top-most part of the site closer together, and maybe stripping some text, like this: http://i.imgur.com/9MjNLvy.png That way the full screenshot has a better chance of appearing on the page.
Don't use slashes when you can use a word instead. "No-one can see / access your todos." could be "No-one can see or access your todos.". You are using "to-do" elsewhere which I prefer. I would recommend also using "To-Dos" in that header.
You say "I am sure you will love the design.". I would not use "I" in that page unless you introduce yourself first.
Also spotted "Yes its free." -> "Yes, it's free."!
The praise is hidden here! The product itself looks slick and useful. Personally I am not a fan of flat design but you seem to have pulled it off nicely. Self-hosted tools are the best, thanks for doing that! I really really really suggest you make the landing page less annoying though. :P
Scrolled to the bottom quickly, had to wait for it to animate in so I was staring at a blank page for a few seconds.
Can't be bothered to try the product, since I'll be old and gray by the time it boots.
First, creating a project doesn't work, after putting in the required info and clicking create I'm taken to an error page with no feedback as to what actually happened, even though the screen says 'something went wrong and we've noted that'. Also, requiring to add collaborators even if it's yourself is redundant, this shouldn't be required.
Tasks - The main tasks screen really isn't useful at all since each 'Task' takes up a huge amount of space, and all the links on the task card don't do anything, and even when sub-tasks are entered they don't show on the card. This section really seems like it should be tied into projects, not a standalone section, especially since there's a 'Todos' section as well.
In many places where input is required, it's not immediately apparent that the colored title area is editable or requires input.
There's quite a few little UI tweaks that need to be made, for instance the line height in the quick notes section on the dashboard does not match the notebook lines, so typing anything in there looks sloppy.
I use a very popular SaaS right now but would actually rather self-host (I like to own my data.)
Haven't had a chance to install it yet but from the look of it and the general feedback I could see my company making a switch if it all checks out.
I don't need responsiveness, HTML 5, a mobile app or any of that. Just gimme something that works!
I will take a look at this later though.
I have been liking ScrumDo, which also allows you to self-host the application (in addition to offering to host it for you, which is their profit model).
Curious, what kind of database (if any) backend is it using? If it's not using MySQL or anything heavy like that, I'm sold!
First you had to hand-map each section of the object you're projecting the texture onto. This involved having the projector project a pattern while you moved and deformed various mapping polygons (using a mouse) until you didn't have any bleed or holes in the projection. If someone moved (or, more likely, accidentally bumped) the projector or the target object, the polygons had to be manually adjusted. It seems these guys have automated the process of mapping the object in physical space against a precomputed model. They had some bleed above the top of the chair in their demo video, so it's not yet perfect.
If you're Ikea and can set this up in the store for couch shopping, you can just have people click the fabric combination they want to see. I see this being very useful because they can't show every fabric. Granted, Ikea stores are huge to accommodate lots of floor models, but this would still be a draw.
If Ikea had a whole bedroom or living room with projectors all around, you could try out an entire decorating scheme without having to pull everything together. I'd take the ferry from Wall Street to Red Hook just to play with it for a bit.
So rock on VizeraLabs, and start talking to Ikea and BoConcept.
Presumably the room also has to be dark, which might make it a bit harder to understand how the furniture will look in context. I'm curious: is there any projector technology that could make something like this work in brighter rooms?
Bottom of the idea barrel === scraped.
Love those graphics of the filaments and voids of the universe. Like bread or cake as it bakes and clumps around bubbles.
These scales are mind-blowing:http://en.wikipedia.org/wiki/Galaxy_filament
Just make a monthly recurring entry in your calendar that says "Check SSL certificates".
Well if you're a large bank or a heavyweight payment processor where an outage means lost $$$$ and not only $, you could easily have a few SSL certs from various root certs ready and roll one of them out once the sh*t hits the fan.
I was honestly expecting them to reference a monitoring service. It is possible to do for free with Nagios if you have a Linux box kicking around on your network. There are also paid services that will monitor your certificates and send you a nice email when there are 30 days left to renew (including several SSL registrars).
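For the do-it-yourself version, here is a minimal sketch in Python (stdlib only) of the check such a service performs. The 30-day threshold and hostname are placeholders; `check_host` needs network access, so the demo at the bottom uses a made-up expiry string instead:

```python
import socket
import ssl
import time

def days_until_expiry(not_after, now=None):
    """Days remaining, given a notAfter string in the format returned by
    SSLSocket.getpeercert(), e.g. 'Jun 26 21:41:46 2025 GMT'."""
    expires = ssl.cert_time_to_seconds(not_after)
    if now is None:
        now = time.time()
    return (expires - now) / 86400

def check_host(host, port=443, warn_days=30):
    """Fetch the live certificate and warn when renewal is due (needs network)."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    days = days_until_expiry(cert["notAfter"])
    if days < warn_days:
        print(f"renew {host}: only {days:.0f} days left")
    return days

# Offline demonstration with a made-up expiry date:
print(days_until_expiry("Jan 1 00:00:00 2030 GMT",
                        now=ssl.cert_time_to_seconds("Dec 2 00:00:00 2029 GMT")))  # 30.0
```

Run something like this from cron once a day and pipe the output to email, and you've replicated the calendar-entry approach with one less thing to forget.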
openssl req -nodes -newkey rsa:2048 -keyout www.example.com.2014.key -out www.example.com.2014.csr -subj "/C=COUNTRY/ST=STATE/L=CITY/O=COMPANY/OU=/CN=www.example.com"
Process:
- Know where your key is
- Know how to generate a new CSR from that key
Also see https://www.ssllabs.com/downloads/SSL_TLS_Deployment_Best_Pr... point 1.2).
BTW there is NO reason to regenerate the CSR if you reuse the private key.
Very annoying that people would "discover me" on both Plus accounts when I did my best to hide both profiles.
The issue is that Google+ leaks information: your profile picture and full name get returned from google searches, even when all privacy settings are turned on.
If you are in a position where security and privacy are somewhat important (even if it's only to yourself), your only resort is to not use Google+ and by extension not be able to use Hangout, which becomes an issue when other people in the organisation need to communicate with you.
Fortunately, https://appear.in works great (even in China, unlike Hangout) and there's no extra software to install.
That said, I think it bodes poorly for the future of G+, since Hangouts is one of the biggest things it has going for it.
Real Name? Ha!
Elaboration: You can buy email templates on ThemeForest for ~$2 and they'll be prettier but it is very, very rare that they are actually as thoroughly tested as these are. Source: The guy who deals with bug reports like "It's unreadable on [insert a device that neither the designer nor the email sender owned]" way more often than he'd like to.
Fun story, which I'm telling you because it is a fun story and not because I want to scare you off using Themeforest designs: I once bought, and promptly shipped, a transactional email template. It happened to include a reproducible remote crash against at least three major versions of Outlook. (After finding this out the hard way, I reported it to Microsoft's security, which looked into it for a few months before deciding "That sucks but it doesn't look like remote code execution is actually exploitable so phew dodged a bullet there, didn't we.")
Also, if you didn't catch it in there: the [premailer] library is awesome, and helps make email templates more manageable (use CSS/LESS/SCSS styles like normal, then run your HTML email through premailer before shipping and it'll inline everything for you). I use the Python library with Django to preflight emails before sending. Works like a charm!
Also available in Ruby, Node, and PHP flavors
Gotta love competition.
> So what is transactional email? Coming from a MailChimp state of mind, you might simply think of it as "anything that isn't bulk". Basically, it is email sent to an individual based on some action. It could be:
>
> * an action they took directly
> * an action they were the target of, or
> * perhaps even inaction on their part
Thanks a lot for the resource!
If anyone is interested in collaborating, I'm thinking about converting these (and the SendWithUs ones) into ActionMailer layouts and views for use in Rails apps.
Conservation management often focuses on counteracting the adverse effects of human activities on threatened populations. However, conservation measures may unintentionally relax selection by allowing the survival of the not-so-fit, increasing the risk of fixation of maladaptive traits. Here, we report such a case in the critically endangered Chatham Island black robin (Petroica traversi) which, in 1980, was reduced to a single breeding pair. Following this bottleneck, some females were observed to lay eggs on the rims of their nests. Rim eggs left in place always failed to hatch. To expedite population recovery, rim eggs were repositioned inside nests, yielding viable hatchlings. Repositioning resulted in rapid growth of the black robin population, but by 1989 over 50% of all females were laying rim eggs. We used an exceptional, species-wide pedigree to consider both recessive and dominant models of inheritance over all plausible founder genotype combinations at a biallelic and possibly sex-linked locus. The pattern of rim laying is best fitted as an autosomal dominant Mendelian trait. Using a phenotype permutation test we could also reject the null hypothesis of non-heritability for this trait in favour of our best-fitting model of heritability. Data collected after intervention ceased shows that the frequency of rim laying has strongly declined, and that this trait is maladaptive. This episode yields an important lesson for conservation biology: fixation of maladaptive traits could render small threatened populations completely dependent on humans for reproduction, irreversibly compromising the long-term viability of populations humanity seeks to conserve.
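For readers unfamiliar with the method, the "phenotype permutation test" the abstract mentions can be illustrated in miniature. The mother-daughter pairs and the concordance statistic below are invented for illustration; the paper's actual test runs over the full species-wide pedigree:

```python
import random

random.seed(42)

# Synthetic data: 1 = rim layer, 0 = normal layer. Under a heritable
# (dominant) trait, daughters tend to match their mothers.
mothers   = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0]
daughters = [1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0]

def concordance(ms, ds):
    """Fraction of pairs where the daughter's phenotype matches her mother's."""
    return sum(m == d for m, d in zip(ms, ds)) / len(ms)

observed = concordance(mothers, daughters)

# Null hypothesis: phenotype is unrelated to pedigree. Shuffle the daughter
# phenotypes many times and count how often chance alone produces concordance
# at least as high as observed.
n_perm = 10_000
hits = 0
for _ in range(n_perm):
    shuffled = random.sample(daughters, len(daughters))
    if concordance(mothers, shuffled) >= observed:
        hits += 1

p_value = hits / n_perm
print(observed, p_value)
```

The logic generalizes: any statistic that should be inflated under heritability can be compared against its distribution under random reshuffling of phenotypes.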
[NB A fisherman explained this to me in Mallaig Scotland, none of the fishing boats I've been on were so high tech!]
"Our ancestors traveled the beige tube to new waters, while the two-legged creatures watched and guided them."
NHAAP is the National Hydropower Asset Assessment Program, and it's put together a large chunk of the groundwork for the DOE's hydropower vision project.
While doing a little googling, I found an article claiming $7 million in fish ladder work after structural damage forced a reduction in water level. So perhaps this solution could be cost-effective or quickly put in place if damage occurs just before a run.
 http://damnationfilm.com/ http://www.columbian.com/news/2014/apr/12/crack-in-dam-force...
And I love the thinking that went into this invention. I really appreciate the novel approach although I can't help but wonder how they've dealt with the issue of friction and how that might affect the fish - I imagine that there's a fair amount of heat created over such a long run, even at the relatively low speeds described in the videos.
I run the Ghostery extension and a year or so ago I noticed that when visiting YouTube ~15 analytic trackers were being blocked. Turns out a couple of extensions were injecting tens of trackers into popular sites (without my express permission), and I would have had no idea unless I had another extension to block and report this activity.
My girlfriend's computer is worse - her extensions seem to inject actual adverts into lots of her pages. I asked her why there was an obnoxious "click the bottle to win 1000000$" Flash advert on Facebook and she thought it was just how Facebook is. Same thing for YouTube and other popular sites.
Due to their response and lack of ability to understand security issues I stopped using them, it's a shame to see they are not any better after 6 years!
Usually stay wary of signing up for anything which tilts towards 3.
Is that part of SimilarWeb Pro? It's not clear from the website how their service could be used to monitor the web client traffic of specific companies. An independent reference on the quoted claim would be helpful.
Why? Because the feeling the demo evoked for me was: "Uh oh. This looks like a micro-manager's wet dream".
So rather than just have "@John please !phone client" perhaps the demo would itemise the meeting minutes with: "@John said he would !phone client and discuss requirements".
Perhaps this is simply because I have had the unpleasant experience of working for panic-driven micromanagers before. But I think you really want to make sure your landing page resonates with "Awesome - this is going to make my life easier" vs. "What Pandora's box of hell will this tool unleash in my organization and work life?".
Product definitely looks useful though, and the above comments are about making sure you present it in its best light. Good luck!
"If your work isn't ready for people to try out yet, please don't put "Show HN" in the title. Once it's ready, come back and share it then.
For example, blog posts, email signups, and fundraisers can't be tried out, so they don't count as Show HNs."
Also, minor grammatical nitpick: "LESS MESSAGES" should probably be "FEWER MESSAGES".
You use "less" when the item in question is not measured in discrete units - "there is less rain today than yesterday."
You use "fewer" when the item in question is measured in discrete units - "there are fewer raindrops today than yesterday."
Messages are discrete units.
Was hoping this would take minutes from Word as 95% of secretaries will use, somehow extract action items, and then help communicate/track them.
Everything shown here I would just manage with email. It's not outdated, it works, and it's the lowest common denominator. I run a small nonprofit organization and was hoping this might be helpful because action item followup is a pain point. But I wouldn't use this.
For a reference, this is an approximation of what light red & light green look like for someone with protanopia: http://i.imgur.com/EGSlsmB.png
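Simulations like the linked image are often produced by applying a single 3x3 matrix to RGB values. The matrix below is one widely circulated rough approximation for protanopia (an assumption on my part, not necessarily the method behind that particular image; faithful simulators, e.g. Brettel et al.'s, work in LMS cone space instead):

```python
# One commonly circulated 3x3 RGB approximation for protanopia simulation.
# The exact coefficients are an illustrative assumption, not a standard.
PROTANOPIA = [
    [0.567, 0.433, 0.000],
    [0.558, 0.442, 0.000],
    [0.000, 0.242, 0.758],
]

def simulate(rgb):
    """Apply the matrix to an (r, g, b) triple of 0-255 values."""
    return tuple(
        min(255, round(sum(m * c for m, c in zip(row, rgb))))
        for row in PROTANOPIA
    )

light_red, light_green = (255, 128, 128), (128, 255, 128)
print(simulate(light_red), simulate(light_green))
```

Note how both colors land on desaturated yellowish tones with red and green channels nearly equal, which is exactly why that pairing is hard to tell apart.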
In particular, I don't think that people without a software background are going to be comfortable with the abundance of syntactic sigils. Even in the twitter-native world, I think it will make the product seem intimidating and unnatural (at least at first).
This is especially a concern because the value of using them is not immediately clear (excepting @person).
So you might want to change that word to something a broader audience would understand.
I wish there weren't 69,772 people ahead of me... (first world problems) :( or at least some way to gauge what that means.
Just thought you'd want to know.
The hypothesis here is that early bacteria/archaea basically worked a lot like a car battery, with a + and a - terminal; the permeability of the cell membrane allowed H+ to flow through the cell, turning the little ATPase crank to churn out ATP. The unspoken hypothesis is that organisms like this probably lived on the rock surface of a geothermal vent, where the aqueous phase was acidic and there was another 'layer' that was alkaline (probably on the rock face?).
Over time a Na+/H+ exchanger was added to increase the movement of H+ across the membrane, which made the ATPase crank turn 60% faster. Eventually, in phase 3, ion pumps really supercharged the ion gradient and allowed these organisms to move into environments that didn't need a bi-phasic H+/OH- layer. It's likely that the ion pumps and non-permeable membranes formed as the little guys moved out of the geothermal vents and diverged into new environments, giving rise to the divergent archaea and bacteria.
Due to my genetic defect in the mitochondrial respiratory chain (ETC), this one caught my attention. Apart from its potential therapeutic application, it shows how a xenotransplantation at the molecular level can rescue a defective molecular structure in a living creature that is orders of magnitude more complex than the donor.
If you visualize the ETC as an engine with 5 different steps, with step 1 being broken: you go and get a repair piece from a much simpler organism, a yeast in this case, plug it into the receptor and, lo and behold, the engine starts working again.
And not only that: the piece from the donor serves the same function (pumping protons to complex II of the ETC), but it is "internally" much simpler. This is molecular "Lego" across species!!
As in, someone constructing something like the proposed primitive leaky LUCA, showing that it could survive and reproduce in the described environment, and then observing or engineering the changes that'd let it leave the vent?
Or is that far too ambitious to be realistic?
Could we in principle use some genetic engineering to make this process more efficient?
I've seen a lot of horseshit patents asserted against start-ups. If there was an organization that followed the troll around and offered defense services against all of their defendants, it would make trolling a lot harder, and might reduce the numbers of these parasitic lawyers involved in this shameless trade.
I just read about a Fish & Richardson patent partner who started filing his own "inventions" with the patent office, based on slight modifications of the patents he was filing for clients, and then sold those patents to trolls for huge sums. It's actually really easy to write patents focused on sabotaging your clients if you are a lawyer and become familiar with their future roadmaps.
I know a bunch of trivial claims I could write right now and they would be worth a few million in a couple years, because Google, Facebook, and others would have to move in that direction in a few years (related to Machine Learning and image recognition).
All you have to do is follow conferences, understand the papers, and then write some trivial and obvious evolutions of those techniques. Obviousness is something defendants find extremely difficult to prove for highly complex technology, because the juries are made up of people who have no idea what programming is, much less Machine Learning, and the judge is probably some moron who thinks he is really smart and assumes that the patent office is full of diligent geniuses ... and so he will give a lot of weight to the plaintiff's "USPTO certified" claims.
All it takes is for a programmer to be involved in one patent litigation and you see the patent system for what it is. A colossal system of giant, continuous, expensive injustice implemented in the hope of preventing an extremely rare form of injustice (when a true original inventor is cheated by a shameless larger company).
Imagine we institute an expensive system of highly trained commandos to follow every nerd in America around in high schools across the country, to protect them from bullying and to be their friends. It would certainly stop all physical bullying. But would it be worth the giant overhead/expense?
That is what we have to start asking ourselves. Even if the patent system prevents some rare injustices, WTF, is this continuous, and overwhelming cloud of uncertainty for every start-up and company worth it?
I feel like China and India are doing quite alright without overburdensome strong patent protection. And Europe seems fine with a hamstrung software patent system. And even in the US, Microsoft, Oracle, Adobe, IBM, and Apple got their start before software became patentable ... and they all did, and are doing fine.
If you see someone arguing for patents, they are almost always some fucking lawyer, troll, or someone sitting on a giant portfolio. The people actually making software every day don't want this shit system. VCs that fund start-ups, don't want it ... even though you would expect they want it, to protect their investments.
- Prof. Jonathan Askin - @jaskin - runs the clinic, and trusted us to try this experiment.
- Maegan Fuller - @mafuller21 - did the lion's share of research and writing. Brilliant and dedicated student. She just took the bar exam.
- Jorge Torres - @jorgemtorres - Guy who actually knows patent litigation. Too bad he dropped out of law to be a VC. Pitch him :-)
"After reading it, and weighing the recent Supreme Court decisions, the troll simply dropped its case against CarShield. After months of dedicated work, the clinic students deserved a gavel-banging judicial decision in their favor. All they got was a quiet withdrawal. But I think we can still chalk it up in the win column. The case is dismissed (for now), the students learned real patent litigation skills."
Does the decision encouraging "fee shifting" require that the case go to trial? Does it require that the fees actually be paid by the defendant? Or might the law school students still be able to receive payment by the troll for their pro bono defense? It seems like the "new standard" would be much more effective if it also applied in cases like this.
This is a little disingenuous. You know the patent doesn't cover the "gist" or any particular figure. It covers the claims (which you don't mention at all, even in passing). And for some reason, you don't even tell us what the patent number is so we can look at it for ourselves!
From a little googling, I suspect that we're talking about Pat. No. 6,775,356. But why hide the ball and characterize the patent as "not particularly innovative" when you could just let people see it for themselves?
When reading articles on Medium I feel like I am not only not wasting time but acquiring knowledge at an extremely fast pace.
I wonder whether invalidating patents that trolls commonly use is a good use of a law student's time.
Yet the startup and the judicial system already lost time on this. There should be a fee for withdrawing cases like this.
I'm glad they didn't have to pay the troll, but I also hate when the troll doesn't get what it deserves, either: losing.
Stick it to them! Good work.
However, it shouldn't be that way anymore--not after Citizens United. I'm fighting a lawsuit about this issue right now, and if I win (however unlikely), corporations will be able to represent themselves against patent trolls.
Is it difficult, confusing and complex work? Yes. Is it any harder than programming, or anything else a serious startup would do? Not really. And it beats paying a law firm six or seven figures.
The case is:
Question 1: has the lawsuit been filed in an odd/irrelevant place? Followed by some subquestions to be more precise. If so, fill out this form, include the addresses of .. and ... and we'll send a form letter to them for you, asking for a dismissal.
Question 2-5: keep stalling and asking for dismissals based on various reasons.
Question 6-10: try some other ways to get the troll to drop it, for instance by presenting an example of obvious prior art
Of course, all the letters include references to the relevant higher court decisions.
While helping out gives the students experience, it's not reasonable to consider this any sort of real option beyond the occasional situation in which a startup can find a law student who takes on a single case as part of their curriculum.
Note that there's no good exact opinion about the One True Size of the Internet; every provider we talk to has a slightly different guess. The peak of the distribution today (the consensus) is actually only about 502,000 routes, but recognizably valid answers can range from 497,000 to 511,000, and a few have straggled across the 512,000 line already.
It's interesting how they explain that since there's no true consensus for the actual size of the routing table, the "event" of crossing the 512k barrier has frankly already begun ... and, so far, hasn't been catastrophic, nor likely to be.
and that doc goes on to explain how to increase the limit at the cost of space for IPv6. Worse: the sample code (which everybody is going to paste) doubles the space for IPv4 at the cost of nearly all the IPv6 space, even though we should soon cross the threshold where we'll see more IPv6 growth than IPv4 growth.
Obviously this only finds routing level problems. We can send a /17 to you just fine, but if you're having an IGP problem and sending every byte of it to null, well, from the BGP perspective that's just fine. Much as if you insist on sending us RFC1918 traffic we'll drop that route and traffic for you just fine, just like we had to eat your 0/0 route you're trying to get us to advertise to the entire internet. I think my head still has a flat spot from hitting it on the desk arguing with people.
It's been a decade since I did that stuff professionally at a regional ISP, and I really don't miss it. Not much, anyway.
WordStar: Used "X" to Exit to system in its main menu (https://www.flickr.com/photos/markgregory/6946218793/?rb=1) - I do not know the revision shown in the screen shot.
According to Wikipedia (http://en.wikipedia.org/wiki/WordStar) WordStar was released in 1978. Which moves the date back to at least 1978 to use X for exit.
However, there is possibly a very simple explanation that the blog post overlooked. In text menus, such as WordStar's, which were quite common for a lot of software from that era, using the word "Exit" to mean "leave this program/application" was also common. When one goes looking for a single-character mnemonic for "Exit" to build in as a keystroke to activate the "Exit" command from the menu, one has four choices: [e] [x] [i] [t]
Since [x] is an uncommon letter, while e, i, t, are more common, and therefore more likely to be used for triggering other commands in the menu(s), choosing [x] to mean exit meant that the same character could likely be used as a universal "leave this menu" command key across all the menus.
Which would then lead to the common _F_ile->E_x_it command accelerators in drop down style menus (whether in a GUI or in a text menuing system). [x] was unlikely to have been used for the keyboard accelerator for other entries in the "file" menu, so picking e[x]it was a safe choice.
It is not a far reach from _F_ile->E_x_it using [x] as its accelerator key to labeling the title bar button that performs the same function with an X as well, to take advantage of whatever familiarity users might have with the drop down menu accelerators
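The letter-frequency argument above is easy to check empirically. A toy sketch in Python over an arbitrary, invented sample of era-appropriate menu command words:

```python
from collections import Counter

# An arbitrary sample of menu command words (my own invention, not from
# any actual WordStar menu) to illustrate relative letter frequency.
commands = ("open save print copy delete insert edit find replace quit exit "
            "create rename format margins justify center spell merge help")

counts = Counter(c for c in commands if c.isalpha())
print(counts["x"], counts["e"], counts["i"], counts["t"])  # x far rarer than e, i, t
```

In this sample [x] appears only in "exit" itself, so it collides with nothing else, exactly the property you want in a menu accelerator.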
That said, since the "X" in this case is white on a black background, I always interpreted the icon as four arrows pointing inward to indicate a shrinking/disappearing motion. In fact, when you closed a window, GEM would play an (inelegant) animation akin to the Macintosh of the time, composed of a sequence of boxes first shrinking from the size of the window to a small box and then shuffling that off to the top left of the screen.
As bemmu points out, the maximize button (at the top right in a GEM/TOS window) is four arrows pointing outward. Incidentally, GEM did not have a notion of "minimize."
Put another way, although I find the Japanese inspiration argument interesting, I don't think there's a whole lot to it. I think it's a fun coincidence.
In any event, thank you for the trip down memory lane and for the fun screen grabs!
NextStep 0.8, '88 vintage.
I wonder where the author got the idea that the [-] button at the top-left was a close icon. It was the "Control Box", a menu icon. AFAIK it's still there, just invisible -- hit alt+space to open it.
Disclaimer: I'm currently unable to test that.
Edit: seems Wordstar used X too, probably starting in 1978.
There is no reason to suppose that the GUI usage was inspired in any way by exotic Japan. The X as "cancel symbol" has been quite common in the West, and indeed worldwide, for millennia.
Or crossing-out an item to "delete" it on the page?
Maybe it's because I was used to Windows 3.11, where you had to actually double-click the [-] button to exit an application.
If the icons in upper left and right are also like that, then the upper left icon is actually four little triangles pointing inwards and not an X. The one on the right is four little triangles pointing outwards.
(Or it could be an X)
A behavior still present in modern versions.
No [x] to close these 1980s text editors either. X was commonly used to delete characters in-line, but not to close the program.
Hmm... I've used :x to write+quit in Vim for years. And, :X is to encrypt+quit. Don't have a year when that was added though. Could be fun to try and dig that up.
One quick thing: IIRC, in Windows 2.0 and 3.0 the [-] button in the upper left wasn't "close". It was a small menu that happened to have Close as an option.
edit - Here's Arthur, the precursor to RiscOS, in ~1986 - http://www.rougol.jellybaby.net/meetings/2012/PaulFellows/10... - It has nice X icons.
I clearly remember that for closing windows one could do alt+f4 (which was itself a shortcut to Close) or open the file menu (Alt+F) and select eXit.
I can't check but I believe it was the same for Write and notepad as well and any other programs that had the Exit option.
So maybe that's where the Windows 95 developers took inspiration for the X icon.
To me, the [X] icon in graphical windowing software (not in WordStar, Vim, etc.), seen not as a letter of the alphabet (after all, the maximize and minimize icons aren't letters either) but simply as a picture, suggests something collapsing: something large in its normal state whose borders collapse toward the center until it disappears. Like when you turn off an old CRT television (or an Android-powered phone).
I think it's incredibly important to note the diversity of subjects consumed and the importance of literature in these children's upbringing.
http://arxiv.org/pdf/0811.2362.pdf, Counting closed geodesics in Moduli space (PDF)