* Wikipedia provides a good summary of replication attempts: https://en.wikipedia.org/wiki/Ego_depletion#Reproducibility_...
* Slate covered the topic here: http://www.slate.com/articles/health_and_science/cover_story...
* And here is a recent, large replication study: http://journals.sagepub.com/doi/full/10.1177/174569161665287...
As a result, I would hesitate before using "ego depletion" as an excuse to rationalize a lack of self-control (e.g., giving in to "cheat foods" or being irritable/impulsive). Whether or not "ego depletion" is real, science has not yet adequately validated the theory.
Moreover, there is a risk to accepting the theory as true: because one believes in "ego depletion," one can rationalize a lower degree of self-control than one might otherwise have exercised. This creates a self-fulfilling prophecy.
I think it is fair to assume that, given the current research, "ego depletion" is no more than a reasonable hypothesis. Willpower may not fit the "finite resource" model at all.
I've been practicing mindfulness throughout the day since I got my Apple Watch (through the Breathe app). I think mindfulness would be good for hackers. For physical tasks you could lift weights, but for mental tasks your goal is probably reducing anxiety or frustration. Or preserving flow (Pomodoro supposedly does this, but I could never get into it).
On the other hand, if I get a good flow going and I'm uninterrupted, I'll probably forget to check HN and will keep working until I actually get stuck.
Others have studied this, including John Cleese, as detailed in his lecture on Creativity: https://www.youtube.com/watch?v=9EMj_CFPHYc
Don't draw conclusions. Only ask questions and observe. This stuff cannot be explained, and it cannot be logically thought about. It must be seen and experienced.
Don't doubt what I have to say. Go check for yourself.
The quality of the meals I'm sending out (plating and decoration wise) is nowhere near as good as it is on Mondays.
I need more chefs :-p
Lessons learnt by the NSA - never overestimate the skill level of your network admins.
Lessons learnt by Microsoft - never underestimate the loyalty of your Chinese Windows XP users; both XP and Win10 have 18% of the Chinese market.
Lessons learnt by the Chinese central government - the NSA is a partner, not a threat; they build tools which can make the coming annual China-US cyber security talks go smoothly.
However, MalwareTech's sinkhole intervention has bought enough time for patches to be pushed out, so at this point it is absolutely imperative that everyone apply these patches as soon as possible.
Not sure I'd be singing his praises if his rash decision had triggered the deletion of the encrypted files.
Even though this fortunately turned out to be false, what if it had been true? Would the security researcher be held in any way accountable for activating the ransomware? If I were the author, I might be a bit more careful in the future before changing factors in the global environment that have the potential to adversely affect the malware's behavior, but of course I'm not a security researcher, so I really don't know.
 I suppose a domain could probably be made to appear unregistered after being registered - depending on the actual check performed - but there are other binary signals (e.g., the existence of a certain address or value in the bitcoin blockchain) that might not be so easy to reverse.
Using a domain name's registration state as a boolean is quite a high-abstraction-level thing to do. Is that a regular technique?
I'm not the most diligent follower of security news, but I'm pretty sure that SMB network sharing is riddled with security vulnerabilities, latency issues, etc, and is generally wildly unsuitable for being left wide open to the entire internet. How could any institution with a competent IT department not have had this service firewalled off from the net for years?
The malware could have just as easily used the registration of that domain as a flag to start deleting data, no?
Well, I guess maybe they didn't want things to get too out of hand and now if they want they can be back up soon with that fixed.
Uh, no. Here's an archived copy:
EDIT: After looking explicitly for it I found www.iuqerfsodp9ifjaposdfjhgosurijfaewrwergwea.com.
(A very important point at the bottom of the article)
I was wondering why they didn't just do a simple variant:
1) Instead of relying on DNS, where anyone can register a domain, why not create a user account on some well-known forum site, like HN or Reddit.
2) Open the site, look for the user's page, and check their message titles by hashing them and comparing against a hash embedded in your code.
3) Detonate if you don't see the code word, or the user account doesn't exist.
This would have the useful characteristic that you could start/stop the attack using just an internet browser, anywhere. And the code word you are after would be cryptographically hashed, so the defenders would have to somehow recover your keyword from the hash. Heck, you could confound everyone by turning the thing on or off according to location, time of day, and so on.
For extra points make it a blockchain thing. They're already using that for payment, right?
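The hashed-keyword check in step 2 is easy to sketch. Here's a purely illustrative Python version (the phrase, digest, and function names are all made up for the example; a real scheme would scrape the titles from the forum account rather than take them as an argument):

```python
import hashlib

# Only the SHA-256 digest of the kill phrase ships in the binary, so
# defenders can't read the keyword out of the code, but the operator
# can flip the switch just by posting the phrase as a message title.
EMBEDDED_DIGEST = hashlib.sha256(b"secret-kill-phrase").hexdigest()

def kill_switch_active(post_titles):
    """Return True if any fetched post title hashes to the embedded digest."""
    return any(
        hashlib.sha256(title.encode("utf-8")).hexdigest() == EMBEDDED_DIGEST
        for title in post_titles
    )

print(kill_switch_active(["hello world", "secret-kill-phrase"]))  # True
print(kill_switch_active(["unrelated post"]))                     # False
```

Note the asymmetry this buys: checking a candidate phrase against the digest is trivial, but inverting the digest to find the phrase is not, which is exactly why the defenders couldn't preempt such a switch the way they did with the hardcoded domain.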
sudo ufw deny out to any port 445
I store all my critical files in an offline environment (sandbox), so the only files that could be encrypted are replaceable (unimportant) and disposable. For example, I wouldn't cry if my CV got encrypted, because a copy of it exists in about 50 locations, both offline and online.
Unfortunately I need Windows because my colleagues like to send Windows-only .DOCX files which work best in MS Word, and I don't have a Google account, so I can't open them in Docs. This is a conscious decision to permaban Google from my life, but Windows is staying.
This story, if true, details a person who profiled this malware and correctly logged the network requests it was making and then correctly identified a fundamental vulnerability in the software. This is not an accident at all - it is rather a profile in supreme competence. We should recognize it as such.
Can someone please explain this? I have no idea what was said there.
Found it www.iuqerfsodp9ifjaposdfjhgosurijfaewrwergwea.com
I also wonder if now ransomware developers will leave red-herrings in the code where if the wrong domain is registered, it will do something more destructive.
It's like knowing which wire to cut when you're defusing a bomb!
Apple is extremely guilty of normalizing the frequent entry of passwords. I recently reinstalled a Mac and an iPad, and for each device I must've entered my Apple ID password seven or eight times. In the normal course of getting things done I then enter either this, or my local login password, many times a week.
When your password is twenty characters of line noise or an extended passphrase, this is thoroughly irksome, especially on virtual keyboards like the iPad's. It is no surprise to me that less security-conscious folks, faced with this onslaught of excessive credential demands, choose shorter (i.e., easily cracked) passwords; and no surprise that everyone becomes less suspicious of the sham password dialog.
So when reading of yet another photographic burglary from a cracked iCloud account, we should always lay part of the blame at Apple's feet, for systematically normalizing the frequent entry of credentials.
That is not the end of Apple's social-engineering-enablement shame. Another glaring blunder is in Apple Mail, where the "To:" field is shown with your real name, even when the sender did not include it. Humans respond positively to the use of their given name, so this heightens the verisimilitude of scam messages.
To be honest, I wouldn't mind if all apps were sandboxed, with the exception of a couple of "super-user" ones; I don't really care if my machine's root account is secure if all my horses sitting in $HOME are let loose on the net.
Or they could be domains for checking if you're in a sandbox, like WannaCrypt's. Why wouldn't you just use 20 well-known domains otherwise?
HandBrake-1.0.7.dmg was replaced by another unknown malicious file that DOES NOT match the SHA1 / SHA256 hashes on our website or on our Github Wiki which mirrors these: https://github.com/HandBrake/HandBrake/wiki/Checksums
The Affected Download mirror (download.handbrake.fr) has been shutdown for investigation.
The Primary Download Mirror and website were unaffected.
Downloads via the application's built-in updater in 1.0 and later are unaffected. These are verified by a DSA signature and will not install if they don't pass.
Downloads via the application's built-in updater in 0.10.5 and earlier did not have verification, so you should check your system if you used these older releases.
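Checking a download against the published checksums is easy to script. Here's a minimal Python sketch (the file path and digest you pass in are whatever the advisory and the GitHub wiki give you; nothing here is HandBrake-specific):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 16):
    """Stream the file in chunks so a large .dmg never has to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path, expected_hex):
    """Compare the local file's SHA-256 against a published checksum."""
    return sha256_of(path) == expected_hex.lower()
```

Usage would be something like `verify("HandBrake-1.0.7.dmg", "<digest from the wiki>")`; a False result means you're holding the trojaned image.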
I wonder if that's speculative - or if offline bruteforcing them works often enough for it to be worthwhile for the malware authors?
>curl -sL https://script.google.com/macros/s/AKfycbyd5AcbAnWi2Yn0xhFRb...
What the hell, Google? Your domain name is one of the most trusted on the internet and yet you're hosting random user submitted scripts on there? What happened to googleusercontent.com?
$ brew cask install handbrake
==> Satisfying dependencies
complete
==> Downloading https://download.handbrake.fr/handbrake/releases/1.0.7/HandBrake-1.0.7.dmg
Already downloaded: /Users/wolf/Library/Caches/Homebrew/Cask/handbrake--1.0.7.dmg
==> Verifying checksum for Cask handbrake
==> Installing Cask handbrake
==> Moving App 'HandBrake.app' to '/Applications/HandBrake.app'.
handbrake was successfully installed!
Doesn't fix the root cause, but could have caught it much sooner.
> Specify WiFi adapter (use ifconfig to determine): wlp4s0
> [==================================================] 100% 0s left
> Found no signals, are you sure wlp4s0 supports monitor mode?
Do I need to run airmon first?
I have been programming professionally in Common Lisp (off and on) since the 1980s but there is something equally magical about Smalltalk. I have often thought that Smalltalk could be the language I use after I retire (I am in my 60s and I will probably stop working in about ten years).
That's what Lisp systems do too. Program elements like classes, functions, methods, symbols, ... are first class objects. With something like CLOS you have a similar level of object-oriented meta-programming capabilities.
Many Lisp systems additionally offer the ability to execute Lisp data using a Lisp interpreter, and Lisp has a simple data representation for Lisp programs: Lisp data.
Smalltalk OTOH uses text as source code and usually a compiler to byte-code.
> because Lisp source code is expressed in the same form as running Lisp code
Only if you use a Lisp interpreter. Otherwise the running Lisp code might be machine code or some byte code.
> Smalltalk goes one further than Lisp: its not that Smalltalks source code has no syntax so much as Smalltalk has no source code.
That's a misconception. Smalltalk has source code. As text. It's just typically managed by the integrated development environment.
It's actually Lisp which goes further than Smalltalk, because Lisp has source as data and can use that in Lisp interpreters directly for execution.
I'm not sure this is true. Surely any programming language that lacks macros would be more powerful with them.
As a thought experiment, imagine Lisp without macros. It's not hard; after all, "The Little Schemer" covers metacircular interpretation without ever mentioning macros. So what's going on? Apparently we don't need macros! But, we could add macros to a Lisp by reifying them in the metacircular interpreter. There's actually a feature in plain sight which makes this possible, and it's the humble (quote) special form. This is what makes code and data intermix so cleanly in Lisp.
This is why languages like Julia and Monte are not shy about using "homoiconic" to describe their language design; a standard library compiler is just as good as a compiler in the core semantics, as long as it's easy to use and meshes well with the rest of the language.
What most of these articles seem to miss is that Java's designers were themselves expert Lispers and Smalltalkers, that they most certainly realized all of this, and that Java's success is a consequence of them understanding exactly why not to repeat the same design. Design doesn't live in a vacuum. Design is shaping a product to fit not just some platonic ideal, but reality, with all its annoying constraints.
To understand why Lispers and Smalltalkers designed Java the way they did, I recommend watching James Gosling's talk, How The JVM Spec Came To Be, and the first 20 minutes or so of Brian Goetz's talk, Java: Past, Present, and Future.
This plays havoc with your ability to do static analysis, and languages that hinder static analysis should not be used in real-world systems. If the earliest you find out about errors is in a running system, it's far too late and you are hosed.
This is why the Lisp and Smalltalk Evangelism Strikeforces have been met with decades of failure, while the Rust Evangelism Strikeforce is getting on with a massive project of digital tikkun olam.
Here's my suggestion. When you receive the document, read it and see if there's a noncompete clause. If so, you're going to want to send a redlined version back to them, changing the noncompete duration from "during and for 2 years following employment at the company" (or whatever they gave you) to "for the duration of employment at the company." By doing so, you show your willingness not to do any kind of work for a competitor while employed, while very clearly pointing out that you do have the right to get a new job. It may be important not to offend the person who wrote up the agreement and included something so ridiculous, so the minor nature of your modification will allow them to save face.
In the end, most employers won't bother to argue the second point, and the ones that do are probably shadily taking advantage of you in other ways.
Additional note: in California and several other states, these clauses are not legally enforceable anyway, and you should mention that when you give them the "fixed" agreement.
The only reason big companies offer health insurance is that it limits employees' freedom. It would be easy for the Fortune 100 or 200 to agree in unison to eliminate health care and provide higher salaries. It would make the companies more competitive globally and free them from a whole lot of other nonsense, but they don't drop healthcare. The reason they don't is that healthcare and pre-existing conditions limit employee options and suppress wages. Also, if there were universal healthcare, it would be easier to start small companies and attract employees; those small businesses would be competing for employees against big companies on equal footing.
Healthcare is a racket limiting not just healthcare but freedom.
Here's a relevant quote (in which the author is actually quoting Aaron McNay):
"Both employers and employees would like to be able to train the employees if the cost of doing so is less than the gains in productivity. However, there is a potential collective action problem here. What happens if the employer provides the training, but the employee then moves onto another job? The employer bears the burden of the training costs, but does not receive any of the benefits. As a result, the employer does not provide the training, and a mutually beneficial trade is not made.
By preventing the employee from being able to move, a non-compete agreement eliminates the collective action problem."
I'm not saying that non-competes are necessarily good, or necessarily bad. It depends on the circumstances. But I do think that a lot of other commenters in this thread do think that non-competes are necessarily bad, and I think that's incorrect.
I felt morally OK with the situation...
Only, my contract did have a noncompete. But then, this is Sweden, and noncompete clauses are almost never enforceable under Swedish law. An employer can't stop an employee from taking another position. For the clause to be valid, the employer must pay the same salary the new position would have had while the employee rides out the non-work period, and no one does that.
A strongly worded letter from my lawyer sorted it. Never heard from them again.
I'm surprised that this isn't law. I guess financial companies care about their employees more and/or their employees are more astute about contracts.
Companies shouldn't be allowed to prevent their ex-employees from earning a living. If it's that important for them to prevent the transfer of their proprietary information, they should be happy to pay for it.
- When you quit, tell your now-former employer that you're quitting to pursue something other than your established industry. Your (made-up) lifelong dream of starting your own microbrew brand, macramé supply business, winery, whatever. Or looking after a sick relative, or going back to school full time, etc.
- Cut off ties with all your former coworkers, at least for the noncompete duration. If you bump into them at the grocery store and you can't get away from them, tell them about how wonderful the beer business is or how your relative is doing.
- Don't put on Facebook or Linkedin that you work for the new employer.
- For the duration of the non-compete, only those closest to you who critically need to know (your spouse, etc.) will know about your new employer.
- Avoid publicly-facing, industry-related activities that tie you to your new employer for the duration of the noncompete: giving speeches, presentations, writing articles, etc.
None of these are foolproof, but they are all common sense. Remember the Monty Python sketch "How Not to Be Seen".
Even in CA, trade secrets have an exception.
I think that the US as a whole should follow California in outlawing non-competes. It definitely has been shown to be workable.
- Sign the minimum of documents
- Don't provide full, personally-identifying information unless it's absolutely required
- Negotiate terms of boilerplate agreements if they're too unreasonable / don't apply
- Don't sign a binding arbitration agreement; BA is a worthless/corrupt system that nearly always favors the employer.
- For CA-headquartered companies, refuse to sign NCAs because they create legal liabilities (i.e., could they involuntarily transfer an employee to another state and then fire them to make an NCA apply?)
If it is that important to the company, the employee should be remunerated.
Needless to say I am not a web developer anymore.
Just because someone gives you a piece of paper to sign, doesn't mean you have to. Wait until it's unavoidable.
Stuff like not being able to take current customers to a competing business within a mile for a period of 1 year is considered reasonable.
But this is interesting, I work in an area of the company that isn't really part of their core competency. Meaning that the kinds of firms that would hire me are literally in another sector and wouldn't be considered competitors.
So this fact, that normally manifests as complaints that "management has no idea what we do here" and/or that they "have no business claiming they're in this business," ends up helping me out.
Of course, in developed countries, general well-being is a better predictor than economic growth of whether the incumbent gets voted in, because well-being is far more tangible to the common man than the abstract concept of economic growth. OTOH, in a developing country, I'd argue growth is a better indicator of the probability that the incumbent will win, since developing countries' growth rates are in general far higher than developed countries', and since these high growth rates produce visible, tangible changes: bridges get built, schools are opened, and people get jobs.
Perhaps the new thesis of the article should be that developing nations should focus on economic growth, while developed ones should focus on the happiness of their people.
I am pretty sure that 10 years ago Greece would have featured as one of the happiest nations, and Venezuela not as the miserable nation it is today.
Contrary to all this, India WAS super miserable 10 years ago and is much happier today, though comparatively it might not be as happy as Italy.
I'm not sure I buy all of it (the U.S. wasn't really a world power prior to the world wars, so they'd be dismissing that and other gains if it was their whole premise), but like this article, something to think about.
Maybe poor people wanting to make more money need to rethink their strategies?
If that is true, how much suffering of individuals are we able to tolerate to keep the average of the whole up a few notches? (re: all the issues with utilitarianism)
Policymakers are probably very concerned with happiness. Their own ;-)
An attempt at a more accurate statement would be that this emulates the use-after-move checking that the Rust compiler does. The problem is that it doesn't do that either: it statically prevents copies but doesn't prevent use-after-move.
I think the accurate way to describe this pattern would be that it disallows copies and forces you to annotate moves with the move keyword. This is somewhat similar to what Rust does, in that non-Copy types are moved by default. The difference is that you don't have to write "std::move" in Rust: the compiler just infers the right thing to do.
It's a little hard to map this onto Rust semantics to begin with, since fundamentally all this is is not having a copy constructor, which is a concept that doesn't exist in Rust in the first place.
Looks like it can be done through Management Engine, which has access to everything apparently.
Only success so far is unlocking BCLK, but the overclock is small and unstable that way.
Another roadblock was the read only lock, which can fortunately be bypassed on POST on xx67/77 chipsets.
I feel like there would be legal problems though...
But the root of the problem is that computer security still does not get proper awareness and attention. This starts with how we write software, but from a societal point of view, it is mostly about how we deal with computer systems. Computer systems are not toasters that you can replace easily. Often they are part of larger installations and difficult to replace as a component. We need to treat them like aspects of traffic or workplace safety, or hygiene. There should be a clear concept (I sincerely hope we don't require overly strict state regulation) that, like any professional tool, a computer system has to be reviewed at regular intervals for being fit for its intended purpose, and security maintenance should be done as naturally as mechanical or electrical checks.
So, for any computer-powered (and networked) device, this would mean that either there is a maintenance contract in place (which in the end would mean the provider has a contract with Microsoft, if Windows is used) or, like with any other device, the machine is no longer considered fit for professional use.
Until people start losing personal money they won't bother educating themselves. They see these "hacking games" as, well, games.
> The money they made from these customers hasnt expired; neither has their responsibility to fix defects.
This is wrong. We don't ask for mandatory lifetime guarantees in any other industry I'm aware of, and perhaps more importantly, much of what is done in this field wouldn't be possible if we did (could you imagine having to continue to maintain an IE5 webpage for another twenty years?).
It goes on:
> In its defense, Microsoft probably could point out that its operating systems have come a long way in security since Windows XP, and it has spent a lot of money updating old software, even above industry norms. However, industry norms are lousy to horrible, and it is reasonable to expect a company with a dominant market position, that made so much money selling software that runs critical infrastructure, to do more.
If I buy a toaster it comes with a one year warranty, maybe. A nice car might come with a five year or two hundred thousand mile limited warranty. Microsoft sold a product at a fraction of that cost and supported it, unconditionally, for 8 years. 8. And they supported it for five more after that with appropriate arrangements with enterprises (and after a select few enterprises who somehow concluded that paying some engineering salaries at Microsoft for dedicated support was cheaper than upgrading). That's a 13+ year lifetime of support on what was an $80 a license product. Industry norms can only be "horrible" insofar as there's only been a serious industry for 30 years... And XP was supported for half of it (man, I suddenly feel old). My point is that there is no world in which the "cash-strapped National Health Service" is not the primary entity which was grossly negligent in its maintenance of critical infrastructure.
Stepping back and looking at the article as a whole and less at specific inflammatory parts, it is, well, filled with inflammatory parts. It starts as a thin attack piece on Microsoft for being slow to provide free support for a 16 year old product, offhandedly references IoT for some added scare factor, then starts calling for action (from both corporate and government actors) without any serious discussion on either the merits of the proposed actions or the impacts taking them would have on those organizations or the implications that they would create for future actors.
But hey, if you're a fan of Bruce Schneier's more recent musings, at least you'll enjoy the conclusion: That we must legislate software, and fast.
Users have been fooled: "turn it off and on" is a reasonable, well-known troubleshooting step, but nobody blames the software vendor. If I'm on the phone with a company and they tell me to turn it off and on, I can't even point out "so you sent me something defective?" This is normal, folks.
Maybe we need to teach programming younger and younger -- and it'll take two or three generations to become common enough that management will actually understand what I'm doing. Or maybe we need awareness campaigns to keep users from putting up with shit experiences!
Or maybe someone has some other idea, but the major barrier exists: We don't know how to program computers, and saying that out loud makes a lot of people with the job-title (or description) of programmer clam right up.
Security of an object is something you can only evaluate the day it turns around and snaps at you.
Now the default American solution for this would be a "Late-Adopter" plugin, allowing the rich to install "additional" gated-community security - and letting the mob become one huge botnet, held back by aggressive campaigns of remotely bricking whole device classes should they become a threat to the "devices" in the better neighbourhoods.
Unfortunately, the rest of the world is either too poor or unwilling to follow this model, which means we are going to see a regulated, security-TÜV-checked model in Europe and Japan, state-regulated devices in China & Russia - and a wild west everywhere else.
Anyway, the money equation, I think, is quite simple:
Why buy Windows, when you can use Linux and spend the money on backup infrastructure?
It all started with poor ethics. Every single version of Microsoft Windows has intentionally left backdoors for the NSA, and some hackers knew how to use them. This is like paying money to buy a house while the previous owner keeps backup keys to watch you. And then others get the backup keys and kick you out of your own home unless you pay them.
This is such a shame for Microsoft, the NSA, and the American government. People trusted Microsoft products and purchased them; in return, Microsoft wanted more than money; they wanted to spy on them for their ideological goals.
Anyone who wants to surf can easily do so on their personal smartphone with no risk to corporate systems. No one has ever been able to put together a coherent rebuttal to my proposal, yet still the PCs remain connected and still people click things they shouldn't...
Take, for example, the Fappening. This was possible because of iCloud. iCloud is only necessary - like Dropbox and other services like it - because OS vendors decided they didn't want people to have control over their content using their local computers - that it was 'easier' to provide servers dedicated to the purpose than to actually add dedicated file sharing to individuals' computers.
(There are no really good reasons why your modern PC can't serve its own content - especially in this era of bandwidth and monster CPU power. We hosted the 90's Internet on far less powerful computers than your average mobile phone, with less bandwidth too. The point is the protocols.)
So I honestly think that OS vendors need to be forced back behind the wheel to make our computers better, and the "network is the computer" business model needs to die. This was always a terrible idea, formed on the basis of an accountant's wet dream, and should be forgotten as soon as possible. Instead, let's build better computers, simple as that. Computers that are actually safe to use because they've been designed that way from the get-go. The cloud must die.
A lot of the problems are hard because of the architecture, though; they would not necessarily be hard to solve on more conventional architectures.
The article quickly goes over it, but for those who still wanna know more about the architecture, the TIS-100 is composed of nodes that can store a small number of lines of instruction code, have a working register, and have 4 I/O ports, UP, DOWN, LEFT and RIGHT. If asking for input, they will block until input is received from the specified adjacent node, and if passing output, they will block until the specified adjacent node asks for input. There are also memory nodes, introduced later in the game, to store more data.
These nodes are on a grid. Some of them are disabled, and the memory nodes' placement differs from program/puzzle to program/puzzle. Thus, careful selection of nodes and I/O ports is required for completion. I don't know if anything similar exists in actual hardware.
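Something similar does exist in software: the blocking handshake between nodes is essentially CSP-style rendezvous. Here's a toy Python sketch of two "nodes" chained together (my own illustration, not the game's implementation; size-1 queues only approximate the game's strict write/read handshake):

```python
import threading
import queue

# Each "port" is a size-1 queue, so a node writing DOWN mostly blocks
# until the node below it has consumed from its UP side.
def make_port():
    return queue.Queue(maxsize=1)

def producer(down, values):
    for v in values:
        down.put(v)      # blocks while the neighbour is still busy
    down.put(None)       # sentinel: stream finished

def doubler(up, down):
    # A "node" that reads from its UP port and writes 2*value DOWN.
    while (v := up.get()) is not None:
        down.put(v * 2)
    down.put(None)

a_to_b, b_to_out = make_port(), make_port()
threading.Thread(target=producer, args=(a_to_b, [1, 2, 3])).start()
threading.Thread(target=doubler, args=(a_to_b, b_to_out)).start()

out = []
while (v := b_to_out.get()) is not None:
    out.append(v)
print(out)  # [2, 4, 6]
```

The deadlocks you can create by wiring such ports in a cycle are exactly the "blocked node" states the game's debugger shows you.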
There's also a built-in "debugger" which simply allows you to run the program step by step and view all values, blocked nodes, and current instructions, which really helps, and possibly teaches players how to generally debug actual machine code. The programs run on a set of unit tests, and you can see which ones fail and why.
In classic Zachtronics fashion, there are graphs explaining your performance at the end, in terms of time and space. Users not familiar with actual hardware-architecture principles probably won't be able to figure out on their own how to get the best time, because most problems require pipeline-like instruction patterns, due to the blocking nature of the nodes. So while it teaches tricks and fundamentals, I don't think it teaches more advanced and important stuff. And that's not a bad thing; it's a great game.
Would recommend to anyone.
It is perhaps the game of the genre.
For you parents out there...what has been your experience with/advice for teaching your kids a programming language? It's definitely something I want my kids to get comfortable/familiar with early but I get concerned about over-exposing them to too much "screen time" at a young age and the deleterious effects that might have (even ones we don't know about yet).
Don't have kids right now, or any on the way, but that's something on the horizon for me so I've been thinking about it.
There are some games that explore that direction (e.g. space engineers has some kind of programmable block), but no successful ones in the spirit of the original 0x10c vision (which was pretty vague and maybe the hype and high expectations killed it). I still think that one could build a great game around the main idea, but probably it is hard to balance the game mechanics between "real" programming and actual game play without alienating users that want to get into programming and actual programmers that want to play a game.
Similar posts pop up occasionally here, such as:
A couple more lists of programming games:
Live map: https://intel.malwaretech.com/WannaCrypt.html
Relevant MS security bulletin: https://technet.microsoft.com/en-us/library/security/ms17-01...
Edit: Analysis from Kaspersky Lab: https://securelist.com/blog/incidents/78351/wannacry-ransomw...
> "The malware was circulated by email; targets were sent an encrypted, compressed file that, once loaded, allowed the ransomware to infiltrate its targets."
It sounds like the basic (?) security practices recommended by professionals - keep systems up-to-date, pay attention to whether an email is suspicious - would have covered your network. Of course, as @mhogomchunu points out in his comment - is this the sort of thing where only one weak link is needed?
Still. Maybe this will help the proponents of keeping government systems updated? And/or, maybe this will prompt companies like MS to roll out security-only updates, to make it easier for sysadmins to keep their systems up-to-date...?
(presumably, a reason why these systems weren't updated is due to functionality concerns with updates...?)
This shows that no agency is immune from leaks and when these tools fall into the wrong hands the results are truly catastrophic.
All he did to get infected was plug his laptop into the network at work (University of Dar es Salaam).
The laptop is next to me and my task this night is to try to remove this thing.
How many systems that is is debatable, but by at least one benchmark (https://www.netmarketshare.com/operating-system-market-share...) we're looking at 7% of the desktop PC market that could be exposed with no patch available.
The NSA should have responsibly disclosed the vulnerabilities they had been sitting on as soon as they were discovered.
That protects national security - not this.
It'd be an interesting project to try and track where these funds go and where they came from.
https://blockchain.info/address/13AM4VW2dhxYgXeQepoHkHSQuy6N... - 11
https://blockchain.info/address/115p7UMMngoj1pMvkpHijcRdfJNX... - 4
https://blockchain.info/address/12t9YDPgwueZ9NyMgw519p7AA8is... - 6
https://blockchain.info/address/1QAc9S5EmycqjzzWDc1yiWzr9jJL... - 11
I think collateral damage like that is way underrated by politicians all around the globe that call for their respective intelligence agencies to build up offensive capabilities to be able to conduct cyber warfare and whatnot.
Bank transactions, patient medical data, stored passwords/keys/CA info, contacts, emails, configuration files, registry dumps for firewall rules etc etc. (I'm not that creative so there's probably a lot more that's been exfiltrated).
Pretty hellish knowing they'd let that quietly sit there, in the name of espionage. I'm not sure the benefits outweigh the damage they're doing, without even mentioning the chilling effect and lack of confidence this instills in IT everywhere.
I mean, I get that it is all to help stop the bad guys, but if you are keeping cyber weapons like this, you should be required to keep them as secure and locked down as possible if you don't follow responsible disclosure.
Just like how a cop would keep their weapon on them, instead of setting it down on the table while eating lunch.
It's perhaps a little more difficult, as you'd need a vulnerability to keep spreading the inoculation. Arguably, though, you release the virus, let it spread, and then trigger the inoculation using a mechanism like calling out to a webserver, just as the kill switch worked here.
I give a pass to individuals (bandwidth for updates can be expensive, regular users don't know about patch Tuesday etc), but enterprise scale deployment should have IT for this, and IT should have been well aware of this kind of thing happening.
Ransomware was a play for big Bitcoin holders to unwind large positions at the highs without too much downward pressure on the Bitcoin market.
After further investigation, it appears this attack could be related to this: http://www.cvedetails.com/cve/CVE-2015-1875/
The primary issue at the heart of things like this, beyond the backdoors and 0-days is this: bad IT.
That being said, though, bad IT is far too often the fault of upper management, and not the IT people themselves. After years of sysadmining, I've seen the inside of hundreds of companies, from Fortune 500 oil to medium-sized law firms. You know what they have all been doing over the years? Cutting costs by cutting IT. Except... they completely fail to consider long-term consequences, which end up costing more.
I blame things like this on two main groups: boards of directors, and company executives. Far too often I ran into a situation where a company didn't even have a CIO or a CTO, and you had some senior one-man miracle show drowning in technical debt, reporting to a CEO or CFO and getting nowhere, and therefore getting no support, no budget, no personnel, etc. I've seen exceptions too, but they are far too rare. If it's not technical debt that's drowning the company, it tends to be politics. The bottom line is forward-thinking IT personnel don't get heard, and inevitably companies hire people or an MSP with all the proprietary Cisco, Microsoft, Oracle, etc. bullshit certs that make the C's feel better, but don't actually produce the wanted results. They inevitably end up providing an inferior product with inferior service at a short-term cost just as high as doing it right the first time, and a much higher long-term cost.
If I could say one thing that could help prevent issues like this, besides my standard whinging on about FOSS and the four freedoms and such, it is that we need better CTOs and CIOs to advocate on behalf of IT departments, and I think senior sysadmins who feel they have hit a ceiling should consider going for their MBAs and transitioning to those titles.
Now, onto the NSA angle of the story. Well... all I can say is I told ya so, with an extra note that HN in the past few years has been surprisingly dismissive of FOSS proponents who have been warning about these things.
First they made fun of us for saying everything was being spied on, and then Snowden happened (often followed by bullshit like "are you surprised?" or "what do you have to hide?").
Then we warned about proprietary systems, and then NSA/CIA tool leaks happened. (often followed by things like "but its for foreign collection only" and "but the NSA contributes to SElinux")
Y'all aren't listening until after the fact, and that's not going to fix anything.
In the US, you have to manage your own healthcare. Get every result as a hard copy or on disk (in the case of MRI etc) and save it yourself. And back it up. That way you're prepared.
Analysis here: http://blog.talosintelligence.com/2017/05/wannacry.html
I have set up my mom to use a live Debian CD through VMware, but I would also like to disable networking through Windows Edge and Explorer. I don't know how to do this, however.
Myself, I follow a similar scheme, but using a Linux virtual guest and host. Is it easy to disable all networking except for apt/yum and vmware/kvm?
Lastly, does anyone know what it costs for a personal subscription to grsecurity?
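Not grsecurity-specific, but one common way to approximate "no networking except package updates" on a Linux host is a default-deny OUTPUT firewall with narrow exceptions. A rough iptables sketch (assumptions: apt runs as root, DNS is needed for mirrors; the uid/port choices are illustrative and your VM networking may need its own rule):

```shell
# Default: drop all outbound traffic.
iptables -P OUTPUT DROP

# Keep loopback working (local services, VM host-only networking often need it).
iptables -A OUTPUT -o lo -j ACCEPT

# Allow DNS lookups so 'apt update' can resolve mirror hostnames.
iptables -A OUTPUT -p udp --dport 53 -j ACCEPT

# apt runs as root (uid 0); allow root's HTTP/HTTPS for package downloads.
iptables -A OUTPUT -m owner --uid-owner 0 -p tcp -m multiport --dports 80,443 -j ACCEPT
```

The `-m owner` match only works in the OUTPUT chain; if your VM's networking runs as a distinct user (as libvirt/qemu setups often do), you can whitelist that uid the same way.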
Which Microsoft software should be updated now to protect against this particular attack? Windows? Windows on end-user machines? The servers?
Could someone share a "What should I do now to protect myself" guide, please?
1) a railway dispatcher just tweeted that IT systems will be shut down (https://twitter.com/lokfuehrer_tim/status/863139642488614912)
2) a journalist tweeted that an information display of DB fell victim to ransomware (https://twitter.com/Nick_Lange_/status/863132237822394369).
I guess that #1 and #2 are related, though.
Disclaimer: I hope not, because it's like any other military tech being leaked and used, but I'm not sold either way.
Just in case there are any journalists reading - never use the term "perfect storm".
Lazy writing at NYTimes; what on earth does this attack have to do with the one at hand? It's not broadly the same type of attack, nor the same scale, nor the same outcome.
Guys, it may surprise you, but some of this kit predates Rust :)
I feel like the words "urgent" and "forced" might both be a bit shy of absolutely true here?
There is not a single proof or reason to believe that the second leak was not a fake (while the Vault 7 leak looks more legitimate).
There are reasons to think that the same people are behind both the second leak and the malware, and that the malware, which is said to be based on "a leaked NSA exploit," was part of a single plan.
It is not that hard to guess who is behind the internet bullying.
The US military and intelligence communities focused hard on cyber offense, rather than improving the defensive standards and technologies practiced among allies. Because of this, several allies have important systems compromised by (essentially) US-engineered malware.
Well, at least DARPA is sort of on it: http://archive.darpa.mil/cybergrandchallenge/
(There's also work stemming from the HoTT body of work on verified systems, as I understand it. But that doesn't have a sexy webpage.)
We are seeing bullet holes from what seem to have been cyber warfare between the former cold war foes.
"He adds that the fear is that the ransonware cannot be broken and thus data and files infected are either lost or that the only way to get them back would be to pay the ransom, which would involve giving money to criminals."
The new terrorism.
Here is the BBC news update about the NHS Cyber attack:
"NHS trusts 'ran outdated software'
Some who have followed the issue of NHS cyber security are sharing a report from the IT news site Silicon, which reported last December that NHS trusts had been running outdated Windows XP software.
The website says that Microsoft officially ended support for Windows XP back in April 2014, meaning it was no longer fixing vulnerabilities in the system - except for clients that paid for an extended support deal.
The UK government initially paid Microsoft £5.5 million to keep providing security support - but the website adds that this deal ended in May 2015."
On the other hand, I remember reading someone who said that one property of brilliant ideas is that when other people hear them, they think "well I could have thought of that."
Went straight from SF to Contemporary in a decade.
Stallman would make more headway with his politics if he had a little less contempt for his audience.
Python has a suite of facilities exactly for this very kind of problem.
Literally, the solution is along the lines of "os.path.abspath(filename).startswith(os.path.abspath(dlfolder))"
This returns true when the filename is within the download folder, with one caveat: a bare prefix check can be fooled by sibling directories ("/downloads-evil" also starts with "/downloads"), so compare against the folder path with a trailing separator, or use os.path.commonpath.
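A slightly safer variant of that check, sketched with os.path.commonpath so that a sibling directory sharing the prefix isn't accepted (illustrative helper, not from the original comment):

```python
import os

def in_download_folder(filename: str, dlfolder: str) -> bool:
    """Return True only if filename resolves to a path inside dlfolder.

    Uses os.path.commonpath rather than a raw startswith check, which
    would wrongly accept sibling directories like '/downloads-evil'
    when dlfolder is '/downloads'.
    """
    filename = os.path.realpath(filename)   # also resolves symlinks
    dlfolder = os.path.realpath(dlfolder)
    return os.path.commonpath([filename, dlfolder]) == dlfolder
```

realpath also collapses `..` segments and symlinks, so a crafted name like `dl/../../etc/passwd` is rejected too.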
On day 1 every position from CEO, CFO, to mail delivery messenger is filled with one or two names: the founder(s). As the company grows, you hire people and start delegating the work so that they can fill the positions on the organization chart.
This kind of knowledge "beforehand" is a valuable startup insight that is not disseminated well enough IMO.
Otherwise it's pretty good. I actually use both Spyder and PTVS and am unhappy with both. Bad doc rendering in PTVS, no git in Spyder.
That is, it's not even a final decision of a court.
So while interesting, it's incredibly early in the process. The same court could issue a ruling going the exact opposite way after trial.
As someone else wrote, basically the court ruled that the plaintiff alleged enough facts that, if those facts were true, would give rise to an enforceable contract.
I.e., they held that someone wrote enough crap down that, if the crap is true, the other guy may have a problem.
They didn't actually determine whether any of the crap is true or not.
(In a motion to dismiss, the plaintiff's allegations are all taken as true. This is essentially a motion that says "even if everything the plaintiff says is right, I should still win." If you look, this is why the court specifically mentions that a bunch of the arguments the defendant makes would be more appropriate for summary judgment.)
Alternatively, Hancom could pay Artifex a licensing fee. Artifex allows developers of commercial or otherwise closed-source software to forego the strict open-source terms of the GNU GPL if they're willing to pay for it.
This obligation has been termed "reciprocity," and it lies at the heart of many open source business models.
The more important issue here is reciprocity, not whether an open source license should be considered to be a contract.
AFAIK, the reciprocity provision of any version of the GPL hasn't been tested in any meaningful way within the US. In particular, the specific use cases that trigger reciprocity remain cloudy at best in my mind.
Some companies claim that merely linking to a GPLed library is sufficient to trigger reciprocity. FSF published the LGPL specifically to address this point.
So I believe a ruling on reciprocity would be ground breaking.
The GNU GPL was written on the basis that if someone does not accept its terms, then that without any other license from the copyright holder, redistribution puts that person in violation of copyright law.
Suing for damages on the basis of a breach of copyright law clearly does not require any contract.
So this is more about a technicality of the legal process in this particular case, rather than anything about whether copyleft is legally enforceable or not in general.
Specifically, because the motion denial was based on the defendant's own admission being deemed to be the agreement of a contract, this says nothing about the general enforceability of the GPL (future defendants could simply avoid making such an admission).
Further, since the ruling was in response to a specific motion, it only concerns the claims made in that motion: about whether a contract exists in this particular case. It says nothing about the "copyright violation if you don't accept the license" mechanism of copyleft.
Finally, the article does not provide any evidence that there has been any ruling that determined that the GPL is an enforceable legal contract, contrary to its title. The ruling as quoted just says that the defendant, by its own admission, did accept to enter in to the GPL-defined contract.
Any judge in the country or anywhere else would laugh a GPL challenge right out of court. And any IP lawyer reading it would tell their client that that's what's going to happen if they try to challenge it. That's why it's never been fully tested in court... no need.
... so they admitted to the court that they willfully used the software without a license to do so?
Defendant (Hancom) was trying to say that because they didn't sign anything they didn't have a contract.
But Hancom "represented publicly that its use of Ghostscript was licensed under the GNU GPL"
Therefore, the Judge ruled that in their own words they publicly acknowledged the contract.
Also, the article doesn't say much about how that lawsuit came to be. Did Artifex approach Hancom beforehand to notify them about the license infringement or just directly sue? I guess in this particular case, Hancom knew what they were doing, but I can imagine some (smaller) companies not being fully aware of open source license specifics and unknowingly running into a lawsuit.
Ask HN: What if the vendor had structured their product in a way that GhostScript is its own stand-alone app. Would they still be obligated to release their entire code, or just the portion that uses GhostScript?
It is simple to work around license issues with your project. You just have to put in the work. Know that your design may have to factor in extra time, because you can't use lib XYZ unless you're willing to write your own library to do the same thing. If using lib XYZ will save a bunch of time, then know that you will have to adhere to lib XYZ's license. Maybe writing a wrapper application that you open-source, which your closed-source application interfaces with, might be a design consideration.
In the end, it's your project, your call. Just know that when you make a decision, you weigh the pros and cons of going forth with that decision.
In one enforcement, the defendant defaulted and the SFLC ended up with a pile of violating televisions!
The enforceability of the GPL is in no way news. That anyone would continue to try to violate it is the real WTF.
It's fairly clear that they will win the case in one fashion or another. I am predicting that the case will quickly be settled out of court for a lump sum plus a running licensing fee. You have a public admission from the defendant that they integrated the plaintiff's Ghostscript software into their own without either: 1) making the resulting Hancom office suite open source, or 2) paying Artifex a licensing fee for the software.
The case against Hancom was solid under copyright infringement, and now has the added sting of breach of contract.
On a procedural level, understand that this is a district court opinion and is not binding on any other court. Of course, if other courts find the arguments persuasive, they can adopt the reasoning. But no court has to adopt the reasoning in this opinion.
On a substantive level, it's important to look at the arguments the court is addressing and how they are addressed:
1) Did the plaintiff adequately allege a breach of contract claim?
We're at the motion to dismiss phase here and the court is only looking at plaintiff's complaint and accepting all of the allegations as true.
There are essentially only 2 arguments the court addresses: A) Was there a contract here at all?; and B) Did the plaintiff adequately allege a recognizable harm?
Understand that in a complaint for breach of contract, a plaintiff has to allege certain things: (i) the existence of a contract; (ii) plaintiff performed or was excused from performance; (iii) defendant's breach; (iv) damages. So, the court is addressing (i) and (iv), which I refer to as (A) and (B) above.
As to (A), the argument the defendant appears to have made is that an open source license is not enforceable because of a lack of "mutual assent." In other words, as with a EULA or shrink-wrap license, some argue that merely using software subject to an open source license doesn't demonstrate that you agreed to the terms of that license.
The court, without any real analysis, says that by alleging the existence of an open source license and using the source code, that is sufficient to allege the existence of a contract. The court cites as precedent that alleging the existence of a shrink-wrap license has been held as sufficient to allege the existence of a contract.
But the key word here is "allege." As the case proceeds, the defendant is free to develop evidence to show that there was no agreement between the parties as to the terms of a license. So, very little definitive was actually decided at this stage. All that was decided is that alleging that an open source license existed is not legally deficient per se to allege the existence of a contract.
As to (B), defendant apparently argued that plaintiff suffered no recognizable harm from defendant's actions. The court held that defendant deprived plaintiff of commercial license fees.
In addition, and more important for the audience here, the court held that there is a recognizable harm based on defendant's failure to comply with the open source requirements of the GPL license. Basically, the court says that there are recognizable benefits (including economic benefits) that come from the creation and distribution of public source code, wholly apart from license fees.
This is key - if the plaintiff did not have a paid commercial licensing program, it could STILL sue for breach of contract because of this second type of harm.
That being said, none of this argument is new. There is established precedent on this point.
2) Is the breach of contract claim preempted?
Copyright law in the United States is federal law. Breach of contract is state law. A plaintiff cannot use a state law claim to enforce rights duplicative of those protected by federal copyright law.
So, what the court is looking at here, is whether there is some extra right that the breach of contract claim addresses that is not provided under copyright law.
In other words, if the only thing the breach of contract claim addressed was the right to publish or create derivative works, then it would be duplicative of the copyright claim. And, therefore, it would be preempted.
Here, the court held that there are two rights that the breach of contract claim addresses that are different from what copyright law protects: (A) the requirement to open source; and (B) compensation for "extraterritorial" infringement.
The real key here is (A), not (B). With respect to (A), the court here is saying that the GNU GPL's copyleft provisions that defendant allegedly breached are an extra right that is being enforced through the breach of contract claim that are not protected under copyright law. Therefore, the contract claim is not preempted.
(B) is a bit less significant for broader application. What (B) is saying is that because the plaintiff is suing for defendant's infringement outside the U.S. ("extraterritorial" infringement), and federal copyright law doesn't necessarily address such infringement, that's an "extra element" of the breach of contract claim. I say this is less significant because it wouldn't apply to a defendant who didn't infringe outside the United States. So, if you were the plaintiff here and the defendant was in California and only distributed the software in the U.S., argument (B) wouldn't apply.
I hope this clarifies what is/is not significant about the opinion here.
Hancom's CEO is a thief.
When Microsoft tried Secure Boot there was a huge outcry. But when HBO/Netflix/Verizon/WB demand a complete lockdown of your device (to the point where AACS 2.0 demands you have a special CPU, Motherboard, GPU and more components that lock you out and disable themselves if you use custom software/drivers), then suddenly even on HN I see a huge amount of people defending a complete lockout from your device to the point where you're not allowed to even install a custom, better, driver.
What is it about some shows/movies that would be SO DAMAGING to whole society if a few people would be able to copy them on another device or even give it to a friend?!
I dropped Netflix after whoever is in the group that decides policy for Netflix decided that Hurricane Electric's IPv6 tunnels are "a VPN" that is being used to circumvent Netflix's location checks with no warning.
(I'm aware of the DNS tricks I can do to only return IPv4 addresses in response to queries for the netflix.com zone. I choose not to do them and, instead, to not avail myself of Netflix's content.)
Why the hell would I pay extra for a Nexus/Pixel if unlocking it causes all Android software that uses DRM to start failing due to "not being compatible"?
Might as well buy a Samsung then. Or even better: an iPhone.
It's no longer your device anyway.
And to think Android was once open source. Now it's infected with DRM all the way down to the bootloader.
Such a shame. There are no free devices anymore.
Edit: to be clear, I've always been OK with DRM in apps (as opposed to HTML), because that clearly isolates it from the general-purpose bits of the platform. Seems that's no longer the case with Android.
If they're targeting consumers, why do they give a damn if the phone is rooted or not? As long as they pay netflix to stream the content to them?
Or is this "just" to prevent fake locations and such, to please their content producing/distributing overlords?
"Netflix does not value your money, therefore you cannot use our services. However, you can always pirate the content on Internet for free."
Now let someone come and disrupt this industry swamp with DRM-free video.
For example, Artem's unlocked stock Pixel is still on Widevine Level 1, the most secure level, but fails SafetyNet because it is unlocked.
(What does "unlocked stock" mean - does the Pixel ship carrier-unlocked? Was it unlocked by calling up the carrier and asking for an unlock, or in some other way?)
Surely not all Netflix content is licensed under terms which prevent it from being distributed to rooted devices.
My phone is not rooted. It is unlocked. Netflix app works fine on this phone.
> someone calling themselves "the Shadow Brokers" leaks a huge trove of classified NSA documents to WikiLeaks, who in turn dump it on the internet.
Shadow Brokers didn't leak to Wikileaks. Shadow Brokers uploaded the trove of NSA documents to `mega.nz`, and someone else downloaded the trove to GitHub. Wikileaks merely tweeted about this after it happened.
Correction: As per the well-sourced Wikipedia article, this was not the `mega.nz` leak; this was another, subsequent one. The main point still stands: Wikileaks had nothing to do with publishing the MS17-010 vulnerability.
Would be nice to stop pushing the false narrative that Wikileaks was involved in that one NSA leak.
Yep - way too implausible, even for hacker fiction.
Anyway, sounds like your book was Nostradamus-esque in depicting recent events. Maybe a bit too good :D
Having this memory absolutely changed the way I've been viewing NSA related leaks in the past few years.
Let us not forget that defense used to be part of the NSA's mission, a part that was essentially abandoned early in the 21st century.
For example, the NSA required mysterious changes to be made to the DES s-box; many assumed at the time (as did I) that the agency wanted to weaken security, but it turned out, to quote Bruce Schneier, "It took the academic community two decades to figure out that the NSA 'tweaks' actually improved the security of DES."
If you haven't read this trio of dystopian novels (you can read them in any order) you really should. Still mind blowing today.
(Admittedly he wrote them at a time, unlike today, when the US appeared to face an existential threat from terrorism. A threat that of course never materialized.)
"I believe they were trying to query an intentionally unregistered domain which would appear registered in certain sandbox environments, then once they see the domain responding, they know theyre in a sandbox the malware exits to prevent further analysis"
But as the specialist external reader said: "Stross can clearly write workmanlike, commercial prose". I can definitely agree with that!
That he wrote premier system introspection tools for Windows makes me think he must have been made privy, discreetly by colleagues, to the complexity of such things long before the DREAD and SDLC fruits were borne out in the Vista/7 era.
Anyone know what the E means in code names? There's a list somewhere, but I can't remember where now.
> I'm sorry, this is just silly.
This only goes to show that reality doesn't have to make sense to a literature critic. Only novels do.
Or was it an experiment aiming to show that fiction and documentary are two very different genres? Well, then it was successful.
Besides the false association of TSB and Wikileaks that others have mentioned, I have a huge problem with this. Someone who gets kidnapped by pirates (The Shadow Brokers) while running from a press gang (Microsoft) is still a victim. Calling them "lazy" is an easy way to avoid the hard work of apportioning blame correctly.
A hell of a lot of that blame goes to Microsoft themselves, for turning an important security update service into a marketing channel. Maybe Stross gets around to pointing that out, but I stopped reading there.
But I also had contact with academics, and like the article said, it's not so much that academics are refuting Piketty, but that they simply aren't studying the same problems that he is talking about. From what I've been told, a lot of academic work in economics is focused on incredibly unique and specific problems. It isn't "fashionable" to be studying something so broad and perhaps abstract as inequality.
>>But perhaps the greatest rebuke of Piketty to be found among academic economics is not contained in any of these overt or veiled attacks on his scholarship and interpretation, but rather in the deafening silence that greets it, as well as inequality in general, in broad swathes of the field, even to this day.
The reason for this deafening silence is simple: the truth revealed by Piketty is inconvenient, and there are no easy solutions.
If r (the rate of return on capital) is less than g (the growth in output) that means that people have no incentive to build wealth or become more intelligent to deploy that capital more profitably. If I'm never going to make more than the growth in output, why bother with capital?
The logical conclusion to that would be that everyone wants to be an employee and nobody wants to be an employer.
If I can grow my wealth more quickly than the nation's output, I'm grabbing a bigger slice of the pie. The hope with capitalism is that I'm grabbing that pie because I've earned it, and the market hopes I'll be able to steward that wealth.
So yeah, capitalism may be inherently inclined to wealth inequality because some people outperform. But do you really want it another way?
There certainly is wealth inequality in the world, but it isn't actionable to blame it on r > g. It's more effective to look at things on a micro basis. Does this person have a child that is prohibiting them from saving? Why is the person being excluded from jobs? Do they have a proper education?
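To make the r vs. g mechanics concrete, here is a toy compounding sketch (my own illustration, not Piketty's actual model): if a capital stock earns r with returns reinvested while output grows at g, capital's share of output scales by (1+r)/(1+g) each year.

```python
def capital_share_after(years, r=0.05, g=0.02, initial_share=0.2):
    """Toy model: capital compounds at rate r, total output grows at g,
    so capital's share of output scales by (1+r)/(1+g) per year.
    Capped at 1.0 since a share of output can't exceed the whole."""
    share = initial_share * ((1 + r) / (1 + g)) ** years
    return min(share, 1.0)

# With r > g the share drifts upward over decades; with r < g it shrinks,
# matching the "why bother with capital?" point above.
```

With the defaults (r = 5%, g = 2%), a 20% starting share grows past 80% of output after 50 years; flip the rates and it decays instead.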
Saying what people want to hear does not make you a good researcher. They don't take him seriously for the same reason we don't take young-earth creationist researchers seriously. It's not good research.
Edit: I'm not an economist, and I'm not going to do justice to the criticisms (which aren't hard to find), but fine, here are links:
Sorry for the bad English.
Seems in my mind to be a nice complement to achieving code-coverage with testing i.e. whereas unit/integration testing might test the various code paths with a few good/bad values, this then throws every possible input value at them to see what breaks.
Here is a direct link: https://ia801705.us.archive.org/12/items/80_Microcomputing_I...
Intro is a good watch for nostalgia and perspective; relevant Jobs interview starts @ 4:20.
Good thing they gutted it in 1995, I guess. Congress didn't want the public to find out about such facts.
> Criticism of the agency was fueled by Fat City, a 1980 book by Donald Lambro that was regarded favorably by the Reagan administration; it called OTA an "unnecessary agency" that duplicated government work done elsewhere. OTA was abolished (technically "de-funded") in the "Contract with America" period of Newt Gingrich's Republican ascendancy in Congress.
> When the 104th Congress withdrew funding for OTA, it had a full-time staff of 143 people and an annual budget of $21.9 million. The Office of Technology Assessment closed on September 29, 1995. The move was criticized at the time, including by Republican representative Amo Houghton, who commented at the time of OTA's defunding that "we are cutting off one of the most important arms of Congress when we cut off unbiased knowledge about science and technology".
> Critics of the closure saw it as an example of politics overriding science, and a variety of scientists such as biologist PZ Myers have called for the agency's reinstatement.
"That's the opinion of the Congressional Office of Technology Assessment in a 116-page report released late last year.
"'Extensive data collection and possibly surveillance by government and private organizations could, in fact, suppress or 'chill' freedoms of speech, assembly, and even religion by implicit threats contained in such collection or surveillance,' the report said....
"[T]the use of an electronic funds transfer system to gather the same type of information would be far more intrusive, since much more data, some of it of a highly personal nature, could be collected in secret."
John P. Mello, Jr., writing in 1982.
It detailed government and business computer use, and it was early, closer to 1970 than 1980 as I recall. Several pages, fairly prescient and well written.
If anyone can recognize the piece from an admittedly vague description, I'd appreciate a link. I've seen it online, if that helps.
(This is from 2006 btw but quite relevant)
That's a weird notation. I would have preferred 30.000 metric tons.
I assume at least some work has been done in the last 10 years.
I wonder if this means that Google will lead by example and extend how long they deliver updates to their own phones. They don't guarantee new updates to their current Pixel phones after October 2018, which is not good enough.
Preferably they shouldn't be able to choose. Google should be in charge of updates, and manufacturers should have to make a special effort to prevent an update, i.e. if they are certain that an update will brick their device, they would then make a formal request to Google not to send the update to their devices.
It lowers the price floor for a shiny new phone. All of these additional features are expensive to create, but they are differentiators. With this, Google has the ability to push more new features into the base OS. By getting manufacturers to conform to this standard, Google makes it easier to compete with all of those manufacturers' features.
Now it's up to them to make compelling reasons to upgrade their phones beyond apps. I see things like Google Assistant, Mapping, etc. being more integrated into the OS so that you are always in the Google system no matter what app you are currently in.
This is a big and brilliant win if they can first pull it off technically and then pull it off with compelling services. They certainly look like they are investing heavily in both.
I look forward to a $99 or $199 (or $49 if you can stomach sketchy Chinese phones) phone that just keeps getting better and better and better for free as long as the phone works. This also makes a very compelling thing to make the phone into a computer once the battery can't hold a charge, etc. Take the guts or use some kind of USB->HDMI out and make it into a TV app or a digital mirror or another internet station somewhere.
Brilliant move Google.
I would have thought new shiny software was a nice incentive to get customers to upgrade to a new phone?
As we see an increase in the diversity of applications using Android, this upgrade path will be very important. Just wait until you see your first ATM or POS system "Powered By Android".
Their abstraction with camera2 and HAL3 was a small step in this direction. Any camera with these abstractions would be able to use RAW imaging.
For example, a lot of Android phones are running 4.4 and 5.0 in this part of the world. Those versions are pretty bad, and the people who bought Android 4.4 and 5.0 phones do not know what they are missing, nor how to update their OS, since for now there is no way for them to do that.
I hope that with Treble, there will be a lot more Android phones (from Chinese makers) that can update the base Android OS to the latest version much more frequently.
Not to say this isn't a huge step forward from the status quo: if vendors contribute features and fixes to MediaServer and everybody uses the same implementation, it will be much easier to update it for all vendors.
What still sucks is that it's not going to be Google updating the Android framework; it's still the OEMs and the carriers.
Now all we need is to have Google distribute the framework over the Play Store instead of relying on OTAs, and all will be right with the world.
Also, how about using cgroups instead of the custom security model? Maybe we could get reuse out of Google's security patches for Linux, and they could benefit more from the community.
Which of these problems will Project Treble solve? E.g., have they actually added a stable driver KBI? Or pushed drivers to userspace? Or is this just about GUIs?
-- Scotty, in The Trouble With Tribbles
What was the previous "vendor integration" initiative? How long did it last? Two years? Or was it one?
Lack of vendor buy-in. Combined with Google's ADHD project support.
Nice idea, but color me skeptical.
I don't see anything that hints at a change in the fundamental cost/benefit that's driving the current mess.
Maybe I'm just projecting cynicism, because I'd actually like to be proven wrong. And bad press seems to be the only external influence on Google that actually gets through.
But in reality, there's a huge expense to all the work of updating devices to support Google's rapid change cycle for dozens or hundreds of different models, and the problem stems first and foremost from that lack of abstraction layer.
This is likely a first step to finally catching up to Windows Mobile: Making the core OS upgrade come straight from the actual OS developer, so that the company that writes the code is actually the one that updates the code.