Porsche and Renault did something similar recently. It's great to see WebGL used for this in production. Honestly, I'm surprised it took so long. Visualizing cars with WebGL seems like a no-brainer, especially when most current websites load dozens of images for their "360 views".
As this visualization demonstrates in nice 3d form.
This is both impressive, and makes me feel very old.
How is this acceptable? Shouldn't he be held accountable for this kind of stuff?
Of course there are well-known answers that mitigate these problems somewhat: 2FA, login images, etc. But I still feel as if social engineering attacks hit a really vulnerable weak spot in many systems.
(On a mostly unrelated note, can we get rid of security questions forever? I've taken to just giving nonsense answers for them and storing my answers somewhere secure. I sure don't want my passwords being reset because somebody knows my mom's maiden name...)
I'm also surprised that the government doesn't have more stringent guidelines about the private email use of its top officials.
Has there been any confirmation that this account even actually belonged to the CIA director? If yes, has there been any evidence that there was actually anything sensitive on the account? (I seriously doubt the latter)
If there was nothing on the account, how is this different from any of the other tens of thousands of AOL accounts that have been hijacked since the '90s?
Computers are pretty good at security; humans, especially underpaid and overworked helpdesk jockeys, are not.
"The findings suggest that studentswith poor mental health may be greater users of SNSs."
Fix this, please, Docker. A few more points towards black isn't going to destroy the look and feel of your page.
If I want to run locally, I have to ditch Docker entirely and just use Ansible: https://github.com/ansible/ansible-examples/tree/master/mong...
I think Docker is just passing out of early-adopter status in terms of actual production usage (it's much further along in its lifecycle for dev), and having one of many cloud Docker providers owned by Docker itself might have a chilling effect on other 'container in the cloud' providers using Docker as their primary container format/platform.
In 2011 dotCloud launches as a platform-as-a-service company.
In 2013 dotCloud releases Docker, software based on the lessons they learned building their PaaS product.
In 2013 Tutum starts to build a PaaS based on Docker.
In 2014 Docker (renamed from dotCloud) sells their PaaS to cloudControl.
In 2015 Docker buys Tutum.
I switched to Heroku only to realize that I had the same problem there too. Obviously it was an issue in my app, but at least Heroku gave me a specific R14 error code ("Memory quota exceeded") and a description of what was happening, and I finally knew what I was dealing with. For the 48h I spent debugging the memory leak I had my dynos switched to 1X to get even more resource metrics; once the issue was solved I switched my dynos back to hobby.
I'm considering going back to Tutum now that I have deferpanic installed and configured in my app, and my Heroku bill is around 100 USD monthly (20 USD SSL endpoint x 3 + 7 USD hobby dyno x 3 + 22.50 USD Compose RethinkDB, so 103.50 USD). But I was shocked to realize how much value a mature PaaS can deliver, even for a hobby-ish app like mine.
Searching for "Tutum video introduction" on a search engine only returns results about a certain movie star, which is not terribly helpful.
Anyone know how that actually works? Is it similar to Flocker at all?
Updated the tentative pricing image URL, as it looks like someone has deleted it off the Tutum Slack team site.
Congrats to the Tutum team; they have built a really nice product that makes it really easy to build and maintain container pipelines.
Can't wait to see the integrations with the other docker tools...
That is what killed Safe Harbor, and none of the proposals at the end of the article would be immune to the same weakness.
If the US government can continue to override international treaties and its own courts, the proposals would still fall short of the clear ruling that the European court gave.
Let each country have its own set of rules, and have all countries respect those rules for data located in the hosting country.
The idea that each country must be exactly the same and data is by default available for transmission across borders is only to the benefit of multinational companies.
> Microsoft stands in contempt of court right now for refusing to hand over to US authorities, emails held in its Irish data centre. This case will surely go to the Supreme Court and will be an extremely important determination for the cloud business, and any company or individual using data centre storage. If Microsoft loses, US multinationals will be left scrambling to somehow, legally firewall off their EU-based data centres from US government reach.
At the moment, data can be held within the EU by US companies and it's all ok. If Microsoft is forced to hand over emails stored within the EU to the US government, then all bets are off.
In that future, it may not even be enough to have an EU-based subsidiary of a US company hold data within the EU, since it'll have been shown that the U.S. government can coerce them.
We like to talk about large companies like Microsoft, Apple, Facebook, and Google, but they can throw money, lawyers, and engineers at this problem. The thousands of US-based SaaS apps do not have that luxury. Likewise, there are thousands of small EU-based SaaS apps that will have everything from their hosting stack, to their bug tracker, to their communications tools taken away from them.
The 'privacy is dead' crowd should really take notice of this article.
Third, there should be an exception to this approach for citizens who move physically across the Atlantic. For example, the U.S. government should be permitted to turn solely to its own courts under U.S. law to obtain data about EU citizens that move to the United States...
I'm getting an Internal Server Error on the original page.
It depends on the rules.
For example: privacy of communications has no intrinsic dependence on technology. Security of personal data requires the verification of said security (or the commitment to it), etc...
I do not know about this specific law. But just because a law is old does not mean that it is bad. And this is what Microsoft is saying.
After hundreds of years of slavery, it was abolished in the US in a single day. So what? Is this bad?
Well, Microsoft is wrong here to believe that the Judicial Redress Act will be sufficient. The CJEU has required "essentially equivalent" privacy protections for EU citizens to what they get in the EU.
The US Privacy Act does not give them that, so the Judicial Redress Act falls short.
The US needs to pass a much stronger privacy law that is "at least" as good as the one in the EU, if it wants its companies to continue to get EU citizen data (and I assume it does). It can start by finally reforming the ECPA for the 21st century.
This is a great article to remind us how bogus that argument was.
See this presentation, slide 12. Right from the horse's mouth:
Reason 7 [to have a holding company in Holland]: Fiscal climate: Very competitive tax climate from its far-reaching tax treaty network to the possibility to conclude socalled[sic] advance tax rulings.
These tax deals usually take the form of a fixed tax guarantee: the company agrees to place their holding company in the Netherlands and pay X euros in tax for the next N years (2 to 5), regardless of their actual revenue or profit. For the Dutch government this is just free tax revenue and if they don't make a sweetheart deal with the multinational the holding company would end up in Luxembourg or Ireland instead. This way the multinational can make the countries fight for the most preposterously low offer.
In the end, you can recover the same money by taxing dividends and income accordingly, and it's far more difficult to hide those (a person's residency is less ambiguous than a corporation's; and while you can try to play games, with proper enforcement you will end up in jail for doing it).
Of course any politician will get castigated for suggesting removing the tax entirely, but that's just politics and not sound economic policy.
This may be a small step in getting global players to play fairer, but from what I can tell this is still cheating the system and depriving countries and their citizens of badly needed tax income. All while competing unfairly with smaller non-global companies.
I don't get why the EU still hasn't managed to get this under control.
I'm asking because the article mentioned some people staying only for a few weeks. In a small startup, not seeing traction would be reasonable cause for departure, since your paycheck directly depends on it. In a large company, things are not always interrelated in the short term, so there is more time to try things out. I guess the question is: how long should one try to make a new job work?
Microsoft has Windows, Office and Xbox.
Amazon has massive web server and product distribution platforms.
Apple has OS X and iPhone.
Google has Android, search, ads, and an online office suite.
What does Yahoo do? I guess they have some long-time email users and a well-liked stock tracking platform?
>> Just last week, Yahoo lost two senior women execs: development head Jackie Reses to Square and marketing partnerships head Lisa Licht.
>> Before that, another exec once close to Mayer, CMO Kathy Savitt, left for a job at a Hollywood entertainment company, although sources said that was due in part to increased estrangement between her and Mayer.
She has been the CEO of a publicly traded company for 3 years, one that was "seemingly" in trouble. 3 years would be a long time at a company that was itself founded 3 years ago, but Yahoo is 21 years old and still alive (and doing somewhat well).
Also +1 for being a Clojure app. I am going on vacation tomorrow and am loading the code on my tiny travel laptop for reading.
How does it compare to re:dash? https://github.com/EverythingMe/redash
Metabase seems to have a GUI to construct SQL queries which re:dash doesn't. What else is different?
But here WD bought SD for ~$85-86 per share when it was trading at ~$75 per share, at least a 13% premium ((85 - 75) / 75 ≈ 13.3%).
Does it simply mean that WD expects SD to rise in value rapidly? Or have I got acquisition valuation wrong?
I asked Tom Callaway from Red Hat about it and he said "I'm not a fan, I think its a poor decision, but I also appreciate that I might be in the minority these days." 
Hopefully, once enough people have been burned by the apparent convenience of bundling, we'll see the tide change. Maybe after Dockerization has run its course.
The metaphor doesn't pan out. The third is canonizing a technical error.
There were some serious discussions about this on the golang-nuts and Fedora MLs some time ago, where lsm5 lamented the issues Fedora faces when upstream simply won't remove vendored libs.
Do they have the law on their side? Yeah. So did the Inquisition when it sentenced Galileo to house arrest for life for promoting heliocentrism. That doesn't mean they're in the right; it means the law is in the wrong.
Hopefully this ends up in law next year.
I say "over 100%" because several times I've had hard copies sent for whatever reason with hand-written letters thanking me for expressing interest in their research and letting me know they'd be happy to answer any questions, etc.
I've generally found that some researchers, especially in relatively arcane areas, are very pleased to find people who are genuinely interested in their work.
I only appeal to authors directly if I'm unable to access a paper online through my library's JSTOR access, which is fairly extensive.
His table of particularly overpriced journals in economics is dominated by Elsevier journals: http://www.econ.ucsb.edu/~tedb/Journals/roguejournals02.html
Hopefully we see more academics collectively abandoning such journals, as Knuth and the Journal of Algorithms board did, along with these other examples from Ted's website: http://www.econ.ucsb.edu/%7Etedb/Journals/alternatives.html
Aaron Swartz - Guerilla Open Access Manifesto
I realize not everyone is on top of internet culture and slang, but reading that "#icanhazpdf" is a "secret codeword" makes me wonder if the whole piece is tongue-in-cheek ("I am shocked, absolutely shocked to find gambling in here!") or if the author really has discovered the internet for the first time.
Living in a developing country, you learn to ignore copyright or you never learn anything. I don't know if it was invented as a way for developed countries to keep a competitive advantage, but it sure would work that way if people actually obeyed it.
Some food for thought: science is mostly funded by public money. A small portion of that money goes to paying scientists - the rest goes on products and services bought in the process of research. Some of these are necessary. But publishing takes a large chunk of that funding stream - they charge us thousands of dollars to put articles we write on their website. In almost all cases they add no value at all. Then, they charge us, and anybody else, to read what we wrote.
But maybe it just costs that much? There are two issues here: firstly, for-profit academic publishers have some of the highest profit margins of any large business (35-40%). Secondly, they are charging thousands of dollars for something that with modern technology should be nearly free. They are technically incompetent to the extreme - not capable of running an internet company that really serves the needs of science or scientists.
They systematically take money that was intended to pay for science, and they do it by a mixture of exploiting their historical position as knowledge curators and abusing intellectual property law. They also work very hard to keep the system working how it is (why wouldn't they? $_$): by political pressure, by exploitative relationships with educational institutions, by FUD, and by engineering the incentive structure of professional science by aggressively promoting 'glamour' and 'impact' publications as a measure of success.
The biggest publishers are holding science back, preventing progress to maximise their profit. We need to cut them out, and cut them down. Take back our knowledge and rebuild the incentives and mechanisms of science without them being involved.
I'm glad they've found a workaround but that being said, opening a PDF attachment coming from god knows where isn't the best idea. I hope they're being careful.
Especially considering that the research and the writing are done by scientists, and the review is done by other scientists, for free. The authors even pay a lot of money to get published. So I wonder what justifies these price tags for offering a PDF for download.
Don't get me wrong: I can still see a role for publishers in the scientific world. But perhaps the monetization model should be reworked... As the article said: let's see how this whole publishing world will change. Open Access and comparable models are becoming more and more popular.
With that said: I'm a Nature subscriber, and I'm pleased to see the emphasis on "Open Access" by many scientists and organizations. Hopefully this trend will continue, and silly issues like individuals requesting PDFs from fellow scientists won't be termed "piracy".
This is what makes the situation profoundly more complex compared to other applications of copyright, say in the software industry, where switching to an open source model clearly doesn't change the incentives, i.e. who assesses the quality of the software.
The long-term effects on academia of switching to a model where the taxpayer gives money to scientists to pay for open access submission of their research are hard to evaluate, and do not get enough thought (imho).
That clearly doesn't mean that there aren't bad journals that are not OA, nor that for the benefit of the public some sort of arrangement shouldn't be found for older research: I'm a big believer in "faster decaying" copyright in general, and mandating that all publications describing research that is publicly funded become OA after, say, 30 years, would help significantly.
The other trick I recommend people try if they frequently have trouble finding papers is to try EndNote. It is a little expensive, but I found it to be great at finding papers that I couldn't get through the official sources with my school's access.
I assume the problem is that Elsevier doesn't much like when articles are also made available outside their publications? Well, then either starve them of all publicly funded content or just have them accept that all the publicly funded content will always be available outside their publications. It's as simple as that.
How hard would it be to pass a law requiring that publicly funded research be publicly available? Why aren't such proposals made? If they are, what has stopped this from already being law?
Why is it assumed that there is no public record of the paper changing hands? They tweet the request publicly, so it stands to reason that someone is paying attention and archiving. I suppose the key word here is "public", but I'm not sure why that matters if the goal is covering up illegal activity.
Taxpayer money going to research that is not available to continue science = Flawed Policy
How can an article about this not mention Aaron Swartz?
I'm all for free papers, by the way; nothing is more annoying than researching things and hitting paywalls. But someone has got to pay the people doing the publishing work.
Also: if I order a paper from our library or I download it myself, it often comes with an on-the-fly generated cover page with my IP address on it. One can remove that, certainly, but there may be other mechanisms to tag papers. Amazon reportedly investigated (and implemented?) putting specific, unique errors in DRM-free ebook copies to identify sources of piracy. So I wouldn't advise you to just send the PDF around, unless maybe you are the author and have a PDF that did not go through the publishing process.
Still loving the initiative though ;)
Even better, publish your articles 'for free'
Some make serious money out of scope creep in IT projects. Sometimes I think that this must be how the major consultancies are sustaining themselves.
Someone wrote that the /average/ IT project ran 10x over budget. This would be completely unacceptable in any domain but IT.
The only way that could possibly work is if the original systems are grossly repetitive and unoriginal (variations on 2 themes, perhaps?).
Sure, but common sense would guide programmers about such tradeoffs. The extra time spent loading an additional library dependency would be amortized over the total execution time of the program -- IF -- the program makes repeated use of the library.
One of the reasons to use a library the author didn't touch on was "insurance against unknown edge cases." For example, I could attempt to write 50 lines of code to uppercase a lowercase Unicode string. However, my attempt would have bugs in it. Instead, it would be more prudent to use the ICU library. It's a hassle to add that dependency and it's many thousands of lines more than my "simple" program but the ICU developers covered more edge cases than I ever thought of.
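To make that concrete, here's a tiny sketch (mine, not from ICU's docs) of the kind of locale-sensitive edge case a hand-rolled uppercaser misses; in JS/TS, toLocaleUpperCase is the built-in route to ICU-style casing data:

    // Case mapping is locale-sensitive and not always 1:1, which is what
    // breaks hand-rolled uppercasing.
    console.log("istanbul".toUpperCase());              // "ISTANBUL"
    console.log("istanbul".toLocaleUpperCase("tr-TR")); // "İSTANBUL" (Turkish dotted İ)
    console.log("straße".toUpperCase());                // "STRASSE" (ß expands to "SS")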
Most modern software has incredibly complex dependency chains and that's what makes it fragile and unpredictable most of the time. If we focus on making the languages, runtimes and core libraries flexible enough that we don't need to assemble code from dozens of hobbyist GitHub projects to put up an app with a reasonably modern UI, we would make a huge step forward.
* License: is it compatible with the rest of your code?
* Relative size of the module vs. what you need from it. The size of the module introduces a non-trivial maintenance burden.
* Difficulty of just doing it yourself. Is the thing you need to do non-trivial, like, say, encryption or distributed consensus?
* Is the module compatible with the internals of your system? Will it require significant changes to how your code or application works?
* Platform support: will the module work on all the platforms your code/application will run on?
All of these things influence the decision. A knee-jerk response of "Just use X" may be right, but you'll find yourself in the position of not knowing why it's right, and thus unable to adjust if it ever stops being right.
It's doubly frustrating when a library function is obsoleted and replaced wholesale rather than having its defects fixed.
And file I/O is triply frustrating because there are all sorts of corner cases, plus leaked abstractions from the file system and hardware.
The moral of the story is that you should never use libraries, and never do any reading or writing of files. Wait, what?
...would've said that module embodied the Worse is Better philosophy: something that snowballed into a mess with uptake and feature creep. Just guessing: I haven't read its source or anything.
Conciseness and correctness often go hand in hand. Your approach is likely closer to The Right Thing. ;)
One is the role of technology in sports, which is really interesting. In other sports there is a lot of debate over what technology is allowed and what isn't. I would be really interested to see things like how far someone could hit a golf ball if there were no restrictions on the club or ball, or how fast a person could swim if there were no restrictions on the swimsuits.
The second thing, however, is that the text of the story and the video have different focuses. The text focuses on telling the story of how this is a grassroots movement by some athletes. The video, however, seems to have a more pronounced undercurrent that this might really be about one company, BalancePlus, trying to put pressure against an upstart competitor, icePad, who is eating into their market share. I think it is really interesting that the text doesn't emphasize this as much as the video does.
I applaud this, but I guess ultimately, either directly or indirectly, the sponsors still decide who is on a team. Or in other words, teams with more money will still be better teams.
I feel that once a device gets past the point where the lack of vertical resolution limits productivity, the marginal utility offered by even more vertical resolution becomes rather insignificant, to the point where I'd probably benefit more from having more horizontal resolution for things like snapping windows side-by-side and not having black bars on 16:9 video.
I typically skip a generation when upgrading machines, but the Pro 4 (based on this review) solves all the small quibbles I had with mine. It's looking like I am going to upgrade to the Surface Pro with Iris graphics. Still on the fence about the Book; I don't really need the laptopness.
Where would Microsoft be today if they had given up on their OEMs 5 years sooner, and gone head to head with Apple on hardware?
It's my terminal workflow that I can only use on a Mac/Linux: Zsh, Vim, Tmux, LaTeX, Python, and above all a package manager (Apt on Linux, Homebrew on the Mac). Most of it I can configure on a new system by pulling the config files from my GitHub repository in less than 15 minutes after a fresh install.
That's what's mainly keeping me from going back to Windows (that, and the application update process, which is still awful as far as I can see from my VMware installation of Windows 10... seriously, updating the various components of Visual Studio in a semi-manual way is just ridiculous). Is there any good alternative for the shell on Windows that doesn't involve considerable tinkering?
This paradox hinges on the strange notion of cardinality of infinite sets. Specifically, the set of all even integers, the set of all odd integers, and the set of all integers(!) have the same cardinality, and therefore the same "size".
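For anyone rusty on the definitions: two sets have the same cardinality when a bijection exists between them, and for the integers and the evens one is easy to write down:

    f : \mathbb{Z} \to 2\mathbb{Z}, \qquad f(n) = 2n

Since f is one-to-one and onto, the "half-sized" set of even integers is exactly as big as all of the integers.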
>Suppose the hotel is next to an ocean, and an infinite number of aircraft carriers arrive, each bearing an infinite number of coaches, each with an infinite number of passengers.
How would we extend that?
Suppose we have an infinite number of passengers, carried by an infinite number of coaches, transported by an infinite number of aircraft carriers, shoved in by an infinite number of tsunamis, which occur on an infinite number of continents, on an infinite number of Dyson spheres...
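For any finite number of nesting levels this still works out: tag each guest with their coordinates and assign rooms via prime powers (injectivity, which is all we need, follows from unique factorization):

    \mathrm{room}(a_1, a_2, \ldots, a_k) = 2^{a_1} \cdot 3^{a_2} \cdots p_k^{a_k}

But if the nesting is genuinely infinite, so that each passenger is identified by an infinite sequence of numbers, Cantor's diagonal argument shows there are uncountably many passengers, and this time the hotel really is too small.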
The bugger that haunts human cryonics is that thawing is never perfect because the cryoprotectants used to prevent ice-crystals within the cells are usually toxic. If you freeze cells that are measured to be 100% viable/alive at the time (very common) then thaw them using best practices, you're going to have some cell death-- maybe 1-5% if you're fast (less time spent in toxic cryoprotectant) and lucky. If you're unlucky or slow, you can look at 25-45% of your originally healthy cells being dead upon completion of the thawing process. The remaining cells are usually extremely discombobulated, and can take days to return to their baseline. This is completely fine if you're tooling around in a research lab or industrial lab, but even a 1% loss is probably too much for a human brain to bear and remain the same as before.
I suppose that if you work under the assumption that the future technology cryonics relies on for thawing will exist, cell loss during thaw will not be a problem; I find this possibility to be fairly likely over a long time span. Alternatively, you could assume that there will be advanced ways of restoring brain function or generating fresh neurons after systemic damage-- quite a stretch if you ask me, but it's conceivable. I think that ultimately the goals of cryonics will be scientifically realizable for those who were most recently preserved.
You might also look at the Alcor FAQ for scientists:
Some of the ongoing information generated by the Brain Preservation Foundation's technology prize competition is also interesting.
The perspective of the BPF folk is perhaps a useful calibration point for those coming to this as a new topic: they are critical of cryonics for some detailed technical reasons (with plenty of room for debate) and think that plastination should be developed as an alternative technology, but they are firm supporters of the concept of brain preservation and of the evidence to date for fine-structure preservation. For example, see this response to an earlier and very shoddy article critiquing cryonics at the Technology Review:
# Diff two HTTP responses directly, using bash process substitution:
diff <(curl -sS -L https://httpbin.org/get) <(curl -sS -L https://httpbin.org/get?show_env=1)
See also Mergely, which supports diffing URLs: http://pixelbeat/programming/diffs/#mergely
(1) Terminal based
(2) Supported other types of HTTP requests
(3) Supported request body
(4) Allowed editing request headers
(5) Wasn't so easily exploitable to be used as a proxy or a DDoS relay (a server-side bummer).
For people wanting to have a CLI tool instead, John Graham-Cumming's httpdiff might be worth looking at. https://github.com/jgrahamc/httpdiff
1) What's the diff logic? At first glance, it looks like JSON is reformatted (maybe canonicalized in some way) and then a line-by-line diff is applied. Is there more to it? Since the tool seems JSON-aware, I was surprised to see an added trailing comma show up as a difference. (A rough sketch of what I'm imagining follows after these questions.)
2) Do you have plans to expand the kind of HTTP requests users can make? It would be nice to use different verbs, headers, and request bodies. Runscope has a similar tool built in that I believe (haven't tried it yet) allows a bit more flexibility, but it would be nice to have a standalone tool available.
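On question 1, here is a minimal sketch in TypeScript of the approach I'd guess at (my reconstruction, not the tool's actual code): recursively sort object keys, pretty-print both documents, then compare line by line.

    // Canonicalize JSON by sorting object keys recursively, so the diff
    // reflects content rather than key order.
    function canonicalize(value: unknown): unknown {
      if (Array.isArray(value)) return value.map(canonicalize);
      if (value !== null && typeof value === "object") {
        const out: Record<string, unknown> = {};
        for (const key of Object.keys(value).sort()) {
          out[key] = canonicalize((value as Record<string, unknown>)[key]);
        }
        return out;
      }
      return value;
    }

    function jsonDiff(a: string, b: string): string[] {
      const la = JSON.stringify(canonicalize(JSON.parse(a)), null, 2).split("\n");
      const lb = JSON.stringify(canonicalize(JSON.parse(b)), null, 2).split("\n");
      // Naive positional comparison; a real tool would use an LCS-based diff.
      const diff: string[] = [];
      for (let i = 0; i < Math.max(la.length, lb.length); i++) {
        if (la[i] !== lb[i]) diff.push(`- ${la[i] ?? ""}`, `+ ${lb[i] ?? ""}`);
      }
      return diff;
    }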
A daemon that sends to two backends, and diffs the results.
It renders a set of pages in a headless browser, compares them visually and alerts you if something changed.
Just a few lines of bash as you can see. But it turned out to be pretty useful. UrlDiff is a regular part of our regression testing at Product Chart now.
From JS Console:
[Error] TypeError: undefined is not a function (evaluating 'Array.from(e)')
    _toConsumableArray2 (app.min.js.pagespeed.ce.ozGaCBt6Kj.js, line 1)
    s (app.min.js.pagespeed.ce.ozGaCBt6Kj.js, line 1)
    f (app.min.js.pagespeed.ce.ozGaCBt6Kj.js, line 1)
    onload (app.min.js.pagespeed.ce.ozGaCBt6Kj.js, line 1)
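Array.from is ES2015, so this looks like a transpiled bundle (note the _toConsumableArray helper) running in an older browser without a polyfill. A minimal guard, loaded before the bundle, might look like the sketch below; it only covers array-likes, while a real polyfill (e.g. core-js) also handles iterables.

    // Hypothetical minimal shim: only handles array-like values, which is
    // the case Babel's _toConsumableArray helper hits here.
    if (typeof Array.from !== "function") {
      (Array as any).from = (arrayLike: ArrayLike<unknown>) =>
        Array.prototype.slice.call(arrayLike);
    }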
Just the other day I needed something similar and was disappointed that I couldn't find it.
I wanted to discuss something with a remote colleague and to illustrate it I wanted a visual diff of two files. I was hoping there was a nice little web app offering this but I was forced to screenshare (I could have terminal-shared but it was more hassle).
I was hoping for something like Etherpad but with a live visual diff.
A side question: What in the POWER architecture makes it hard to implement? I was told the addressing modes are complicated enough that it will always be slower and harder to create than other processors. I'm wondering if this is urban myth or has some basis in reality?
1) http://www.amazon.com/The-Race-New-Game-Machine/dp/080653101... with a lot of articles written about the book such as http://www.wsj.com/articles/SB123069467545545011
>An HTTP 200 is returned on successful completion, and HTTP 500 is returned in the case of an error (i.e. an exception). Note that exceptions are intended to represent unexpected failures, not application-specific errors. No other HTTP status codes are supported.
Now you are just using HTTP as your transport layer; you can very well customize it to your particular needs rather than defining a spec. This might result in easier client code, but why not use Thrift if that is what you need?
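Under that convention, a client reduces to "parse the body on 200, throw on anything else". A rough sketch, where the {method, params} envelope and endpoint shape are my own invention rather than anything from the spec:

    // Minimal client for a 200-or-500 RPC convention.
    async function rpcCall<T>(url: string, method: string, params: unknown): Promise<T> {
      const res = await fetch(url, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ method, params }),
      });
      if (!res.ok) {
        // Per the quoted spec, 500 means an unexpected failure (an exception);
        // application-specific errors travel inside a 200 response body.
        throw new Error(`RPC failure: HTTP ${res.status}`);
      }
      return res.json() as Promise<T>;
    }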
I see that it is using JSON instead of XML.
While I may personally feel that JSON is a better format than XML, there are implementations of XML-RPC for almost all languages and platforms, which is a huge advantage.
> Get the latest Flash player to view this content
No, I won't. This scares me too much: https://www.cvedetails.com/vulnerability-list/vendor_id-53/p...
>> Can you test X next?
> Tests take about a month. I do take suggestions into consideration, but I can't promise you anything. Backlog is a few years long at this point.
I would love to see the multixact data corruption problems introduced in 9.3 analyzed, and see if he can verify them to be solved in the latest version.
Linters are more than powerful enough by now.
Written style guides are good for understanding why, but linters actually help others adapt to a style quicker.
In a traditional web app, we have 4 layers: client views, client app, server app, database. React, described as a strict view layer, in reality is being used as much more. At this point, it is not just consuming the client app, but is also taking nibbles at the server app as well.
To each their own of course, but I would ask people to hesitate over these decisions. The architectural issues with monolithic views are well known, and just because we have a shiny new tool does not mean we should cast that understanding aside.
Source: I work full-time on a React and Backbone app
 https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API https://github.com/github/fetch
>Do not use Backbone models
I use Backbone models for ajax since it makes decisions such as PUT vs POST and model.save() looks cleaner than $.ajax. Also, Backbone collections provide a declarative way to handle sorting and duplicate models. But these models are internal to the Store and not exposed to the views. I'm still a React newbie. Is this a valid reason to continue using Backbone?
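For what it's worth, the decision Backbone makes there: model.save() consults model.isNew() (no id attribute yet) and issues a POST to create or a PUT to update. Roughly, in plain-fetch terms (my sketch, not Backbone's source):

    // What Backbone's model.save() decides for you: POST to the collection
    // URL to create, PUT to the model URL to update. urlRoot is hypothetical.
    async function save(urlRoot: string, attrs: { id?: string; [k: string]: unknown }) {
      const isNew = attrs.id == null; // Backbone: model.isNew()
      const res = await fetch(isNew ? urlRoot : `${urlRoot}/${attrs.id}`, {
        method: isNew ? "POST" : "PUT",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(attrs),
      });
      return res.json();
    }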
2. It seems as though Khan Academy do not use React for SVG elements in their interactive exercises. For example, https://www.khanacademy.org/math/geometry/transformations/hs... Do you plan to migrate SVG to React?
Even though John Resig is one of their main devs.
Never use jQuery for DOM manipulation
Are there any CSS frameworks that have been converted to JS but are not their own components yet? It's easy to find React-Bootstrap, but that comes with ready-made components; I am looking for styling that's purely in JS so I can make my own components.
Also would a route-component be considered logic or presentation, or maybe it is its own thing and they forgot to mention it?
This seems a little strong. What is the reason for this guideline? I know of many projects that are combining the use of React with Backbone.
In all seriousness, though, I appreciate the brevity of this guide. It can be quickly read and understood, and is not the fully-fledged book I've seen from other places.
I'd even say that the only place where floating point is necessary is in simulations (physics, 3D, analog signals; all of it should properly be done on GPUs). Everything else (2D layouts, finance, data processing) is better served by either rationals or decimals.
We should remove floating point support from general-purpose CPUs and leave it to GPUs, where it belongs.
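The standard example of why binary floats bite in finance, plus the integer-minor-units workaround (a sketch; real systems often reach for a decimal library instead):

    // 0.1 has no exact binary representation, so float arithmetic drifts:
    console.log(0.1 + 0.2 === 0.3); // false
    console.log(0.1 + 0.2);         // 0.30000000000000004

    // Doing currency math in integer cents sidesteps the problem:
    const totalCents = 10 + 20;     // $0.10 + $0.20
    console.log(totalCents === 30); // true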
Like I mentioned before, I just spent 6 months in the deep bowels of the criminal justice system for drug possession here in Florida, and those observations have shaped this view.
The cops, the judges, the lawyers on both sides (who eventually become the politicians that make the laws), the clerks, the guards (do they hate being called that!), the jail/prison administrators...they ALL are making pretty awesome livings from the war on drugs, and have zero repeat ZERO incentive to change anything.
The problem is not a legal one, it's economic.
The public is fed the propaganda of wrecked lives and violence to keep the status quo. Until the population somehow wakes up and sees that the CJS is totally broken, and perhaps even more corrupt than the drug game, I doubt anything can change.
The usual way of doing it is simply lots of lobbying: Paying "well meaning" people, journalists and politicians to stay on the track of keeping most drugs illegal, so that the drug lords and cartels can continue to earn money.
So does anybody really wonder why the (obviously totally pointless) "war on drugs" is still being waged, and will probably be waged for quite a while?
I should be free to experiment with my own consciousness (so long as it does not impede on the rights of others) - how much more personal can you get? For the government to impede on this is unconscionable (pun intended).
We live within societies where the way we control substances creates a huge social and financial cost.
We'd be better off legalising every drug and creating a social framework within which these drugs can be taxed, consumed safely, and users supported.
Case in point is the legalisation of cannabis which I hope is a stepping stone to legalisation of all recreational drugs.
Economic issues, politics, racism.... oppression on various fronts from the system down to the family to the self.
Drugs (and alcohol) are an escape. Normal activities also provide an escape when people get obsessed with them. TV, video games, food (my escape), exercise for some addicts, sex, etc.
Just legalize cannabis and decriminalize other drugs already.
As such, hopefully I can quibble with the text a bit.
"Ceasing this hypocritical practice by releasing nonviolent offenders is morally urgent."
Yeah, not so much. Yes, there is a severe moral problem here, but please do not make moral arguments! It's folks with moral arguments riding around on high horses that got us into this mess. Instead, argue from the standpoint of practicality (which she does).
One of the practicality arguments she does not make, which deserves mentioning, is that because the drug war is unwinnable, there are too many laws. This makes folks with the power of selective enforcement lords over the rest of us.
Have a traffic stop? Cops ask to search your car? You have a right to say "no". But if you do, be prepared to wait around until the drug dog shows up. He'll sniff around your car and "alert" the cops, even if there's no drugs present. Then, guess what? They get to tear apart your car while you watch. All because of the war on drugs.
Let's say you are a drug user. You have a joint in the ashtray. In this case, it gets even better. Then -- if I'm not mistaken -- they get to take your car! A few dollars worth of illegal pot, which might not even be yours, and you could lose tens of thousands of dollars worth of car.
It's not that this is morally outrageous. It certainly is. It's that a system of justice cannot maintain the consent of the governed when it turns LE officers into something approaching highway bandits. Selective enforcement of drug laws -- both by cops and prosecutors -- distorts the legal system so much as to make it unworkable. Sure, it's bad, but the bigger point is that it cannot continue working in this fashion. Something's gotta give.
I liked the article. It's good to see public discourse slowly become much more reasonable about drug addiction and its consequences. One caution, though: in my opinion what we need to do is still stay tough on violent, hardened criminals while being more pragmatic about drug crimes. Otherwise we'll end up being slandered as soft-headed and irrational.
Increases in robbery and petty crime closely track both the rise in drug use and the violence associated with it. But the war on drugs' main influence on this violence has been to keep the pressure on drug dealers, increasing the risk of selling the product and making it less available, thus driving up prices, increasing competition, and thereby promoting violence between drug dealers, and by drug users trying to afford what they're addicted to.
And all of this leads to increased incarceration, not just from drug charges, but from the increased violence associated with the drug trade, gang warfare, and an unlawful under-society where people do whatever they can to get by.
A flow chart would make it a bit easier to grok, but basically the drug war throws fuel on a fire that was only simmering before.
* code/data duality
* direct manipulation of the program
* program always running and you edit the VM image, instead of editing sources and starting the program again
* ``[t]he application's user interface and a view of the application's structure sit side-by-side in front of the developer/user''
* ``[t]here is only one form of the application and there is no translator.''
For example, a person using the Pages application on a Mac might be inclined to distribute their writings via PDF using Mac-typical fonts utilizing Apple-inspired layout and formatting options. You know, instead of just making a web page or using a web publishing tool.
Despite the annoying format, I did read the PDF, and while I like the sentiment, I can't help but think it's a lot of wishful thinking. People have been trying to make visual programming tools (i.e. tools with real-time positive and negative brain feedback loops) for a very long time now, and they always come up short.
Viewing the labor of programming in real-time makes sense though and I think we can get there for a lot of use cases. Taking advantage of the web and interpreted languages (e.g. Python) or instant-compiling languages (e.g. Go) is probably how we'll do it.
The opposite of progress in this area would be using extremely verbose languages like Java or C#, or languages that require a lot of preprocessing and/or compiling and/or complicated build and deployment processes. Java has got to be the worst here, with slow startup times and complicated (often extremely time-consuming) deployment processes... And that's just for the IDE! Haha