Another clear advantage of R is that it is embedded into so many other tools: Ruby, C++, Java, Postgres, SQL Server (2016); I'm sure there are others.
But when you start having to massage the data in the language (database lookups, integrating datasets, more complicated logic), Python is the better "general-purpose" language. There is a pretty steep learning curve to grok R's internal data representations and how things work.
The better part of this comparison, in my opinion, is how to perform similar tasks in each language. It would be more beneficial to have a comparison along the lines of: here is where Python/Pandas is good, here is where R is better, and how to switch between them. Another way of saying this is figuring out when something is too hard in R and it's time to flip to Python for a while...
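To make the "here is where Python/Pandas is good" half concrete, here is a minimal, hypothetical sketch of the join-then-aggregate kind of data massaging mentioned above (all names and numbers invented):

    import pandas as pd

    # Hypothetical datasets: orders and a customer lookup table.
    orders = pd.DataFrame({
        "customer_id": [1, 2, 1, 3],
        "amount": [20.0, 35.5, 12.25, 8.0],
    })
    customers = pd.DataFrame({
        "customer_id": [1, 2, 3],
        "region": ["west", "east", "west"],
    })

    # Database-style lookup (a left join), then a grouped aggregate.
    merged = orders.merge(customers, on="customer_id", how="left")
    print(merged.groupby("region")["amount"].sum())

The equivalent in R (merge plus aggregate, or dplyr) is perfectly doable; the point above is just that the general-purpose glue around steps like this tends to be easier in Python.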
There are scant few articles on going from Python to R... and I think that has given me a lot of reason to hesitate. One of the big assets of R is Hadley Wickham... the amount and variety of work he has contributed is prodigious (not just ggplot2, but everything from data cleaning, web scraping, dev tools, time handling a la moment.js, and books). But that's not just evidence of how generous and talented Wickham is; it's also evidence of how relatively little dev support there is in R. If something breaks in ggplot2 -- or any of the many libraries he's involved in -- he's often the one to respond to the ticket. He's only one person. There are many talented developers in R, but it's not quite a deep open-source ecosystem and community yet.
Also, a word of warning: ggplot2 (as of 2014) is in maintenance mode, and Wickham is focused on ggvis, which will be a web visualization library. I don't know if there has been much talk about non-Hadley-Wickham people taking over ggplot2 and expanding it... it seems more that people are content to follow him into ggvis, even though a static viz library is still very valuable.
IMO, R's dispatch system is actually more powerful and intuitive -- e.g. it is fairly straightforward to write a generic function dosomething(x, y) that dispatches to specific code depending on the classes of both x and y.
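That's R's S4-style multiple dispatch. For contrast, Python's functools.singledispatch only dispatches on the first argument, so dispatching on both x and y takes a hand-rolled registry; a purely illustrative sketch:

    # Illustrative only: emulating two-argument dispatch in Python,
    # which R's generics give you out of the box.
    _methods = {}

    def method_for(xcls, ycls):
        def register(fn):
            _methods[(xcls, ycls)] = fn
            return fn
        return register

    def dosomething(x, y):
        fn = _methods.get((type(x), type(y)))
        if fn is None:
            raise TypeError(f"no method for ({type(x).__name__}, {type(y).__name__})")
        return fn(x, y)

    @method_for(int, str)
    def _(x, y):
        return f"int meets str: {x}, {y}"

    @method_for(str, int)
    def _(x, y):
        return f"str meets int: {x}, {y}"

    print(dosomething(1, "a"))  # int meets str: 1, a
    print(dosomething("a", 1))  # str meets int: a, 1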
If you need cutting-edge or esoteric statistics, use R. If it exists, there is an R implementation, but the major Python packages really only cover the most popular techniques.
If neither of those apply, it's mostly a matter of taste which one you use, and they interact pretty well with each other anyway.
That being said - all the serious math/data people I know love both R and Python...R for the heavy math, Python for the simplicity, glue, and organization.
The conclusions state what we already know: Python is object-oriented; R is functional.
The Last Word appropriately tells us your opinion that Python is stronger in more areas.
What features or workflow does R or Pandas/NumPy offer to manufacturing that Minitab & JMP can't?
I've grown to appreciate R, especially its plotting ability (ggplot).
The "weekend hack" that was Python, a philosophy carried into 2.x, made it a supremely pragmatic language, which data scientists love. They want to think about algorithms and maths. The language must not get in the way.
R wants to get things done, and is vectors-first. Vectors are what big data is typically all about (if not matrices and tensors). It's an order of magnitude higher dimensionality in the default, canonical data structure. Applies and indexing in R, vector-wise, feel natural. NumPy makes a good effort, but must still operate in the scalar/OO world of its host language, and inconsistencies inevitably creep in, even in Pandas.
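As a rough illustration of that vector-first style on the NumPy side (made-up numbers, just a sketch):

    import numpy as np

    prices = np.array([10.0, 12.5, 9.0, 11.0])
    qty = np.array([3, 1, 4, 2])

    # Whole-vector arithmetic and boolean indexing, no explicit loop:
    revenue = prices * qty
    print(revenue.sum())          # total over the vector
    print(revenue[revenue > 20])  # filter by a vectorised predicate

In R this is simply the default mode of the language (prices * qty just works on vectors), which is the point being made here.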
As a final point, I'll suggest that R is much closer to the vectorised future, and that even if it is tragically slow, it will train your mind in the first steps towards "thinking parallel".
To me, R was a waste of time, and I really don't understand why it's so popular in academia. If you already have some programming knowledge, go with Python + SciPy instead.
EDIT: R is even more useless without RStudio, http://www.rstudio.com/. And NO, don't go build a website in R!
But no HTTP/2, so it won't get in front of my nginx instances, yet ;)
Otherwise I love HAProxy!
Squid remains the only one that can deal with SSL proxying (yes, it's kind of MITM, but it's needed sometimes), and it's also the real "pure" open source one. HAProxy might be a better fit for enterprises that need support?
Every side project I have ever attempted has taken me ages and never got anywhere. Saying that things like localisation need to be done before your MVP is ready is nuts.
I'm working on a side project right now. I've tried to learn all my lessons from other attempts. It's just a basic website (landing page). Between building that, trying to sort out the product, making a plan, etc., there is barely any time left... I have a full-time 9-6 job and a two-hour (total) commute. There's hardly any time left at the end of the day; squeezing out an hour's work each day is hard enough. Localisation would never be on my list.
Having the time to check each one of these points is a luxury most side projects can't afford.
First, it's basically a list of minor nit-picky random things rather than a thoughtful list of essentials. How is "Pages don't refresh automatically" on the list, but "Buy an SSL certificate" is not?
Second, many of these things would be total waste of time pre-launch. If you have zero people buying your product and you spend your time perfecting "Currency, language, country specific deals, taxes" then your perspective is way out of whack.
If I were making a similar list, it'd be something like:
1. Does the product do something useful yet?
As soon as the answer is "yes" then you should launch. That's it.
I've been struggling with making things 'look clickable' on my website (or maybe everything is fine and it's in my head). To solve this, I've been trying to define what exactly makes a user want to click something. The most common methods are the blue link color, underlining, or a 'more/click me' button. I don't want to do any of those - making someone want to click content is more desirable than telling them to.
In researching other sites and monitoring my own behavior, I noticed that I always want to click images. This probably has to do with being a long-time image-board browser. But the content I want to be clicked doesn't have any images associated with it. I've been trying to work in glyphicons that indicate a link, but it messes with the aesthetics of the content.
To go a bit off-topic, I actually tried to follow the "don't use color alone to provide information" for the static diff on diff.so/about, but I couldn't find a way to get ChromeVox to read text on one side of the diff any differently than text on the other. I'd be really interested in any advice on that - it seems like some screenreader users could use something a little more user-friendly than emacs.
That's a move that seems like it may push Gitlab ahead of GitHub in some ways (well, to me at least).
watt balance: https://www.youtube.com/watch?v=VlJSwb4i_uQ
And the conundrum was that they still needed a precise enough measurement of that constant, because it's an experimental measurement.
"They never found the cause for the disagreement, but in late 2014 the NIST team achieved a match with the other two"
at a time when this story, also from Nature, is on the front page: https://news.ycombinator.com/item?id=10383984
Speaking as someone who sometimes puts his back out and spends a few days hobbling slowly about: I shouldn't have to quickly get out of someone's way if I see them coming, and they shouldn't be allowed on pavements.
The price is prohibitive, but the market now has something like 7 or 8 brands competing, so I expect the price will come down in the next year or so.
Here in Toronto, I've seen marketing teams of people handing out this device and rolling around Yonge-Dundas Square (sometimes cast as Times Square in feature films) in a guerrilla marketing push.
It doesn't mean they're not cool and fun, but the breathless articles about how cool, hip, and trendy people are "suddenly using the device" are often coordinated.
It is fun, but you have to be very careful about stepping on. It is one of those things where the more you panic, the more something goes wrong as you overcompensate. I fell when I tried to step onto one on concrete: it was way faster than I was used to, so I overcompensated for the pressure my foot was causing and landed hard around my tailbone. And by that point I was pretty experienced at getting on (just not on concrete).
Snowboarders seem to have a pretty easy time with it. But again, getting on without help is probably the most difficult step.
Also I can't wait to get some more traction in my side project so I can get my "recreational personal flight device" into a prototype and testing stage. Think of it as akin to JetMan, version 2.0, and not needing a helicopter.
Seriously, the way people whine and whinge about them, you'd think they never did anything unless it was for some grimly healthy purpose.
As it generally happens in software/hardware patents, the claimed solution seems quite obvious whenever one wants to solve that particular problem, and the hard part is the "execution", i.e. implementing it efficiently and figuring out whether the tradeoffs are worth it.
So assigning patents to things like this seems really dumb.
The patent in question pertains to an optimization of what these days you'd call "memory disambiguation." In a processor executing instructions out of order, data dependencies can be known or ambiguous. A known data dependency is, for example, summing the results of two previous instructions that themselves each compute the product of two values. An ambiguous data dependency is usually a memory read after a memory write. The processor usually does not know the address of the store until it executes the store. So it can't tell whether a subsequent load must wait behind the store (if it reads from the same address), or can safely be moved ahead of it (if it reads from a different address).
If you have the appropriate machinery, you can speculatively execute that later load instruction. But you need some mechanism to ensure that if you guess wrong--that subsequent load really does read from the same address as the earlier store--you can roll back the pipeline and re-execute things in the correct order.
But flushing that work and replaying is slow. If you've got a dependent store-load pair, you want to avoid the situation where misspeculation causes you to have to flush and replay every time. The insight of the patent is that these dependent store-load pairs have temporal locality. Using a small table, you can avoid most misspeculations by tracking these pairs in the table and not speculating the subsequent load if you get a table hit. That specific use of a prediction table is what is claimed by the patent.
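A toy software model of that mechanism, only to make it concrete; real hardware uses small associative structures, and this is in no way the patented implementation:

    # Sketch: a tiny store-load dependence predictor. After a load
    # misspeculates against an earlier store, remember its PC; loads
    # that hit in the table are held back instead of speculated.
    TABLE_SIZE = 16
    table = {}  # load PC -> seen-a-conflict flag

    def should_speculate(load_pc):
        # A table hit means this load has conflicted before: don't hoist it.
        return load_pc not in table

    def record_misspeculation(load_pc):
        if len(table) >= TABLE_SIZE:
            table.pop(next(iter(table)))  # crude FIFO-style eviction
        table[load_pc] = True

    record_misspeculation(0x40A0)    # one costly flush trains the table
    print(should_speculate(0x40A0))  # False: wait for earlier stores to resolve
    print(should_speculate(0x40B4))  # True: no history, speculate ahead

Because the pairs have temporal locality, even a small table like this catches most of the repeat offenders.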
Maybe this is worth a patent, or maybe not. For what it's worth, I don't think anybody was doing memory disambiguation at all in 1996. Intel was one of the first (maybe the first) to do so commercially in the mid-2000s. Apple's Cyclone architecture also does it, and I think it was the first in the low-power SoC space to do it.
So it's a university [mainly] funded by the taxpayer. How can it be that the research of this university isn't in the public domain? The public paid for it; the public should reap the benefits without paying again.
Sure, Apple tries their hardest not to pay taxes, but the patent isn't limited to them.
In particular, given how much industry funds them, collaborates with their professors, etc, what is going on now is a remarkably stupid approach mostly driven by tech transfer offices that want to prove their value.
Which will be "zero", once the tech industry starts cutting them off.
I mean, that'd be funny, right? Teaching students something that you patented, waiting a few years for them to go into industry and apply what they learned, then suing them for it.
Why doesn't Apple start lobbying for real patent reform?
More to the point of the article, there is also a custom messaging channel: you can create your own interactive experience (the receiver is just a website displayed in a Chrome tab). Here's an example of tic-tac-toe: https://github.com/googlecast/Cast-TicTacToe-chrome. You can already develop a custom receiver for an in-store search feature or something more interactive and playful, and you can connect to it via the same Wi-Fi. I don't understand what additional value NFC provides in this case, as you'd still need to maintain an open/persistent connection to the receiver from the sender; would NFC be able to provide that?
Btw, Google has a series of APIs called Nearby (https://developers.google.com/nearby/) that are all about connecting to nearby devices but NFC doesn't seem the right answer here.
It has a Marvell Avastar 88W8887 chip, which has NFC built in, as well as FM radio: http://www.marvell.com/wireless/88W8887/
I said this should be used for "Tap and Play"
I have a Chromecast and I Don't Get It. I'll be honest, I haven't tried searching for cool stuff to do with it. Sometimes I stream Netflix or Youtube from my phone. I found a way to stream MKV videos from my computer using Chrome. But that's it, nothing there is cool or revolutionary.
So, what's some cool stuff I can do with my Chromecast? I recognize there's something neat going on here, but it seems so locked down that I can't figure out what it is. Can I write arbitrary apps for it somehow? Is there a cool collection of apps that do... something?
Help me out. What's cool about Chromecast?
The Chromecast can display web apps (Chromecast apps), and this could transform any TV into an interactive screen. Imagine you have a store: you could create an app which displays the new products. But instead of having to type the URL on your phone to get more info, the web app could use the Chromecast API to broadcast a URL via NFC.
And for more advanced stuff, it could implement something like Liwe. It's a service for using smartphones as remotes for web apps (>> liwe.co).
The problem right now is that the Chromecast is only known for broadcasting video and audio, while it can do more than that.
It seemed to imply that mnesia is the DB of the future as soon as everyone realises that everything they are doing is completely wrong and they should be doing things that are more suited to mnesia. Without saying what those things are.
I actually found one of the child comments was pushing in a better direction. Essentially, the vast drop in $/TB of storage means that persistence of time-series / event-type data is practical for the masses now. Sure, it's found a niche in ads on the web, but it has much wider applicability than that. I personally think that Erlang is particularly well suited to this space.
My guess is that if somehow Erlang was where it was in 2015 except it didn't have Mnesia, nobody would really perceive much of a hole there, and nobody would write it, because of the database explosion we've seen in the past 10 years. But it is there, and if it works for you, go for it.
My only slight suggestion is that rather than inlining all your mnesia calls, you ought to isolate them into a separate module or modules or something with an interface. But, that's not really because of Mnesia... I tend to recommend that anyhow. In pattern terms, I pretty much always recommend wrapping a Facade around your data store access, if for no other reason than how easy it makes your testing if you can drop in alternate implementations. And then, if mnesia... no, wait... And then, if $DATABASE turns out to be unsuitable, you're not stuck up a creek without a paddle. With this approach it's not even all that hard to try out multiple alternatives before settling on something.
mnesia is a database for the 90's because it was written by smart people in the 80's and, like most of the rest of the OTP stack, was fairly underused and undermaintained.
I have a huge amount of respect for Klacke and the original authors behind a lot of this tech; however, the Erlang community that followed seems to suffer some cognitive dissonance around what problems it solves and how well it solves them. It would be hard to pick a database less suitable for SMB use than a domain-specific database in a niche ecosystem.
That comment really packs a punch and should get much wider visibility. Ad tech and related software is where way too much of our collective efforts are going.
Agent -> ets -> dets -> mnesia -> riak (or sql tooling etc.)
(Agent http://elixir-lang.org/docs/v1.1/elixir/Agent.html is just a state-holding process. Erlang folks can probably write one of these in their sleep, Elixir added a bit of wrapping-paper around it.)
If you're writing an app, I think it's best to be storage-agnostic from the get-go. You shouldn't be building up queries in your core app code; push it to the edge of your code, because otherwise it's not separating concerns. All your app (business logic) code should delegate to some wrapper to work out the specifics of retrieving the data; your app code should just be calling something like Modelname.specific_function_returning_specific_dataset(relevant_identifier) and let that work out the details. That way, if you ever upgrade your store, you just have to refactor those queries, but your app code remains the same. On top of that, in your unit tests you can pass in a mimicking test double for your store to do a true unit test, and avoid retesting your store over and over again wastefully. (You'd still of course have an integration test to cover that, but it wouldn't be doing it on every test.)
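A minimal sketch of that shape (Python here for brevity; all names invented):

    # App code sees only this narrow interface...
    class UserStore:
        def __init__(self, backend):
            self._backend = backend

        def active_users_in_region(self, region):
            return self._backend.query_users(region=region, active=True)

    # ...so the backend can be swapped: mnesia, SQL, or a test double.
    class InMemoryBackend:
        def __init__(self, rows):
            self._rows = rows

        def query_users(self, region, active):
            return [r for r in self._rows
                    if r["region"] == region and r["active"] == active]

    store = UserStore(InMemoryBackend([
        {"name": "ann", "region": "eu", "active": True},
        {"name": "bob", "region": "us", "active": True},
    ]))
    print(store.active_users_in_region("eu"))  # unit test, no real DB

Swapping in a real backend later means implementing query_users against your store of choice and touching nothing else.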
But the answer is a much broader evaluation of the utility of the tools we are using, relative to what we use them for.
And some rants that I share: "but really boil down to adtech, adtech, adtech, adtech, and some more adtech, and marketing campaigns about campaigns about adtech."
However, there is one thing that mnesia got absolutely and totally right: database schema upgrades. You can create an mnesia database and upgrade its schema on the fly, as part of its operation, without once bringing it down or running a script. I did this for a toy project in Erlang once that I unfortunately never finished, since the need for it disappeared.
The Soviets also used searchlights to dazzle enemies during attacks - particularly the attack on the Seelow Heights:
Perhaps it's my impression, but I've rarely seen anything constructive recently. Almost everything in this discussion, for example, is reaction and sniping ("useless", "absurd", "stupid", etc.). It's a way to hang out and socialize online; there's nothing wrong with it. But personally, I've read enough Internet sniping for a lifetime; it's not thoughtful, informative, insightful or constructive; I don't learn anything and leave uninspired.
Perhaps it's just my impression or it's temporary; perhaps it's a bigger change (related to YC and its leadership distancing themselves from HN?). Is there anywhere online where the sniping is eliminated and the discussion more valuable?
EDIT: Sorry, I know it's off-topic, but there's no other place to post it (that I know of).
So any bicycle- or pedestrian-friendly environment needs weather protection as part of its design. It does not need to be fully enclosed, but that type of protection may be required depending on climate. Perhaps a convertible system where panels retract?
So it's a nice thought experiment but not at all practical. On top of that, 'bike lanes in apartment building hallways' make you wonder just how much experience the designer has riding bicycles. You park your bike at the interface between inside and outside, and you don't run around the apartment hallways on a bicycle because of (1) pedestrians, (2) playing kids, (3) the fact that you now have to elevate your bicycle every time you want to go in or out of your house, and (4) storing your bike at street level is simply much more practical.
Anyone who has ever actually had a bike for more than a week and who actually shops for their own groceries will know this is a terrible idea.
The e-commerce business is really challenging, and we feel like with this online-offline equation we've really unlocked something that can scale,
I have a long vision for the company, one that could take decades to unfold, and I didn't think that my running the company day-to-day was necessarily optimal to getting there.
So after 8 years unable to scale, they now want to scale by building brick-and-mortar stores where you can try a product that you can only buy online?
Am I missing something here?
JSON has become the go-to schemaless format between languages, despite being verbose and having problems with typing. Transit aims to be a general-purpose successor to JSON here.
Stronger typing allows the optional use of a more compact binary format (using MessagePack). Otherwise it too uses JSON on the wire.
Anyone who knows more, please correct me.
I'm more partial to the way Avro does it, where the encoded JSON remains type-tag and cruft free, and a separate schema (also JSON) is used (and required) to interpret the types, or encode to the correct binary encoding.
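Roughly, the contrast between the two approaches looks like this; note this is a hand-rolled caricature, not the actual Transit or Avro wire formats:

    import json
    import datetime

    # Inline type tags (the Transit-ish way): values carry their own hints.
    tagged = json.dumps({"when": "~t2015-10-16T12:00:00Z", "count": 3})

    def decode_tagged(doc):
        out = {}
        for key, val in json.loads(doc).items():
            if isinstance(val, str) and val.startswith("~t"):
                out[key] = datetime.datetime.strptime(
                    val[2:], "%Y-%m-%dT%H:%M:%SZ")
            else:
                out[key] = val
        return out

    # Schema on the side (the Avro-ish way): the JSON stays plain, and a
    # separate schema tells the reader how to interpret each field.
    plain = json.dumps({"when": "2015-10-16T12:00:00Z", "count": 3})
    schema = {"when": "timestamp", "count": "int"}

    print(decode_tagged(tagged))

The trade-off in a nutshell: tags make every document self-describing but add cruft to the payload; a side schema keeps the payload clean but must be distributed to, and required reading for, every consumer.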
It's worked great for updates, and using Transit to keep the transmissions minimal has let us focus on the API for a realtime system.
 - https://github.com/ptaoussanis/sente#sente-channel-sockets-f...
Doing client-side programming with things like CLJS, Figwheel, Reagent and core.async feels miles ahead of what we have in modern-js-land (ES6/7, Babel, webpack, React, promises).
If you were to start a startup today, would you be comfortable going with something like Clojure/script?
The extension mechanism is writing handlers in all the languages communicated with, since its stated purpose is cross-language value conveyance.
In contrast, a schema language allows extensions to be described once, in one language.
I was expecting this to be a sort of macros for data notation (an inline schema language), but it seems more like an extensible serialization library.
Such as? Article is a bit short on facts here.
> At the same time, webmasters weren't keen to begin the migration process to HTTPS because of that pesky mixed content warning, which had a tendency to spook less-experienced users of the Information Superhighway.
And rightfully so! There's no difference between mixed content and HTTP only for the purposes of data security. Just yesterday I noticed that a payments website had mixed content issues and elected not to risk my personal info. This change is even better because now you really can tell your family to "just look for the lock icon".
Or would they have to get their jammer close enough to be in range of the ship's guns (and presumably its loud radio broadcasts would give away its position)?
I wonder more why it was dropped in the first place...
Another side effect is that it's a filter on people who can't handle math or can't handle complicated written procedures. It's unclear how important that is, but it is clear it's a very good filter if you value those skills for other reasons.
With respect to situational awareness, it's very easy to own a GPS while confusing that attribute of ownership with understanding where you are, but it's very hard to mentally do celestial nav without understanding where you are and what the present sea and weather conditions are. Also, there is a difference between merely owning a (possibly GPS-accurate) clock or chronometer and understanding what time it is. "I own things that could provide an accurate 4-dimensional situation, were I to actually understand the outputs" is a lot different from "I can perform extensive labor and calculations with the result of being deeply aware of my 4-dimensional situation"
A Polish minister goes to France to meet his counterpart. They meet in an amazing office, and he asks his French colleague: "How did you get the money to build this?" "Can you see the bridge outside the window?" "Yes." "500 mln on paper, built it for 250. Voilà."
After a year they meet again in Poland, in an even bigger and more magnificent building. The French minister asks: "How did you get the money?" "Can you see the bridge outside the window?" "No." "Voilà."
Please stop this ride.
Note that the bridge isn't unsafe; the modern span is much, much safer than the old WW2-era span. The new bridge won't collapse in an earthquake. However, because of design failures and shoddy construction, it won't last nearly as long as planned.
One thing I see time and again, in projects I've either been involved with directly, or indirectly, is a failure to do any substantive geo-technical assessment in the early planning phases. Not understanding your environmental invariants properly (when they are usually diverse due to the physical scale of these projects) always comes back to bite you later on. It's usually (eventually) a critical path item on any schedule since the structural design is so dependent on them.
"In April 2006, a consortium involving American Bridge and Fluor won the tower contract. It was built in China to save moneya decision that carried its own costs when inspectors later found poor welding and busted bolts at key points that required fixing. Frick says the current $6.5 billion total is a rough estimate, and that it doesnt include interest or financing costs."
A mistake most larger EPCMs made in those days. The horror stories regarding the quality of Chinese steel and fabrication back then are very real. It is unthinkable now to let any fabrication of that nature happen there without adequate on-the-floor supervision and oversight.
This industry is so old, yet its innovation lags far behind. Technology (lame word?) should be speeding things up and making things more accurate and less error-prone, but instead it seems to be delaying the time it takes for work to get completed.
Some promise is held in Information Modeling, like BIM or CIM, but they are not trickling into the industry at the pace required. And further, many present-day engineers are not in a position to understand this stuff.
Oh, you want the chocolate sauce on top? That will be an extra few billion dollars, please.
If the first estimate had been $4 billion (assuming we're taking $6.5 billion in 2015 dollars and working back to 1998 dollars), the project never would have gotten off the ground. The government would have said "fuck no" and asked for another bid. It would have been mired in discussions, argument, etc for years before eventually settling on a $1 billion price tag -- that will eventually balloon to $7 billion or so anyway, because the winning bidder intentionally underbid because it was the only way it would get approved.
The only way to build large public projects like this is to take advantage of the sunk cost fallacy (or "bait and switch".) Government contractors will get their cut, and the regulatory tack-ons added by local governments to put their stamp on it (and get some operating budget!) also add money.
When the people who are not knowledgeable about the actual details of the project require their expectations to be met, other expectations of theirs will not be met (the classic "fast, cheap, or good, choose two" joke). I like to say to people making unreasonable demands "Do you want to be disappointed now, or later?"
It would be hard to stick to such a pledge, of course.
A $6 billion overrun could have fed, clothed, and health-insured millions of people.
So even if data protection rules were perfectly adequate in every single country on this planet, there would still be justified concern about transferring data across borders.
That's a situation that must change, and it can change without taking away the bowl of sweets from security agencies altogether (which will never happen).
There's no new policy and no court orders to do particular things. What's likely to happen is an extensive legal limbo. We may even end up with a special Snowden version of the cookie warning: "Data stored on this system is subject to mass surveillance and may be accessed by the security services without a warrant or due process".
Oh good, I was worried a little about that one.
> Undoubtedly (as the CJEU accepted) national security interests are legitimate, but in the context of defining adequacy, they do not justify mass surveillance or insufficient safeguards.
Another good thing. I wasn't sure if this ruling affects spy agencies, too, or just companies.
It claims that another star having a close encounter with KIC 8462852 (the star discussed in the article) and stirring up its comet cloud "would be an extraordinary coincidence". There is, in fact, evidence that such an encounter happened in our own solar system, "only a few millennia before humans developed the tech to loft a telescope into space." Calculating the speed and trajectory of a particular star, astronomers found that it would have crossed within the radius of the Oort Cloud approximately 70,000 years ago, producing exactly the scenario that "would be an extraordinary coincidence."
Additionally, astronomers have checked stars in the galaxy for the possibility of a close encounter with our solar system, and they find dozens of candidates that will come close to our solar system, sometimes within the radius of the Oort cloud, within the next million years (some as soon as 240,000-470,000 years from now).
The idea that a passing star stirred up the comet cloud of KIC 8462852 should not be dismissed as a coincidence, especially not to give leeway to discussing the potential for intelligence to build megastructures, when we can see that such a coincidence is not even that rare for our own solar system.
"Citizen scientists catch cloud of comets orbiting distant star"
The paper. Basically, Kepler, or to be more precise the Planet Hunters crowdsourcing effort, found a star with a rather strange light curve, and the Atlantic jumped the gun and babbles about aliens.
After all, this light pattern doesn't show up anywhere else, across 150,000 stars. We know that something strange is going on out there.
It would also be an extraordinary coincidence if we found another planet with life on it so quickly after humans started to look for one. 150,000 stars is a narrow band of the universe, cosmically speaking.
Another cool citizen science project was the observations of the epsilon Aurigae transit. https://en.wikipedia.org/wiki/Epsilon_Aurigae
We should be able to pick up that high-energy stream with current technology, but we'd first need to guess at which frequency it transmits (if it's there).
edit: OK, scratch that, I stand corrected; I was still thinking in earthling terms.
Right now, the field of neural networks seems like a maze. It is too easy to get lost, or to settle on the wrong, suboptimal solution.
I have now started using supervisor for deploying Sidekiq, but I would have preferred a Foreman-like tool (so that development is also nice and simple).
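For reference, the supervisor program entry for something like Sidekiq ends up being only a few lines; the path here is a placeholder, adjust to taste:

    [program:sidekiq]
    ; hypothetical app checkout path
    directory=/srv/myapp
    command=bundle exec sidekiq -e production
    autostart=true
    autorestart=true
    ; Sidekiq shuts down cleanly on TERM; give in-flight jobs time to finish
    stopsignal=TERM
    stopwaitsecs=60

It works fine; the complaint above is just that a Procfile-style tool would let the same definition drive both development and production.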
What's the largest predator in Britain? The badger? The fox? Or that housecat everyone thought was a lion?
Talk to anyone in the Pacific Northwest. If you take only the slightest precautions, you have nothing to fear from the wolves, cougars, and bears. You are far more likely to be eaten by a fellow human. You are more likely to be killed by deer, and they are already all over Britain. So the wolves will in all probability reduce the number of animal-related deaths.
There are some good discussions and great links to related content including the TED talk on 'rewilding' by George Monbiot who narrates the How Wolves Change Rivers video.
On the other hand, they are eating all the woods, starting with the saplings, which is causing real harm to the sustainability of forests.
Predictably, the notion of culling some is very controversial, especially among nature-loving people. But the alternative is bringing back the wolves, and the wolves will do lots of wolfy things like killing dogs and eating livestock, and will be equally controversial.
There was another example I read recently: the Indian Vulture Crisis. Apparently the vulture population in India has been declining dramatically. I wouldn't have thought vultures were particularly good, but their declining population has led to all sorts of significant issues such as an explosion in the number of wild dogs and the spread of disease. It has been traced to the administration of an anti-inflammatory called diclofenac to livestock.
Nature has many complex interactions.
The century following the Cromwellian conquest saw a bounty-led drive to exterminate wolves with the last one being killed in 1786.
The first time? Maybe. It quickly becomes distracting, annoying, and (depending on the distance) frightening.
The only decent argument I can find for reintroducing wolves is that it would help keep wild deer in check. But the costs of wolves are far higher than the costs of too many deer. Deer don't kill livestock or humans. And of course, wolves aren't the only solution to reducing the deer population. They can be culled in other ways. The whole thing seems like a non-starter to me.
I think most who are in favor of reintroducing wolves are just infatuated with charismatic megafauna. "Wolves look cool and they used to be on the island, so let's bring 'em back." or something like that. Then they rationalize their conclusion with arguments about tourism and culling deer.
What if instead of wolves, it was crocodiles that had been eradicated from Scotland? I seriously doubt there would be as many supporters, yet the same arguments for reintroduction apply.
It's so efficient that it's hard to grok at first. And 'normal' COBOL syntax works. And it comes with a web framework. If anyone on HN is sitting on an aging COBOL project/app, you could do worse than to chat to Zortec in Tennessee.
Funny thing, I worked on a project with a mainframe division, that replaced a crusty old COBOL-based mainframe app with... an RPG mainframe app. In the 2010s. Ostensibly because we could leverage existing knowledge from other divisions under the same parent company, but really because of politics, as these things normally are.
What a vivisection of nasty things!
Anyway, echoing andrewray, what's the point of this? It seems no easier than just working with SVG directly. It's considerably less powerful than e.g. using D3.
Whilst it's heartening to see that these top businesses apparently didn't waste the cash, it's not especially surprising to see that enough free cash to pay 3 workers' entire salaries for 3 years significantly boosted the chances of the business surviving over that period.
I also think it was quite interesting how they had a preference for existing businesses from a commitment point of view (and probably to prevent corruption) and how they now think this was wrong.
It will be interesting to see the results from Phases 2-4.
> Even under some very tolerant assumptions, the expected payoff from playing on, for either player, was greater than the expected payoff from accepting the repetition.
Payoff, sure. But it's well-known that the marginal utility of money is not linear, which means people tend to value money differently based on how much of it they have (poor people value a dollar more than rich people). This indirectly means some level of risk aversion is actually an optimal choice. Turning down a gamble, even if the expected payout is positive, can be rational.
This is well studied in economic theory: https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenster...
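A quick toy computation (numbers invented) showing how a concave utility function makes declining a positive-EV gamble rational:

    import math

    # Sure thing: keep $10,000. Gamble: 50/50 between $4,000 and $17,000.
    p, lo, hi = 0.5, 4_000, 17_000
    sure = 10_000

    ev_gamble = p * lo + p * hi                      # 10,500 > 10,000
    eu_gamble = p * math.log(lo) + p * math.log(hi)  # ~9.02
    eu_sure = math.log(sure)                         # ~9.21

    print(ev_gamble, eu_gamble, eu_sure)
    # Expected value favors the gamble, but log (diminishing) utility
    # favors the sure amount, so declining maximizes expected utility.

That's exactly the von Neumann-Morgenstern picture: maximize expected utility, not expected payoff.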
It should be possible to get a temporary restraining order against the city in cases like this within days of the first contested case. It should be easy to demonstrate there is no imminent harm in telling the city "you have to stop doing this until we decide whether it's OK or not"; quite the opposite: cars are an essential and significant asset, and this policy placed a potentially massive burden on the citizens it affected.
In one of the examples, by the time the victim prevailed against the illegal seizure backed by zero evidence or investigation of any kind, they had already sold off his car, and offered nothing in return. A pretty large part of the population doesn't have a spare $2,000 in cash to get their own car back while the city makes them prove in front of a Kangaroo Court that they were driving their own family to the airport... Missing from the article -- is there any hope of any kind of restitution? Can the victims now pursue a civil case against the city?
City DA: No they're not.
What's your recourse here? Call the FBI or federal prosecutor and report an organized crime syndicate being run by corrupt law enforcement professionals? Because... isn't that what this is?
Is there any onus, or even incentive, for them to listen and investigate? Is the only way to redress the problems a civil lawsuit against the City citing Bivens and various appellate court principles like malicious prosecution? Because grand theft auto, extortion, racketeering, and fabrication of evidence / perjury are not civil offenses, and conservative readings of the concept of 'standing', as I understand it, make it rather difficult to challenge the authors of a failed / withdrawn prosecution in order to get at the legal principles which triggered it.
Concepts like this one, as well as things like civil asset forfeiture, are so clearly in direct violation of the Constitution that at some point, it's not legitimate to shelter enforcers under cover of "just following orders". We still have laws (Constitutional and common), and Peabody, Minnesota doesn't have the right to do things like put all the gay residents to death by legislative fiat & judicial compliance; If you found this occurring, you wouldn't need to file a lawsuit alleging that a constitutional overreach has been committed and demanding merely that the policy cease to be in effect. Instead, you would get some overriding authority, like the state police or the FBI, to run in with SWAT teams and arrest and prosecute every last person peripherally attached to the Peabody legislature or judiciary or law enforcement. For murder.
No amount of 'adopting selective prosecution based on what we can win, since the courts recognized a valid affirmative defence' or 'changing training programs to be more in line with civil rights' or 'firing/reprimanding the officers involved and settling a civil suit' makes killing the gay population of Peabody less of a crime, and no amount of lawsuit would be required to get that recognized.
Does that apply to civil forfeiture as well? Sounds like it should.
If I copy & paste the link into a new tab, it works for me.
They should take a page out of London's book and allow minicabs to operate.
I tried checking the "Warn me when websites try to redirect or reload the page" box in Firefox, but it doesn't appear to be stopping it.
Presumably too many people are starting to use things like "Google Sent Me".