If you don't know, John Mackey, the CEO / founder, is a major believer in conscious capitalism and of empowering his employees.
Whole Foods employees get paid pretty darn well, with some crazy good benefits for their industry and line of work (UNION FREE most of the time, too!).
WF banks on them being true believers in and motivators of the cause - including dedicating a fair amount of paid time to trainings. I've heard mixed stories about how Amazon treats employees. I wonder how that will mesh.
So I guess I'm asking:
* What is going to happen with employee culture?
* What is going to happen with all the "Fair Trade" deals WF has in place that might not be the most economical decision now?
* Here comes store automation and hefty lay-offs?
Source: Worked at WF for 3 years
"Amazon did not just buy Whole Foods grocery stores. It bought 431 upper-income, prime-location distribution nodes for everything it does."
Groceries are one of the few large markets that require some proximity to customers due to costs and spoilage. Each grocery store is a type of mini-distribution center for grocery products.
Shipt and Instacart have succeeded to date because they use existing distribution channels and set up marketplaces for the "last mile" of delivery. This is in contrast to Webvan in the early 2000s, which tried to do grocery delivery by building its own distribution and failed spectacularly.
Amazon has become an expert in distribution and logistics. But it is clear that using their current model doesn't generally work with groceries (RIP Webvan, 1998-2001). Bananas need to be treated much differently than books.
So what does Amazon do? Buy Whole Foods! A moderately sized grocery chain with a significant national footprint and lots of higher-income customers.
Now they instantly have a pre-built distribution channel that is already optimized for the grocery business (which again is much different than non-perishable consumer goods etc).
Things definitely just got interesting in this space!! I still believe that Instacart and Shipt can succeed, but they need to maintain a laser focus on making their shoppers and customers happy! And grow as fast as possible while Amazon digests Whole Foods!
[Note: I was the early CTO for Shipt responsible for building their grocery delivery platform and initial engineering team. Go Shipt!]
If this is about Amazon thinking they can turn things around for Whole Foods, then it will certainly mean drastic changes to price, selection, and employee structure.
If this is about Amazon using a brick-and-mortar chain as a tool to help Amazon's own ventures (e.g. grocery delivery, local storage for same-day deliveries, product return and support locations, etc.), then it will certainly mean drastic changes to what a Whole Foods store even is.
Either way, I can't imagine a course in which Whole Foods as we know it isn't basically over. Which doesn't necessarily bother me (I migrated to Trader Joe's and similar competitors long ago), but does seem like a big deal in the grand scheme of things. The grocery industry was heading in a Walmart-ish direction... and Whole Foods was almost single-handedly responsible for bringing a counterculture into the mainstream, and forcing all the other chains to reverse course and up their game.
Frankly I don't buy as much from WF anymore considering that what they carry is much more like "organic junk-food" than actual food.
If you really want to support the "cause", find yourself a local farmer or CSA to buy from and support them directly.
Alexa: "Buying Whole Foods"
The whole premise [of online grocery] is that you're saving people a trip to the store, but people actually like going to the store to buy groceries.
A bunch of smart people at Amazon have been thinking about re-imagining the next phase of physical retail. They want more share of the wallet, and habitual, frequent use of Amazon for groceries is the ultimate goal.
"Long term, a stronger grocery business could position Amazon to become a wholesale food-distribution business serving supermarkets, convenience stores, restaurants, hotels, hospitals and schools. "
"A group of Amazon executives met late last year to discuss the disadvantage Amazon faced compared with grocery competitors such as Wal-Mart and Kroger because of its lack of physical stores and customer apprehension about buying fresh foods online. They decided they needed something more to jump-start Amazon's grocery push beyond plans already under way for the Amazon Go convenience store, modeled for urban areas, and drive-in grocery pick-up stations suited for the suburbs."
Not saying it won't be a game-changer, but it is fascinating to watch people extrapolate this out to the extremes as soon as the news hits the ticker.
To put it another way- if it were that obvious that buying Whole Foods was going to make Amazon the dominant grocery seller, why weren't people predicting that they would do it all along and asking what they were waiting for? It isn't until Amazon acts that people say "Oh yeah, that was the right move."
I watched this video a while ago about what Amazon Go is a precursor for. In summary, if AWS is renting out server infrastructure for people, then you can imagine that Amazon can make the infrastructure to lease out Amazon Go to other stores. They first integrate it with Whole Foods, then as customers expect no more checkout areas, they will only go to Whole Foods because it's faster and maybe cheaper. Then because customers want it everywhere, Amazon could force other stores like Walmart and Target to integrate Amazon Go infrastructure to their stores, without Amazon directly competing with these stores, and make a ton of money leasing it out without spending money on building whole new stores. Of course, the acquisition could also be like nodes for warehousing and delivery, but both avenues are not mutually exclusive.
Amazon is known to work backward from an imagined future press release, and then do the actions necessary to make that press release. How does Amazon see the future in this case?
-Amazon already has PrimeNow and AmazonFresh, which offer a great grocery delivery service. For those who have tried these services, it's easy to see how addictive they are versus going to a physical store.
-I can't see Amazon using existing retail stores as distribution centers. I would think you really only need one grocery distribution center for each city in America, and PrimeNow (and AmazonFresh) already has that! Or, has Amazon determined that picking/packing from a retail store is actually efficient? Retail as a DC seems tough to automate, items are in the wrong spot, suspiciously missing, etc. I don't get it.
-I would have guessed that online grocery from highly automated distribution centers is where the majority of the market would be within ~20 years. Does Amazon, the king of online, not think that!?
-Or does Amazon just believe that they can run Whole Foods better than it is currently run?
-Do they just want the purchasing, existing relationships, etc to also sell their customers through other channels?
As a German I'm amazed by the "food haul" videos on YouTube and the positive feedback for ALDI US (belongs to ALDI Süd), Trader Joe's (ALDI Nord) and Lidl (part of Schwarz Gruppe, they just started in the US yesterday).
Amazon saw it and responded quickly.
Amazon is a cut throat profit making machine at the expense of human exploitation.
Whole Foods felt like the little guy who was trying to do things differently from mainstream supermarkets.
Whole Foods has millions of customers. Amazon will surely be advertising AmazonFresh, or some re-branded form of it - such as "Whole Foods Direct" - to Whole Foods customers. Do Whole Foods stores turn into AmazonFresh warehouses? Possibly, but it's unclear how the two businesses will eventually integrate. It's also possible that Instacart gets acquired by Amazon, but for the most part I see them getting screwed in this deal. Instacart's business development deal is like bringing a knife to a bomb-fight.
The other major value this deal brings to Amazon is the industry-specific knowledge that the Whole Foods team brings. As a frequent East Coast Whole Foods shopper, I am always amazed to see how much of their food comes from the West Coast and all over the world. I think the execs of AmazonFresh, who are mainly former HomeGrocer/Webvan execs, appreciate the complicated logistics of the business. Amazon will be able to combine its software engineering and logistics knowledge with Whole Foods' expertise at creating an amazing grocery-shopping experience.
Now imagine if you are Amazon and you'd like all those people going to Walmart to just use your services. If I could get my groceries delivered to my door and only need to go to stores once in a blue moon I'd be really happy.
I'm surprised the article didn't mention this.
That's $1 for every year since the Big Bang.
Since I did not see it asked: any idea if we will see some new Prime benefit at Whole Foods?
I'm not shedding any tears for Whole Foods' "culture" now that they've been bought by Amazon. They were always a sort of sham-progressive company. In the words of Portlandia, "Whole Foods is CORPORATE."
I suppose if you're going to be corporate, might as well go full Amazon. They do it very well.
> an American supermarket chain exclusively featuring foods without artificial preservatives, colors, flavors, sweeteners, and hydrogenated fats.
I was waiting for this to appear (a common grocery store with seen-as-healthy stuff) but it already exists. TIL.
People seem to think it will be bought, but this would seem to be negative news, because the price paid for Whole Foods was about half what Sprouts is trading at.
I feel this is more Mackey cashing out than anything else.
Whole Foods is about to change a lot and quickly.
It would also be helpful for Amazon to disrupt the wine and liquor sector.
How's this for an investment strategy... Long AMZN, short a basket of all other mid to large retailers & grocers.
In my area where there is a big college campus, they have been hiring for AmazonFresh devs for over a year. We have no AmazonFresh here but we do have a Whole Foods.
Mr. Bezos: "Alexa, buy me something from Whole Foods."
Alexa: "Good. Buying Whole Foods."
1) Import cheap Chinese goods,
2) avoid sales tax,
3) destroy the environment (china),
4) destroy brick and mortar stores,
5) destroy small/medium business.
... I wonder how this will play out. Is Instacart's business threatened by losing Whole Foods as a client?
What is the relationship between a centrally planned economy as for example Gosplan was trying to run in the USSR, and the presence of these huge market-like but centralized entities like Amazon and Wal-Mart inside a free-market economy? This becomes particularly interesting because Amazon apparently does not have any particular interest in turning a profit.
A central problem faced by Gosplan was the collection of high quality data about supply chains and the estimation of the utility function of consumers. I would say Amazon is in a pretty good position to do both right now.
Another question, somewhat related: at what point does it become profitable for Amazon to lobby for more redistributive taxation? This might sound paradoxical, because you would assume that Amazon represents the interests of its owners, who would probably suffer under such a taxation scheme. But shouldn't there be a point at which giving more disposable income to poor people boosts the overall income of an entity like Amazon (since Amazon sells many flat-screen TVs but not many mansions or yachts)?
What other ideology than fascism best describes government coercion of its citizens to engage in business relationships with the insurance companies to ensure their profits remain at certain levels?
Sometimes I forget just how big Amazon is.
... But it's nice to know I can get my artisanal, single-source, Fair Trade, organic, small-farm, no-GMO cucumber water with one day shipping now.
I'm building an app to help you easily email your congressperson and ask them to create legislation requiring space/tab equality. This has to stop. Please consider donating to my Patreon.
Brace yourself: I use two spaces after the end of sentences too.  I am quite the rebel.
Modern editors (e.g. Sublime Text) let you easily convert spaces to tabs (or vice versa) and change the indentation length of existing code.
* Joe likes 4-space tabs, I like 2-space tabs, and Jane is old-school with 8-space tabs.
* All goes well until someone aligns something visually, like so:
* Now it aligns perfectly on my machine, looks mostly ok on Joe's machine, and is ON MARS on Jane's machine.
Thus one-or-more of three futures happens:
* Someone implements a code re-formatter into version control
* Someone re-aligns the code, starting the process over again.
* Someone calls a meeting and demands we all switch to spaces
I generally prefer tabs because I feel that they're more egalitarian: I like 4-space indentation, but don't want to force that on everyone encountering my code. Similarly, I find 2-space indentation very hard to parse in most languages, so I don't want that affecting me if I can get away with it. While this is possible with spaces and maybe a series of Git hooks, it's trivial with tabs.
On the other hand, I always use spaces in languages like Python or Ruby where there are well-codified style standards. I also always show invisible characters on any editor which allows it, and have cleanup scripts to ensure that whitespace is standardized across any non-vendor code in the project.
Maybe most tab users don't feel this way? Maybe most aren't as careful/picky as I am? Maybe tabs are more popular with younger devs? But I feel like tabs can offer more interoperability than spaces when many coders are working on the same project when the language/community doesn't strongly specify whitespace.
I would expect there simply is a confounding factor that the author did not look at. Maybe the info is not in the data.
I can imagine that the space/tab choice is related to the "upbringing" of the developer. Maybe which language or editor they used first in their life.
Or maybe it's related to culture. For example when using IRC, tabs are usually not used to communicate. Maybe that impacts the general choice of tabs/spaces.
Or maybe more sophisticated users tend to exchange the tab key for something else:
This means you consider consequences beyond "but it works on my machine" so you're a better programmer. Ergo, higher salary.
Sure, but how many spaces? grabs popcorn
From the Wikipedia article (https://en.wikipedia.org/wiki/Tab_key):
> A common horizontal tab size of eight characters evolved, despite five characters being half an inch and the typical paragraph indentation of the time, because as a power of two it was easier to calculate in binary for the limited digital electronics available.
Why someone decided to round up to 8 instead of down to a much more sensible 4 spaces is beyond me.
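A hypothetical sketch of why a power-of-two tab size was cheap for early digital electronics: with stops every 8 columns, the next tab stop can be found with a single bitwise OR and an increment, no division required (the function name and numbers here are illustrative, not from the article).

```python
def next_tab_stop(col, tab_size=8):
    """Return the column of the next tab stop after `col`.

    Relies on tab_size being a power of two, so rounding up to a
    multiple of tab_size reduces to a bitwise OR with (tab_size - 1)
    plus one -- the kind of shortcut cheap binary hardware favored.
    """
    assert tab_size & (tab_size - 1) == 0, "tab_size must be a power of two"
    return (col | (tab_size - 1)) + 1

# Columns 0-7 all advance to column 8; column 8 advances to 16.
print([next_tab_stop(c) for c in (0, 3, 7, 8)])  # [8, 8, 8, 16]
```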
We finally compromised and we're using 3 tabs.
Only, in Go, you don't have a choice -- gofmt enforces tabs only (with spaces for alignment). So something seems odd there.
The answer, as always, is: lisp with parinfer - makes the whole debate irrelevant.
Unlike "traditional" formatters, it parses your code into a syntax tree, completely disregarding any original formatting, meaning the output is entirely consistent. It's pretty liberating to devote zero time to manual formatting, and it can make code reviews more constructive and less superficial. It is what it is, and it's pretty opinionated, based on Facebook's code style. Works great for us, enforced with a git hook.
If a tree falls in the forest, etc
Case closed! :)
Because bigger corps generally set company-wide standards on code indentation, and more often than not that prescribes spaces.
People who use spaces are just better than those who use tabs.
If you care about clean code, being orderly and organized probably extends to other areas, too, and that helps you make more money.
In my experience, developers who mostly use default settings are often unorganized and easily confused. They also know very little about the systems they are working with, because everything outside their IDE doesn't interest them.
I'd also bet that many tab users had to check what they use, because they didn't know or care.
tl;dr: Developers who change the settings are more dedicated to their job.
The Go community was on to something with gofmt (even if they did decide on tabs).
That said, in the imperfect world we live in, I always use spaces. "Indent with tabs, align with spaces" is obviously not rocket science, but it's just too opaque unless you have a strong code review process.
When I was working on my own small projects I used tabs, or tabs combined with spaces, which didn't earn me much money.
Once I started working for a big corporation, the coding standards mandated by the company meant I could only use spaces, because people couldn't be trusted to use a nice space/tab combination.
These people have a particular gusto for constantly one-upping each other with the latest good practice; whoever adopts the highest number of good practices wins. Their constant talk of the latest fad and push for the "right ways" of doing things usually puts them in positions where they end up evaluating and hiring new developers (I was interviewed just the other day by somebody who didn't ask me to design or structure any code, but rather whether I use == or ===).
Some of these are actually excellent developers nonetheless; others will drive entire teams into rewriting a perfectly working application into a completely useless mess of a thousand microservices. An endeavour that will end up on their CV anyway, helping them find another excellently paid job once it's time to migrate.
Set the tab stop to whatever your project/team style guide requires. Tabs-to-spaces is just plain stupid. Why on earth would you ignore the 0x09 character, a single character that exists for this exact reason, and replace it with multiple 0x20s in the file? Just set your tab stop to whatever you like to look at, be it 2, 4, or 42 characters. By the way, the default is 4 characters in most editors/IDEs.
vim: :set tabstop=4
vscode: add the following to your settings: "editor.tabSize": 4
Everyone else's comments are moot!
In a perfect world you'd use tabs for semantic indentation and spaces for stylistic indentation but this is too hard to implement in 100+ person teams and also can't be automated via an IDE style sheet.
I'm not a PHP guy (so I'm not sure about this) but it looks like PHP-FIG suggests it too... http://www.php-fig.org/psr/psr-2/
So are we really saying software developers who follow style guides earn more? That doesn't surprise me. Adhering to guidelines is a good way to work well on teams and thus become a more valuable team member.
If the 95% confidence interval for tabs and the 95% confidence interval for spaces overlap, there is a possibility of failing to reject the null hypothesis that the difference between the two is zero at the alpha = 0.05 level. Since there is little overlap in most cases, the original finding holds up.
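A minimal sketch (with made-up numbers, not the survey's) of why overlapping 95% intervals don't automatically mean the difference is insignificant: the standard error of the difference is smaller than the sum of the two individual margins.

```python
import math

# Hypothetical group means and standard errors (illustrative only).
mean_tabs, se_tabs = 94.0, 2.0
mean_spaces, se_spaces = 100.0, 2.0

z = 1.96  # two-sided 95% critical value

# Individual 95% confidence intervals -- these overlap.
ci_tabs = (mean_tabs - z * se_tabs, mean_tabs + z * se_tabs)      # (90.08, 97.92)
ci_spaces = (mean_spaces - z * se_spaces, mean_spaces + z * se_spaces)  # (96.08, 103.92)

# Yet a test on the *difference* is significant at alpha = 0.05,
# because se_diff = sqrt(se1^2 + se2^2) < se1 + se2.
se_diff = math.sqrt(se_tabs**2 + se_spaces**2)
z_stat = (mean_spaces - mean_tabs) / se_diff
print(round(z_stat, 2))  # 2.12, which exceeds 1.96
```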
So the model actually predicts causality, instead of correlation? That's amazing. I'll start using spaces instead of tabs today and ask for an 8.6% raise.
According to this model, I should get it!
Who do I contact for my check?
With this, I instantly conform to how the file is formatted. Is it 3 space indentation, made with a mixture of 8-tabs and spaces? Autotab will figure it out, spit out the Vim params, and you're modifying away without causing spurious diffs in version control.
You have to learn to use Ctrl-T for indent and Ctrl-D for deindent in Vim; those obey the shiftwidth and generate indentation according to the shiftwidth, tabsize and expandtab setting.
I don't really care that much, Go says tabs so whatever. But spaces have the benefit (or drawback to some) of rendering exactly the same way for everyone (assuming fixed width fonts, does anyone code with proportional fonts?). Also I've always got two thumbs on the space bar. And it's much bigger than the tab key.
So settled then?
Or do some people actually use the space bar to indent code? (which is obviously insane)
I never thought that such styling would really matter much ... I wonder how much a developer using a beautifier earns in average..
Anyhow, statistics sometimes brings up some weird conclusions.
I use vim with vim-sleuth now. If anybody knows how I can achieve what I described above in vim, please tell me how.
Am I missing something here? This sounds really dumb, as tabs make the most sense, and they appear to use less memory as well.
Then somebody asked me why. The answer was that Sun Solaris was a crappy operating system which would fail to boot if you used tabs in files like /etc/vfstab.
For some odd reason, I carried around a weird bias about tabs rather than regarding Sun as being shitty.
I worked somewhere where a bunch of folks preferred four spaces (because they came from Python), others two (because they came from JS). Use tabs, set your preferred tab size, boom, everyone gets along.
That someone with this many different types of experience can have accidentally avoided encountering a whole class of people and their code really puts into perspective how small any one person's experience is.
Do Visual Studio and SSMS support the space equivalent of "select X rows and tab them all at once"? I just tried, and all the code is wiped out, replaced by a single space.
I think it's the best of both worlds, really.
It lets you check in an .editorconfig file that specifies whether your project uses spaces or tabs. And a bunch of editors and IDEs already have built-in support for it!
Doesn't solve the holy wars, but it can sure help reduce the friction.
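A minimal hypothetical example of such a file (section patterns and property names follow the EditorConfig format; the particular mixed tab/space policy is made up for illustration):

```ini
# top-most EditorConfig file for the project
root = true

# default: tabs, rendered 4 columns wide
[*]
indent_style = tab
indent_size = 4

# Python follows PEP 8: 4 spaces
[*.py]
indent_style = space
indent_size = 4
```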
...
  "format": "prettier-eslint --write --trailing-comma es5 --single-quote true \"_src/**/*.js\"",
  "lint": "eslint \"_src/**/*.js\"",
  "precommit": "npm run format && npm run lint"
...
"There were 28,657 survey respondents who provided an answer to tabs versus spaces and who considered themselves a professional developer (as opposed to a student or former programmer). Within this group, 40.7% use tabs and 41.8% use spaces"
Without filtering to the 'professional developers', meaning overall, there are more tab users (32% vs. 28%).
(Am I wrong? I would hope a Data Scientist would know a basic thing like this, but I don't know R so I can't tell for sure from their code.)
...but at least with tabs everyone can adjust the gap size to their liking. :>
It's because they get more keystrokes in!
Depending on your tab/indent settings you might get as much as 4x or even 8x the XP by using spaces.
Use whatever's right for you! And, if you come to a workplace where there are "rules" about that, try to obey them.
Never take part in any of those wars of Tabs vs Spaces, VIM vs Emacs vs Sublime vs whatever.
Spend time on writing more tests instead!
Boring corporations like boring spaces, and have to pay big, boring salaries to get any talent at all.
On the other hand, cool code slingers may or may not prefer tabs out of personal idiosyncracies, but as long as all of them get shortchanged by the VCs and/or startup founders...
Is it possible that tab-aversive people making hiring decisions act on their aversions (consciously or unconsciously), while tab-friendly hiring managers do not?
Now that's a fact for your next job interview!
I mean, how are we to achieve world peace when we still have people using, and being rewarded for, wasting precious bytes?
Tabs are more often used in languages like C and C++ which are more traditional and pay less despite being more technical.
Indeed, it does mean that.
The internets giveth, and the internets taketh away.
Many of those who answered tabs are actually using an IDE which inserts spaces when they press tab. They believe they are using tabs because they've never realized this is going on. People under that misapprehension are likely to be less skilled.
Additionally, if a coder is, in fact, deliberately choosing to use tabs, they are going against the majority opinion of coders and almost all style guides. That attitude might be correlated with lower income.
Spaces are for people still using 80 column monitors.
I don't think the fact that you use spaces automatically makes you a richer programmer.
You tabbers are costing me money!
Use and respect .editorconfig files in your projects.
Let's move on to ASCII vs Unicode ....
Are you guys all programming in vi or notepad or something?
Eventually leadership got annoyed at the amount of time developers were wasting punting code reviews back and forth over this silly nonsense, let alone the loud altercations around the office. Who ever could have guessed that developers would be such an opinionated bunch?
So they mandated spaces, and all was peaceful in the office.
For about a day.
Naively, they put something along the lines of "spaces are to be used for indentation" in the code style document, but failed to specify how many spaces.
So the new arguments started up amongst the office. 3 spaces or 4? Whoever could have guessed that a number of developers were actually belligerent types who would go out of their way to find something to argue about, and also stubborn? Such a rare trait in developers.
So the arguments raged again, and eventually management decided they'd had enough. After all the fuss and grumbling over making an arbitrary decision on the tabs vs spaces debate, they decided this time to be democratic.
They scheduled a big all-hands meeting for the developers, and tolerating no interruptions, outlined that a binding vote was going to be taken. The code style document would be updated to reflect the democratic consensus, and also warning that future arguments on any other points would result in verbal warnings, and potentially dismissal.
With the software development managers standing at the front each to independently do the count, they asked all developers in favour of 3 spaces to raise their right hand, and all developers in favour of 4 spaces to raise their left.
The count started, but soon the managers realised that with all the raised hands, they couldn't see the fours for the threes.
Correlation does not imply causation
Just a guess.
Meanwhile, a group of Finnish researchers is organizing a review boycott against Elsevier, one of the reasons being Elsevier's unyielding opposition to the Finnish libraries' OA requests.
In the past, the cost of papers was paid on the demand side and borne communally; now the cost is paid on the supply side. Science still values paper counts and citation counts - but it seems to me that folks who can afford publication now have an unhealthy advantage that they didn't use to have!
Maybe if America had open access, things would have turned out a lot better for Aaron Swartz :(
Just kidding of course, this is great news. The EU should still be the main beneficiary of open access science following this policy.
May 27, 2016
Note, that here the "product" I'm referring to is the final formatted article. If governments want to mandate that universities release internal versions of their published works that seems fine, but that work should be for the universities or governments to undertake. They should not be allowed to release Nature's formatted/published version. This is how Pubmed Central works currently in the US (unformatted manuscripts are released, not the journals' version). When Nature releases an article, they put a lot of work into formatting it for publication so it looks nice. That final product does and should belong to them.
It's fine if people think that publicly-funded research should be freely available. But the fact remains that scientists have been voluntarily publishing their work in private for-profit journals for 100+ years. You can't just "undo" that. And they're still doing it today. If scientists truly felt strongly about these issues they'd only publish in OA journals, but most of them don't care (source: I'm a scientist).
This seems to be a shot at WhatsApp and Signal, implying that they have loopholes that allow the FBI to snoop in. I'm not sure how true that is. This might be an attempt to deflect from the fact that Telegram uses a home-grown encryption protocol which might be insecure, while WhatsApp uses Open Whisper Systems' Signal Protocol.
In 2003-2006, we built a financial system to exchange financial data through various means, including AS2 EDI over HTTP, with big companies and government suppliers such as AAFES (the Army and Air Force Exchange). Initially we had RSA, PGP, and a custom encryption scheme in there, the latter two for features besides EDI. We got a letter from the FBI asking us to switch to RSA only; they wanted to know about our use of PGP, and wanted to see our custom encryption if we continued to use it. Being a small/medium company, we switched to just RSA to avoid any issues. It was an odd day: when I came into the office, they told me I had an FBI letter on my desk, and you can imagine what happens around an office when something like that happens. Very strange day indeed.
Moral of the story, if you create your own crypto or aren't using the ones you are supposed to use, in any capacity, expect some knocking.
One also has to wonder if the FBI consider the Telegram team to be essentially undeclared Russian agents, and hence fair game.
A journalist like Poitras is on all sorts of lists and incessantly harassed. There are secret courts, secret laws and secret processes at play. And beyond this the power of harassment, intimidation, blackmail and bribery. Individuals and even organizations cannot prevail against the array of capabilities.
It's nice to think of democratic theory and rights, but these exist only as talking points until exercised. The moment you start exercising them, you end up on all sorts of lists, marked for harassment, and basically have a target on your back. Dissent is squashed even before it can form.
But my first reaction was "Cool, our government really cares, is creative and has the necessary power to get things done."
For those of you who've worked with government, you've seen how insanely difficult the procurement process is - as specific as needing competitive bids for toilet paper purchases, etc. So the fact that they could get potentially large amounts of bribe money means (a) this goes to high levels in the organization, and (b) they've probably done this before.
I wonder how much they offered?
And I wonder how many other pieces of software have backdoors. I would think the first things they would try to get access to are (a) certificate issuers and (b) VPN software.
Do we know that GoDaddy, Let's Encrypt, OpenVPN, Cisco VPN, Juniper, etc. don't have backdoors?
Sure, you can lock up all communication for privacy reasons, and the government can spend all kinds of resources trying to control, prevent, or circumvent encryption - however it's a waste of resources, as it's simply a band-aid.
If I wanted to do something violent or evil, I could simply have regular meetings and use paper communication - the old spy-style stuff. Of course, those networks can be infiltrated by governments with the resources, and they can maintain that presence by allowing certain acts within networks to occur versus deciding which ones to stop; it's how the war against Hitler was won once their encryption was broken - watch the very well-done The Imitation Game (http://www.imdb.com/title/tt2084970/) for a reference.
The only real solution is dealing with the root causes. I heard an analyst on TV (a rare occasion for me) mention, after Trump's Saudi visit and speech, that he didn't say the Saudis should look into the root causes of why terrorist activity is growing in their countries. Of course a lot of it is historical karma and rage from violent acts against their families, but a lot is because people's basic needs aren't being met, which prevents the higher levels of Maslow's Hierarchy of Needs from being reached and maintained.
There's a solution, and it requires building real community, locally, where you are now, and striving for people to become healthy so they don't develop bias and other coping mechanisms that prevent empathy, understanding, and therefore compassion. Preventing responsible ownership of weapons isn't useful either; not developing and supplying weapons en masse would be beneficial, though most recent attacks have been carried out with vehicles or knives.
Universal Basic Income will also bring us closer to a truly free labor market, and it can evolve from there, giving people the time to do what they feel is most important in the moment, without being forced to work in a shitty environment with shitty managers or co-workers; the health improvement and increased productivity alone are worth it.
and, "before going to monterey and while exploring the beauty of san francisco i was contacted once by a us navy intelligence officer who seemingly unintentionally appeared next to me at the bar"
And if such PR herding worked, wouldn't the surveillants be prepared to pay for such efforts to make their job easier?
So, what seems readily apparent is: Telegram takes state money, to offer an insecure option, while dissimulating to the world that it's: a) secure and b) turning down state money all the time.
I know why this perspective isn't discussed in MSM. But I don't get why it's not discussed more here; it seems obvious to me. Personally, I think that's a good thing: catch more criminals / terrorists.
I mean, simply use a public/private encryption algorithm that has proven to be highly secure:
- Share your public key openly
- Anyone can send a message to you using your public key to encrypt the message
- You decrypt with your private key on device
Do all the encryption/decryption on device and voilà, secure messaging. (This is basically how HTTPS works.)
Of course this only allows a single device the ability to decrypt the message.
However, if you want to allow multiple devices to share a private key, they can simply send each other their own private keys using the same encrypted protocol.
In addition, for super-paranoid use, a master password could be used to encrypt the private key, so the password would be required along with the key material to decrypt. (Which is similar to how password managers basically work.)
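For concreteness, the flow above can be sketched with a toy RSA keypair. This is a deliberately insecure textbook example (tiny primes, no padding), purely to illustrate the encrypt-with-public-key / decrypt-with-private-key steps; real systems use vetted crypto libraries:

```python
# Toy RSA, only to illustrate the public/private key flow described above.
# NOT secure: tiny primes, no padding. Real systems use vetted libraries.
p, q = 61, 53
n = p * q                # modulus, part of both keys
phi = (p - 1) * (q - 1)  # Euler's totient of n
e = 17                   # public exponent
d = pow(e, -1, phi)      # private exponent (modular inverse; Python 3.8+)

public_key = (e, n)      # share this openly
private_key = (d, n)     # keep this on device

def encrypt(message: int, pub) -> int:
    exp, mod = pub
    return pow(message, exp, mod)

def decrypt(ciphertext: int, priv) -> int:
    exp, mod = priv
    return pow(ciphertext, exp, mod)

# Anyone can encrypt with the public key; only the private key decrypts.
assert decrypt(encrypt(42, public_key), private_key) == 42
```

In practice, messengers (like HTTPS/TLS) use this asymmetric step mostly to exchange symmetric session keys, which then encrypt the actual messages.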
What am I missing?
Option 2: Could be true because seriously, who trusts the FBI/NSA not to violate our privacy anymore?
Really not sure what to believe about this one.
I used to wonder whether some success of social media companies couldn't be explained by secret payments for backdoor access. You could be operating out of Europe or Africa and still get offered money, and other pressure carefully applied.
You might think you'd hold true to your plan of privacy-for-all, but if they offer $x00m or more?
Especially considering that competitors like Signal are US-based. Signal grew out of Whisper Systems, which was acquired by Twitter, by no means a small player, so it isn't likely to fly under anyone's radar.
I think we can all agree that if some totally below-the-radar crypto anarchist who happens to have a few million dollars from bitcoins figured out that they actually have enough access via the dark web to bribe a few Russian generals and long story short detonate a nuclear bomb a few miles outside New York City, just for shits and giggles, then they should be stopped at some point along the way. This will seem like a made-up example to you but I purposefully don't want to confuse the issue with practical examples. We can all agree that at some point this should be stopped.
A reasonable time to stop it might be if intelligence agencies get a literal screenshot from a darkweb chatroom (from a concerned participant, where the participant thinks they're really going too far) where this is being planned in exacting detail but more information is needed to be precise. (For example, suppose the source of the nuclear bomb were not Russia but not enough information was given to identify it. There are actually quite a few nuclear states and many of them are quite corrupt. A short list includes India, North Korea, Pakistan.)
I would think that this kind of actionable urgent intelligence should unlock whatever privacy safeguards are in place, but the issue is that if there is a correct "technical" solution (if cryptography works 'correctly' and is not broken, in an academic sense), then there is no technical possibility to unlock anything. If Tor, crypto currencies, and encryption "work" (in a binary, yes it works, or no, it's broken sense) then following the receipt of such a screenshot there is no technical means of any further step.
Here I'm going to be philosophical for a second. The future of technology is nearly infinite human power. You can already in the next few seconds initiate a crypto currency transfer to anyone anywhere in the world, who can receive it without any banking infrastructure or oversight.
The arc of technology has been personal human enablement. When individuals become nearly God-like and all-powerful, it is dangerous to be in a position where, like the Muslims reporting the madman banned from his U.K. mosque for radical insanity, the status quo is that if you report your friend to the authorities saying, "My online friend, God-like in his powers, is planning to murder a million people just for shits and giggles, and he's kind of insane. Unfortunately, I don't know where he is or what he's doing, but I'm pretty concerned. He has a lot of money from a few ponzi schemes he ran. It's pretty credible for the following specific reasons (screenshots, quotes, etc)." And the only response from the authorities is, "Thanks for all this. We don't know where he is either, in the grand scheme of things a million deaths isn't that much and if it happens we will look at preventing another such case."
That's a pretty silly response, isn't it? That the only possible response is, sorry, nothing can be done.
Okay, now I've laid out why there should probably be some infrastructure on the back-end.
What I don't like is that this translates to humans literally reading people's private correspondence, web searches, etc. It's not very good.
What is a good middle ground?
Can't the NSA make things that run locally, so that no human is reading your correspondence or web traffic, but as you start researching nuclear weapons, making plans for how to murder a million people, and making those transactions, it all starts adding up? Then, to quote the Constitution, its tools could receive instructions "particularly describing the place to be searched, and the persons or things to be seized," so that after such a report the perpetrator can be found, or at least enough information can be collected to stop the plot if it is actually taking place.
I think that all of us here could be okay with being stopped at some point between purchasing a hundred million dollars in anonymous currency, and detonating a nuclear bomb. It's sensible. That can be part of the social contract.
It's difficult. Nobody wants to live with a judge, jury, and executioner in their home looking at everything they are doing in case they break some law.
I am glad that I personally don't have to answer these questions. But we can all agree on the need for privacy (no human looks at what you're doing), and also on the reasonableness, as each individual online progresses toward infinite personal power, for protecting the rest of society from credible and immediate, specific threats.
I agree with cryptographers who think of cryptography as a tool that is either working or broken. (If it has a back door, it's 'broken').
Perhaps if tools included a certain portion that runs locally they could increase the extent to which the tools are not actually 'broken' (i.e. they are actually working, and actually not backdoored), while also increasing the safety every single person has from other individuals being able to plan or pay for their specific death anonymously, and with impunity.
I realize that my suggestions here are not specific enough to be actionable, they are not clear recommendations. But I don't even see these possibilities being discussed (at least publicly), so I wanted to at least move the conversation a bit in this direction.
I'm getting downvoted pretty heavily. Let me ask point-blank: are you okay with someone being able to spend two weeks on the dark web researching how to make and detonate a bomb using totally innocent chemical purchases, and then your spouse, parents, relatives, or you, being an innocent victim of their detonating the result? Or would you want that person to be stopped at some point after they started doing that? The future of information is that it is ubiquitous and easy to access. [I edited this paragraph from first to third person.]
Actually secure communications would mean that it is technically impossible to see if someone has started communicating with people at ISIS who have overseen and helped people explode themselves. I am not saying communication should be weak and insecure, but should I really practically be able to start doing that if I want?
This is not some kind of false example, either.
Also, for downvoters: I think it is easier for you to agree with the other half of my statement, that nobody should be looking at our web traffic and correspondence, and that it should be actually secure, and also actually private.
The abilities of a technician can be very valuable to a business, but especially as it begins to scale, the owner/operator(s) need to adopt different mindsets in order to succeed. In short, if you don't like the idea of spending most of your time on business or marketing stuff, you should find someone who can handle those things, or perhaps stay a solo consultant/contractor. (I think this is a large part of why YC encourages cofounders so much.)
Exceptions certainly exist--there was a time when tech was a magical world and you could do magic things just by being an expert engineer--but increasingly I feel they are getting rarer.
It started when I was eight: I got into electronics. I read all the magazines and taught myself to design electronic circuits from library books. I kept my own hand-written card-index system describing the specs of all the transistors and ICs I could get my hands on, and designed, built, and repaired devices for other people, who paid me for the materials and my trouble. I was 15. When I was 18 I went to university and switched majors multiple times.
And then the Dutch electronics magazine published the Junior computer, based on the 6502. I spent all my money on it, and learned assembler by inputting hex numbers. After that came the MSX computer (I disassembled the BASIC interpreter to grok how it worked) and I started searching for programmers jobs.
Found a job at KLM, where I came out top of the class and entered a special unit called SMART, for special internal projects. All of us programmed in IBM S/370 assembler; they tested C, but it was too slow.
I was 25 by then. The following years in IT went downhill. I changed jobs multiple times, but the companies kept going out of business. I was flabbergasted at the amount of incompetence I saw in salespeople and at C-level. Coming from a blue-collar background, I had no idea.
Side note: in 4 years I had 9 CEOs, 8 of whom left their wives for their secretaries in the time I worked there! I had a lot of respect for the 9th, until I found out a couple of years later that he'd done the same after I left.
So I decided: why not start my own business? I was capable of going bust as well, and couldn't do worse than those guys. So I started the first commercial ISP in the NL. One thing led to another, and many companies later I've now pulled out of most and am again starting as a founder, learning all the hot new technologies.
If you aren't in a country with proper healthcare and are not earning AT LEAST $300k USD a year (consistently), understand that all your years of work can be destroyed by one diagnosis. And plan accordingly.
I love my life. I've had a charmed existence moving to wonderful locales and doing what I wanted when I wanted; but the genetic lottery cannot be outwitted. You can be healthy one day and in debt the next.
Plan accordingly. Don't let youth and good health lull you into complacency.
It's completely possible and attainable for software developers to be independent anywhere on the globe, but understand the potential financial implications and limitations of the social safety nets of your country of citizenship/residence. Plan accordingly.
1) You can keep your job. Just always build things on the side. It keeps you coding for play and not just for work.
2) Try to sell the things you build. You don't need to be fully polished on day one. In fact, you shouldn't be and can't be because you need real customers to really understand their needs.
3) You can find the numbers online, but my SaaS project Cronitor had just $500 MRR after seven months. You need patience, and to adjust your effort to match what the business gives back. By letting it coast a bit while it picked up momentum, we prevented burnout. When it started to grow faster, we could pour attention in and level up the product.
4) grow it while you work your day job. This is easy at first and grows harder. Having a partner is important here. Alternative: a business where a little downtime is not a big deal.
5) when it gets stressful, know your commitments. Your day job gets first bite and when you can't do that anymore you know it's time to move on and do it full time.
Most importantly, the tl;dr is: quit your job after you've replaced most of your salary. And until then, enjoy the incremental income.
You need a good chunk of capital to survive the first few years, before you even think about expanding. (Which, when it's time, you must do, or risk shrinking into oblivion. And not everybody is able and willing to expand forever. I wasn't.)
Just sharing my own little caveat; otherwise I think starting your own firm/consultancy is fantastic.
The biggest thing I have learned so far is that it's really not about "hard work". IMO, it's much more about how well you balance work and life. Are you on an unsustainable path or a sustainable one? Are you enjoying what you are doing? This is critical. Are you genuinely eager to work on it, and can you sustain that after one year? If so, you're highly likely to succeed, in my opinion. If you believe in it and love what you're working on, it's very likely there are other people out there who do too.
"Ok, I'm going to send you on this course; it's a 20,000 guilder expense on my side. If you let me down, you're out and I never ever want to see you again. If you pass the examination at the end, you've got a job as a junior programmer."
Stuff like this can make a world of difference to the trajectory a young person's life takes. Much respect to people who are open and inclusive like this.
Had to fold the business. Nearly lost my house and marriage. This was in the late 1990s.
For every story, there's an equal and opposite one.
Statistically, you can't do it, and you'll waste a lot of time, money and prime earning years figuring that out. That's not to say you shouldn't try, but only to say you should be realistic about what you are likely sacrificing for that small chance of success.
> If a high school drop-out with nothing but a typing diploma could do it, so can you. Now go do it.
This is not representative. From your writing, you seem to be "smart" too, not just hard-working. =D
I'm certain companies could benefit immensely by contracting with someone to help improve their Ops game, but I'm at that awkward stage where I'm not 100% sure anyone will request my services. Wish me luck!
"I don't know if we each have a destiny... or if we're all just floating around accidental-like on a breeze... But I think maybe it's both." Do we have a defined destiny, or can we influence it with our actions? The answer is not black and white.
Life is like that, it is part luck and part hard work/smart work. The context, the skill set etc is entirely different for every single individual. So you can't really learn much from patterns.
On an ending note: "Do you ever dream, Forrest, about who you're gonna be?" "Who I'm gonna be?" "Yeah." "Aren't I going to be me?"
I mean, my teammates all quit as soon as they realized how much hard work a startup requires... Life lesson!
And, in actuality, I haven't made much progress toward an actual company. But, I mean, this has been pretty great in some ways. The best part is that my mentor was an expert in my field (automation, robots, etc.) and has had a lot of fantastic input for me.
My key takeaway from the competition? Don't try to build a startup on your own. You really MUST have a strong team backing you up.
I'm probably going to crumple up this owl and start another one soon. Most importantly if you want to get good at drawing owls, you have to love drawing.
In general, of course. In some specific cases, maybe not.
For you to have a job by being hired by an employer, someone else has to create that job. In the private sector, usually someone has to start and own a company, make it successful, and generate enough free cash to pay you.
So, if you are looking for a job at all, you are essentially admitting that it's possible, reasonable, common, doable, etc. to start and own a company and make it successful.
So, why not you? That is, if you want job, especially a good job, then consider creating that job for yourself.
American Institute of Mathematics Open Textbook Initiative -- note that they review the texts too and are a bit picky about what they list: https://aimath.org/textbooks/
More than just math: University of Minnesota open textbook initiative. Stats, CS, and humanities as well: https://open.umn.edu/opentextbooks/
Not a repository, but an individual free/open math text under development -- comments and feedback desired: https://www.softcover.io/read/bf34ea25/math_for_finance It starts with elementary probability and then combines probability and stats with linear algebra, multivariable calculus, and differential equations. Aimed at folks who have seen the math before but need a refresher and a viewpoint that unifies seemingly disparate topics. Note that it uses Softcover, a great way to publish technical texts to several formats at once.
Is there anyone who has done something similar who might share some suggestions for success?
Calculus Revisited: Single Variable Calculus | MIT https://ocw.mit.edu/resources/res-18-006-calculus-revisited-...
Calculus Revisited: Multivariable Calculus | MIT https://ocw.mit.edu/resources/res-18-007-calculus-revisited-...
Complex Variables, Differential Equations, and Linear Algebra | MIT https://ocw.mit.edu/resources/res-18-008-calculus-revisited-...
Linear Algebra | MIT - https://www.youtube.com/watch?v=ZK3O402wf1c&list=PLE7DDD9101...
Introduction to Linear Dynamical Systems |Stanford https://see.stanford.edu/Course/EE263
Probability | Harvard https://www.youtube.com/playlist?list=PL2SOU6wwxB0uwwH80KTQ6...
Intermediate Statistics | CMU https://www.youtube.com/playlist?list=PLcW8xNfZoh7eI7KSWneVW...
Convex Optimization I | Stanford https://see.stanford.edu/Course/EE364A
Math Background for ML | CMU https://www.youtube.com/playlist?list=PL7y-1rk2cCsA339crwXMW...
I'll never forget how the math professors would switch from edition x to edition x+1 with the only clearly visible difference being the homework assignment questions.
I truly hope that this is not just a trove of books, but also a signaling of the change in culture from opportunism at the expense of the students to openness.
That alternative is the books published or republished by Dover publications. They like to take older textbooks and purchase rights to republish them as relatively inexpensive paperback editions. A very large fraction of their books are under $20, with many under $12. A few are more expensive, but only rarely more than $30.
The level ranges from suitable for high school students to graduate level and beyond.
Here's their mathematics section: http://store.doverpublications.com/by-subject-mathematics.ht...
Don't overlook the "general" subcategory. They have some wonderful problem books there, such as Yaglom and Yaglom's "Challenging Mathematical Problems With Elementary Solutions" series.
They also do this for physics, chemistry, engineering, history, economics, computer science, biology, earth science and more.
This list's a couple years old, for machine learning, including basic lin.alg, prob/stats: https://www.reddit.com/r/MachineLearning/comments/1jeawf/mac...
- The Deep Learning book by Goodfellow et al., http://www.deeplearningbook.org/ (the one by Michael Nielsen is good as well)
- An excellent foundations text: http://www.cs.huji.ac.il/~shais/UnderstandingMachineLearning... (Shalev-Shwartz, Ben-David)
- https://www.cs.cornell.edu/jeh/bookMay2015.pdf, Blum, Hopcroft, Kannan, probably an older version
Applied Combinatorics, by Professors William T. Trotter and Mitchel T. Keller [3,4]
I can't comment on the deeper parts of the book, because I don't get them yet (I don't really have the time at the moment to slog through a 900-page book, as much as I'd love to).
"Hell, if I could explain it to the average person, it wouldn't have been worth the Nobel prize." 
Showing a limitation of the maxim or Feynman's hubris?
If you rehash it in smaller words, just by information density alone, aren't you guaranteed to be losing some detail?
This is the sort of thing you'd believe if you were an arrogant 20-something who thought they could learn any subject in a few hours, cushioned thoroughly by the illusion of understanding.
"Oh yeah, I understand the mechanisms of human vision. It's just rods and cones."
"I understand the causes of the American revolution. It was just people protecting their property."
"I understand Joyce's Ulysses. It just follows three people from Dublin over a single day. I read the CliffsNotes."
"I understand why coffee makes me alert. It's just blocking some brain things that make you sleepy."
Now, I will agree that if you don't know how to break interactions down into teachable parts, you will probably have trouble as an engineer or scientist both advancing your own knowledge and introducing people to the field. But to suggest that your understanding of a subject hinges on being able to deliver an explanation in simple terms is just silly.
> I really can't do a good job, any job, of explaining magnetic force in terms of something else you're more familiar with, because I don't understand it in terms of anything else you're more familiar with.
The article implies this is a case of the scientist expressing that he didn't understand a thing. But watching the video in full, one realizes he is saying something different:
"It's a force which is present all the time and very common and is a basic force.
I can't explain that attraction in terms of anything else that's familiar to you. For example if we say that magnets attract like as if they are connected by rubber bands I would be cheating you because they're not connected by rubber bands-- I should be in trouble if you soon ask me about the nature of the band. And secondly, if you were curious enough you would ask me why rubber bands tend to pull back together again, and I would end up explaining that in terms of electrical forces which are the very things that I'm trying to use the rubber bands to explain. So I have cheated very badly, you see."
In other words, for some phenomena the only simple examples are themselves instances of that same phenomenon. So the only possible analogies are merely tautologies.
I've noticed something less sweeping though similarly absurd with the internet. As more and more of people's daily lives depend on internet technologies, it becomes more difficult to find modern, simple examples for analogies that don't rely on similar internet technologies. So someone who wants to explain the wonders of packet switching compares it to long-distance telephone calls, but they then spend the bulk of that time explaining long-distance phone calls to people who have never used a wired phone.
This actually came up for me at the office. I was asking a bunch of questions about the Z transform and the Fast Fourier Transform. The person I was talking to said, "Hey, just call the function in MATLAB, it doesn't matter how it works, just that you understand what it is saying."
All of my life I have rebelled at this notion. My earliest recollection of running into it was when I was in grade school and took apart three wind up alarm clocks, each more carefully than the one previously. My Mom was curious what I was looking for and I told her, "How does a clock know how long one second is?" She didn't know, and I didn't know, and while I had mastered using a clock and accepting that it would go off when I set it to go off, I didn't really "know" how a clock worked until I had taken apart and identified, (and modified to validate the identification :), the escapement.
The question, of course, is how well this experiment has succeeded. My own point of view which, however, does not seem to be shared by most of the people who worked with the students is pessimistic. I don't think I did very well by the students. When I look at the way the majority of the students handled the problems on the examinations, I think the system is a failure. Of course, my friends point out to me that there were one or two dozen students who very surprisingly understood almost everything in all of the lectures, and who were quite active in working with the material and worrying about the many points in an excited and interested way. These people have now, I believe, a first rate background in physics and they are, after all, the ones I was trying to get at. But then, "The power of instruction is seldom of much efficacy except in those happy dispositions where it is almost superfluous." (Gibbon)
Richard P. Feynman, 1963
Note that by his own account, most of his students did not do well. James Gleick's biography of Feynman, Genius, has a longer discussion of the disappointing results of his lectures to undergraduates at Caltech, many of whom reportedly stopped attending the lectures as they were not getting anything useful out of them.
That Feynman in fact had difficulty explaining freshman physics to the highly qualified students at Caltech surely does not indicate he did not understand freshman physics.
Some topics are simply very complex. It is not clear that they can always be conveyed in simple terms. In some cases, a "big picture" explanation may be possible but the details remain complicated. In some cases, a hand-waving analogy to some everyday phenomenon may create the illusion of understanding but be misleading or wrong.
To give a specific modern example, a state of the art video codec such as H.264 is extremely complex, built of many complicated components and sub-algorithms. While it may be possible to explain the big picture in relatively simple terms, the detailed implementation and operation is not simple. The inability of someone who creates or implements a video codec to explain it in simple terms to a layman is not an indication that they do not understand it.
Some people look at advanced mathematics or physics and wonder why it has to be so complicated and so full of jargon. It's complicated because it is. The jargon, believe it or not, is mostly an attempt to make it easier to communicate. It would be very, very difficult to wade through these ideas without introducing new words with precise definitions.
Then again, John von Neumann said, "In mathematics you don't understand things. You just get used to them." So maybe the title is true for trivial reasons after all.
I'm not saying I'll take it as literally true in every situation. But what I love about the quote is that it sets the bar for "understanding" very high.
People sell themselves short on understanding - they reach a certain level and are satisfied that they understand something, when there is actually much deeper understanding to be had. For example, being able to write a proof of a theorem can be very far from understanding why it's true, but even mathematicians sometimes pretend it's the same.
So I like that this quote challenges us to understand things more deeply. And more often than not, I find it rings true.
(A basic example coming to mind is the determinant of a matrix. Can be explained in simple terms to children (at least the key idea), or in confusing terms to freshman linear algebra students....)
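The "key idea" version of the determinant mentioned above can even be stated mechanically: for a 2x2 matrix, it is the signed factor by which the matrix scales areas. A minimal sketch (function name `det2` is mine, just for illustration):

```python
# The determinant of the 2x2 matrix [[a, b], [c, d]] is a*d - b*c.
# The "simple terms" explanation: it is the signed factor by which
# the matrix scales areas, e.g. the area of the unit square.
def det2(a, b, c, d):
    return a * d - b * c

# A diagonal matrix stretching x by 2 and y by 3 scales areas by 6:
assert det2(2, 0, 0, 3) == 6

# Swapping the rows flips orientation, so the sign flips:
assert det2(0, 3, 2, 0) == -6
```

The freshman-course version, by contrast, usually starts from cofactor expansion formulas, which obscures that geometric picture.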
It is also dangerous to assume this, because that is exactly how we reached the "my uninformed opinion is as valid as your years of experience" aspect of the current political climate. NO, things are NOT as simple as you think they are just because you saw it in the space of a tweet!
On the other hand, it is important to recognize expertise over bullshit. The easiest defense is having several experts, since at a certain point they would need to do an awful lot of collusion to just make things up between them (i.e. if enough of them agree then what they say is apparently correct).
And there are three kinds of explanation: verbal, visual, and mathematical.
So sometimes, you understand something visually, or mathematically, but you are forced to put it into verbal terms (say, over a text only channel, or voice), and then you may seem not to be able to explain it even though you understand it.
We've started doing Explorable Explanations / Animated Explainers, here are some we've done and some that others have done:
- Explaining how Git works: http://gun.js.org/explainers/school/class.html
- How neurons work: http://ncase.me/neurons/
- How end-to-end cryptography works: http://gun.js.org/explainers/data/security.html
- How gerrymandering works: http://polytrope.com/district/ (by a friend of mine!)
- How sorting on partial data / data streams works: http://gun.js.org/explainers/basketball/basketball.html
And more! It is possible, it can be done. But it is hard. That is no excuse for not trying though. Big shout out to Bret Victor's work for starting a lot of this, and thanks to Feynman for encouraging and practicing what he teaches.
I think the effort involved in trying to come up with a serialization causes us to more carefully examine our models, which usually improves them.
But I don't think the lack of a good serialization implies the lack of a good model.
Explaining something in simple terms does not mean you _fully_ explain it. You explain the essence (or what you see as the essence) of the thing. Google search is: you type a question into a box and Google shows you the best answer. Google search is a lot more than that, of course, but if you can't "boil it down" you don't understand it.
This is the top line of a git commit vs. the comments you leave in the source code. You can spend months working on thousands of lines of code, but if you can't describe it in a single sentence (while leaving a lot out!), it's a bad sign.
It is amazing how rarely people can get it across to me in basic terms. In fact even the idea of breaking it down into non technical concepts seems to be surprising and alien to many people.
I really admire those who can.
The Feynman Lectures are now on Youtube, and I like to watch them (all of them) every few years. I highly recommend that if you've never seen them, you take some time and watch them- really watch them. Close the other windows, turn your phone to do not disturb, and really watch these masterpieces of education.
And in my experience, the harder the subject, the more informally experts speak. Partly, I think, because they have less to prove, and partly because the harder the ideas you're talking about, the less you can afford to let language get in the way.
Informal language is the athletic clothing of ideas.
It doesn't work for biology, which is complicated at the bottom. Evolution doesn't have the parsimony of physics. Nor does it have to be understandable by humans.
Whether it works for software is a design issue. It's certainly possible to create software which cannot be explained simply.
And an underappreciated corollary is...
If you want something explained well in simple terms, you have to find someone who understands it deeply.
In the sciences, that means someone who has it as their research focus. Because as you move away from that focus, understanding rapidly becomes ramshackle. Leave someone's subfield, and you might as well be talking with a random graduate student (in that field). And that's hopeless.
Thus many research talks have videos and stories which would be nice to have in a K-12 classroom. And almost all K-12 education content is incoherent wretchedness.
An old essay of mine: "Scientific expertise is not broadly distributed - an underappreciated obstacle to creating better content" http://www.clarifyscience.info/part/MHjx6 In which a 5-year old with finger paints wants to paint the Sun, but encounters astronomy graduate students.
"I am sorry for the length of my letter, but I had not the time to write a short one." - Blaise Pascal 1657
There's a sad little genre of low-quality science education research that goes: "I tried to teach topic T to students of age A. I taught it <really really badly>. Surprisingly, that didn't work! I've reached the obvious conclusion: students of age A are developmentally unready to learn topic T."
But understanding, while necessary, is not sufficient. At PhD poster session practice, it's often remarkably hard to help candidates develop an "elevator pitch". To clearly understand the core of what they've spent the last n years working on. I'm still amazed by how often one gets something like "wow, now I can explain it to my parents".
"But if you can ONLY explain something in simple terms, you still don't understand it"
Many think they understand something, when really they only know how to use it. For example, I understand how to use a computer, but that doesn't mean I understand how a processor works at the level of registers and assembly language. So if I were to try to teach someone to use a computer, then I could say things like "Click that, and this will happen," or "Type such and such, and then this other thing you want will happen." But if anyone asked me how that actually works, to follow all the way how a physical mouse-click gets transformed into a change in the window on the screen, then I couldn't. Or, even if I could, it might take me half an hour to explain it, depending on how much they want to know.
So maybe it's that we understand things, but at different levels. Few people understand something at its deepest level. In fact, physicists would say no one does.
For example, a lever seems conceptually simple, but to create a lever in the body is extraordinarily hard. The joints have to be solidly connected and free to open or close. The direction must be precise and rotation must not wobble. There are so many things that can go wrong and lots of places for force to leak out.
I think there are too many times when people affect a tone of authority and expertise and hide their lack of understanding in verbiage and complexity while making excuses for their inability to explain it to the layman.
Monkey eat => Monkey live.
Monkey live => Monkey eat.
Monkey not eat => Monkey not live.
Monkey not live => Monkey not eat.
Here "=>" is used as in "implies"/"because". The last statement is weird. There are more ways for monkey to "not live" than to "not eat".
Not being able to explain does not imply not being able to understand. Not understanding surely implies not being able to explain.
Correlation, Causation, get it ?
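The monkey example is about the difference between an implication, its converse, and its contrapositive. As a sketch, the truth tables can be checked mechanically in Python:

```python
def implies(p, q):
    """Material conditional: false only when p holds and q does not."""
    return (not p) or q

# All possible (eat, live) worlds.
worlds = [(e, l) for e in (False, True) for l in (False, True)]

# "Monkey not live => monkey not eat" is the contrapositive of
# "monkey eat => monkey live"; the two agree in every possible world:
assert all(implies(not l, not e) == implies(e, l) for e, l in worlds)

# But the converse "monkey live => monkey eat" is NOT equivalent:
# a world where the monkey lives without eating satisfies the original
# implication vacuously while violating the converse.
e, l = False, True
assert implies(e, l) and not implies(l, e)
```

Which is the point: "can't explain ⇒ don't understand" and its converse are different claims, and only one of them follows from "not understanding implies not being able to explain."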
If you can't explain it to a six year old, you don't understand it yourself.
Often, it takes a lot of awareness of the common mental models / mental blocks other people have when learning the concept you are trying to communicate. You have to structure things as a series of strategic progressions before tackling the most complicated form of something; all of that is more the art of teaching (which of course requires good understanding).
Of course, if someone can do that, it's a brilliant proof they do understand something.
If they can't do it, then it can leave you with doubt about what someone else understands. Which in Apple's case may be considered entirely unacceptable.
The main issue when explaining concepts (especially maths concepts) is switching from one formal context to another, deciding what details to omit, and determining what rules in both contexts should be treated as analogous.
Think of a translator: they need proficiency in two languages to do a proper translation. Lacking a second language precludes translation, but it doesn't affect mastery of your native tongue.
A popular question to qualify for engineering job interviews is "describe in simple terms what happens when a user accesses a website on the Internet." The question doesn't give any info on who the target audience is, so you never know what level of detail you're supposed to go into. Because this is an engineering question, I tend to go into more detail, but after a certain level you can't really keep it simple, because the reader has to understand what things like a cache are... else you will spend 20 pages just writing definitions.
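Since the interview question came up: the answer layers quite naturally as DNS, then TCP, then HTTP. A minimal illustrative sketch in Python; the host and path are placeholders, and the actual network calls are left commented out:

```python
def build_request(host: str, path: str = "/") -> bytes:
    """Assemble a minimal HTTP/1.1 GET request by hand."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).encode("ascii")

# The full story, one layer per line (requires network access to actually run):
#   import socket
#   ip = socket.gethostbyname("example.com")    # DNS: name -> IP address
#   s = socket.create_connection((ip, 80))      # TCP: reliable byte stream
#   s.sendall(build_request("example.com"))     # HTTP: plain text over the stream
#   print(s.recv(4096).decode())                # status line, headers, body
```

Each commented line is where a whole rabbit hole (caching, TLS, routing) hides, which is exactly where the 20 pages of definitions come from.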
Skeptic: I understand X. I've spent years working on it, and I'm recognized as an expert in the field. But I can't explain X in simple terms.
Believer: Well, then you obviously don't really understand it. Can you prove to me that you do?
Being able to explain things in simple terms is a skill in and of itself. Many people do not possess this particular skill, but that does not mean they are unable to understand any subject.
E.g. an IP is an address, like your home address, so the internet knows where to look. We use zip codes; the web uses numbers.
Then "I need to adjust a DNS record with our IP" becomes "I will point the website to our address."
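To push the analogy one step further: a DNS record really is just a name-to-address mapping, so "pointing the website to our address" is an update to that mapping. A toy sketch (the domain and IPs are made-up examples from the reserved documentation ranges):

```python
# A toy "zone file": the internet's address book for one name.
dns_records = {"www.example.com": "203.0.113.10"}

# "I need to adjust a DNS record with our IP" becomes:
dns_records["www.example.com"] = "198.51.100.7"

# Resolution is then just a lookup, like finding a street address:
assert dns_records["www.example.com"] == "198.51.100.7"
```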
If it's not obvious, then all my previous clients are lying ( just mentioning it, cause it's possible)
She'd gone to Caltech. That was on her resume. So I asked her if she'd ever taken a class from Feynman. That was actually unlikely, but she had sat in on a seminar with Feynman once. She said he could explain the most difficult material and that you would understand it. You would understand it walking away, and this would last about 15 minutes, after which you would confuse yourself again.
Sadly it's 2017 and the popularity of TED Talks makes laymen think otherwise.
Take for example legal concepts like securities law or environmental regulation. Yes, you can "simplify" an explanation of the Securities Act or the Paris Accord enough to fit them into a tweet, but you lose information necessary to formulating a full understanding.
If you're trying to have an informed debate about policy adoption, the details matter.
Opposite example: Simplify how walking works, and make sure to include the critical systems such as major muscle groups, stabilizers, vision, inner ear, thigh/knee/pelvis/hip construction, the curved spine and its connection to the head, and blood pressure flow/regulation.
The Buddha and the Upanishadic seers were exceptionally good with explaining complex phenomena in simple terms.
Apparent sophistication is a sign of confusion. Clarity is evidence of deep understanding.
Nature is vastly complex but not complicated (a few fundamental laws at work). Only simple things work.
In fact, "if you make this fallacy, you're a terrible human being" (which is sarcasm here since this very statement includes the exact same fallacy)
If you hold a complex idea in your head, translating that into English can be difficult, because part of the process is removing/altering information to fit into existing notions. That is why buzzwords are popular: they take an idea and put it into relatable concepts for the masses.
And to be fair, it's pretty rough sledding even when you understand the operators involved.
So, no, you cannot explain everything in simple terms. But you can find sweet spots when trading brevity for accuracy.
1. Be familiar with logical reasoning: what "implies" or even "for any x" means. 2. Know the relevant set of theorems and axioms used in the demonstration.
You could probably illustrate what a mathematical result implies in some real-life example, but you won't be explaining it.
Quantum physics is a really good example of this, because it's not that difficult to understand if you look at it from the mathematical PoV: it's basically linear algebra in infinite dimensions. You have vectors (in the space of functions on R) and linear maps on these vectors (with all the properties of such maps, like eigenvalues and eigenvectors), etc. But if you try to explain it in simple terms, you're going to distort reality to fit the macroscopic-scale human representation of the world, and you'll probably say things that won't be true.
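To make the "it's basically linear algebra" claim concrete, here is a finite-dimensional toy with NumPy. The two-level system and the Pauli-X observable are my illustrative choices (real QM lives in infinite-dimensional function spaces, as the comment says):

```python
import numpy as np

# An observable is a Hermitian matrix; its eigenvalues are the only
# possible measurement outcomes.
X = np.array([[0, 1],
              [1, 0]], dtype=complex)
eigenvalues, eigenvectors = np.linalg.eigh(X)   # eigh: for Hermitian matrices

# A state is a unit vector; the Born rule gives outcome probabilities
# as squared overlaps with the eigenvectors.
state = np.array([1, 0], dtype=complex)
probs = np.abs(eigenvectors.conj().T @ state) ** 2

assert np.allclose(eigenvalues, [-1.0, 1.0])    # the two possible outcomes
assert np.allclose(probs, [0.5, 0.5])           # each equally likely here
```

Most popular accounts are attempts to narrate these overlaps without the vectors, which is where the distortion creeps in.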
Whatever is well conceived is clearly said, / And the words to say it flow with ease.
Ce que l'on conçoit bien s'énonce clairement, / Et les mots pour le dire arrivent aisément. (Boileau)
I understood very early in life that if I cried I would be hit. I couldn't talk, write, or communicate my understanding in any way, but I understood clearly.
I've witnessed dozens of people try and spectacularly fail at teaching their own language.
Remember that "in simple terms" does not mean easy or over simplifying something. To me it means making a to-the-point and jargon-free explanation.
Person B: How?
Person A: You're a dummy! There's mountains of evidence!
Person B: Like...
Person A: You're killing the vibe brah.
In the words of the xkcd on the subject, (check the title text):
"Actually, I think if all higher math professors had to write for the Simple English Wikipedia for a year, we'd be in much better shape academically."https://xkcd.com/547/
If you're already running a trusted Debian system, then install the debian-keyring package. Packages are signed and verified, so those keys don't need further verification.
Otherwise, fetch the keys with gpg:
$ gpg --keyserver keyring.debian.org --recv-keys <...> # e.g. 0x6294BE9B
$ gpg --fingerprint
Finally download the checksum and their signature files, and verify their signatures:
$ gpg --verify <...> # e.g. SHA512SUMS.sign
$ gpg --no-default-keyring --keyring /usr/share/keyrings/debian-role-keys.gpg --verify <...> # if using debian-keyring package
From what I read, Debian 8 will be supported until April 2020 and Debian 9 until June 2022.
So in 2020 I will have to decide to either switch to Debian 9 or to Debian 10 which probably will be out by then. Is that correct? My feeling is that it might make things easier for me to skip Debian 9 and go directly with Debian 10.
I did the same with 7. My server used Debian 6 until I switched to Debian 8.
> If you use debhelper/9.20151219 or newer in Debian, it will generate debug symbol packages (as <package>-dbgsym) for you with no additional changes to your source package. These packages are not available from the main archive. Please fetch these from debian-debug or snapshot.debian.org.
No more shipping -dbg packages with full binaries. And less storage space is always a win.
This is surprising though:
> Python 2.7.13 and 3.5.3
I thought 3.6 was in Stretch out of the box. Why 3.5 only (especially on a LTS)? :\
Although Alpine Linux is my personal choice.
Edit: the uploads are complete, v1.2.0 of debian9-amd64 and debian9-i386 are released.
If there is user demand for it, we can look into vmware boxes, and possibly hyper-v too.
Apologies if anyone feels this is off-topic/opportunistic - AFAIK all other Debian 9 boxes on Atlas target Virtualbox only, and while projects like Boxcutter (which we forked from) do support Parallels/etc, they aren't always the quickest to produce new boxes.
How well is Chromium supported on Debian?
I like it as a secondary browser for its excellent support of multiple profiles but I run Ubuntu and had to switch to Chrome because Chromium doesn't seem to be updated promptly.
One thing that is new in this release is the availability of mod_http2, for Apache. I'm looking forward to seeing if that will increase the response-time of my various websites.
I suppose the idea of reducing freeze time with "always releasable testing" didn't really work out (lack of resources?).
In version 45 (released on March 8, 2016)
Sid (Debian unstable) is named after the guy that breaks the toys.
In the past, Debian was considered one of the most stable Linux distributions available. Stability and quality were priorities above anything else. However, around 2014 something changed, when systemd was forced into Debian in a way that would never have happened before the new generation of developers took over the project.
Maybe this is just something we have to get used to; young developers seem to value ease above quality and stability, which also explains the current flood of Electron apps.
The inputs seem to be road line recognition, optical flow for the road, and solid object recognition, all vision-driven. Object recognition is limited. It doesn't recognize traffic cones as obstacles, either on the road centerline or on the road edge. Nor does it seem to be aware of guard rails or bridge railings just outside the road edge. It probably can't drive around an obstacle; we never see it do that in the video.
This looks like lane following plus smart cruise control plus GPS-based route guidance. That's nice, but it's not good enough that you can go to sleep while it's driving.
"Please note also that using a self-driving Tesla for car sharing and ride hailing for friends and family is fine, but doing so for revenue purposes will only be permissible on the Tesla Network, details of which will be released next year."
Autopilot Updates: We just released the latest version of Autopilot. You can now experience Enhanced Autopilot features including Traffic-Aware Cruise Control, Autosteer, Auto Lane Change, Parallel + Perpendicular Autopark, and Summon. Automatic Emergency Braking, Forward + Side Collision Warning, and more advanced safety features are also active and standard.
All Tesla vehicles have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver. And Tesla vehicles continue to improve with over-the-air software updates, introducing new features and improving existing functionality to make your vehicle safer and more capable over time.
My theory is still that the demo video is actually from Nvidia's SDK and the actual autopilot they deployed is totally different and not actually in the 'self-driving' category at all at this point.
But they are very aggressively rolling out updates and new features for more autonomy and yes they do intend to push for a complete door-to-door self-drive ASAP, ideally before the end of 2017 (at least as a new alpha version they can demo). Otherwise they would not sell it as such. But they do not plan to take another year to get there, based on Musk's tweets and the fact so many already paid extra for a full self-driving ability.
There are a few new features that my AP1 might not have, like Perpendicular Autopark, but I won't know till I get it back. From what it seems, it's just gotten to the level they were at with the previous generation, which was developed by or in conjunction with MobilEye.
I think they will need a hardware revision for actual full self driving perhaps 2 years away.
This is a statement of intent, and production vehicles are a long way from having software that enables this.
The number of objects to detect and avoid will be way too high.
The tests show nearly ideal driving conditions. This should be tested on the streets of NY or in a busy city like Mumbai.
I think one big selling point of cars has always been that they grant the user a great amount of autonomy (unprecedented in their time, taken for granted nowadays). You can ride your car and go anywhere you like! The cost of that autonomy, of course, is that some of us will be killed or maimed in road accidents, because you can't give silly little monkeys autonomy behind the controls of big powerful machines without death and carnage ensuing.
Self-driving cars propose to reduce this risk of death and injury by taking away the autonomy we traded it for in the first place. What remains would be just a mindless automatic system carting the user to and fro. Well, in that case we don't need to wait around for full level-5 autonomy. We already have dumb machines that can do that: trains, trams, all sorts of vehicles-on-rails.
Why do we need self-driving cars, then?
Answer: we don't. And I haven't for a moment believed that any of this has anything to do with road safety. Note that nobody even discusses the other 900-pound gorilla in the room: pollution.
Guess what? Taking cars off roads completely would also reduce air and noise pollution tremendously.
That claim is strong and false. What about Roadster and the old Model S with the old AP1 hardware?
I wonder what the current status is, both in terms of software validation, and regulatory approval.
And with that, this study is bullshit.
Human beings don't listen to linear sine sweeps. We listen to music. Recorded music has 8+ octaves of frequency range (the bottom octave plus a little extra is almost always rolled off in real-world recordings, to ease stress on downstream components that can't reproduce such low frequencies anyway), and 20-50db of useable dynamic range.
Sine wave measurements of audio gear ignore impulse response, intermodulation distortion, phase shift, and a host of other real-world physical device responses to real-world musical signals. Scientific, reductionist thinking is inadequate to get an accurate picture of the factors that matter to human listeners.
Frequency response and total harmonic distortion aren't measured in these cases because they're useful or relevant. They're measured because they're easy to measure. It's like looking in the wrong place, because the light is better there. And the results? It's like measuring a car's performance by how well it can drive in a straight line at 60mph. Acceleration, braking, and turning are too hard to measure, so we ignore them...
I'm a musician and record producer. I've engineered and produced numerous albums, and rely on multiple different types of headphones for different purposes. The article's claim that one headphone can be easily morphed into another through mere equalization is, frankly, bullshit. The two headphones I rely on the most (Beyerdynamic DT880 and AKG K240) sound wildly different. Neither is "accurate". Neither are the Tannoy System 12 DMT midfield studio monitors I use for mixing, or the stock Subaru car speakers I use for reference to check the mixes from the Tannoys.
Audio reproduction is incredibly complex and difficult stuff. Trying to isolate one factor and saying "That explains everything!" is bad thinking.
It's really cool hearing what they heard in the studio control room for the final mix. And often surprising.
You can get a range of other precalibrated pro audio headphones or correction profiles from sonarworks.
Consumer headphones are just silly IMHO. Artificially boosted frequencies with prices up to $400. A set of precalibrated MDR7506's is around $220.
If you don't care about truly flat response with correction, you can get a set of AKG K240's for $100 bucks and they're super comfy, amazing sound and loved universally by audio pros.
- Someone with the online alias NwAvGuy threw the whole AV industry (ok, maybe not the whole industry, but some big players) for a loop by showing in online forums that a totally inexpensive DIY DAC (with a free design he shared) could be built with quality rivaling elite products worth thousands of dollars. (Well, a hazy version of the story goes that he exposed various audiophile review sites and forums as being full of sponsored reviews, and that eventually led to his ban from head-fi.org, I think.)
- As for capsule mics (commonly known as condenser mics), the market is flooded with DIY designs and kits which let you build/buy one for $200-$400 (the dominant cost being the capsule itself) that will rival the quality of multi-thousand-dollar mics. They go by names like "Neumann clones", etc. (no affiliation).
In retrospect, and given the shady things AV sellers do, like trying to sell you a USB or HDMI cable with gold-plated pins, claiming it to be superior, it should come as no surprise.
Though, no offense, but audiophile consumer base is filled to the brim with hipsters who judge the quality of a product by its price (and some of the "experts" were busted after they failed blind tests; I think opus vs flac, I'm mixing a lot of things now).
Headphones also have a serious empiricism issue. You can probably pass off one high end Sennheiser for another in an A/B test. But you couldn't pass off an Audeze for one and have a valid A/B test. Also, you will often read or hear an expert say, if the measurements say something is bad, but it sounds good, or vice versa, then it means we're measuring the wrong things. I'm not saying that the Harman response curve isn't valid. It's just not the whole story.
tl;dr -- Buy the cheapest headphones that you really like, and ignore whatever your coworkers say. ( Hell, there are actually Beats that are good headphones! https://www.innerfidelity.com/content/time-rethink-beats-sol... )
Things are going to change in significant ways in the future as the price of signal processing, compensation, and active correction drops, however. Combining those with advances in the cheaper manufacturing of better drivers will result in the headphones of 10 years from now making the high end headphones of today seem "meh" and today's typical headphones seem trashy.
We need objective benchmarks for everything. Especially when marketing is growing bigger each year. Even "Tech websites" are biased and not objective anymore.
It's very easy to say, "I can hear so much more of the song out of my ATH-M50's than I can a pair of Beats", and you may be right. But something objective to back it up would be great, too.
This is a silly assumption, and easily explained.
1. Most headphone purchases aren't and cannot be made by comparing sound quality. Reviews of sound quality are so universally understood to be subjective that most consumers probably ignore those details.
2. There is no one subjective or objective standard that is meaningful for all listening material. Podcasts, modern pop music, older pop music, classical recordings, television shows, and movies all have wildly varying acoustic profiles between and among each genre.
3. The vast majority of headphones have Good Enough sound quality for the vast majority of consumers. Sound quality is highly unlikely to be the primary reason most consumers buy a set of headphones, and it's unlikely to be the reason they are dissatisfied with certain headphones.
4. Headphone design, form factor, build quality, fit, feature-set, and even color are all much more important factors in terms of consumer satisfaction with headphones. They are, after all, a highly noticeable part of your ensemble. They are intimately in contact with your body. And you want them to work without thinking about it too hard. In addition to being more important, most of these factors are far easier for consumers to judge between headphones than sound quality, so again it's no surprise that an arbitrary single standard of sound quality would fail to correlate with perceived value.
In other words, this is silly for reasons that have nothing to do with technical arguments about actual sound quality, whatever that means.
Price is correlated with perceived value, which includes quality, brand recognition, brand opinion, current style, and a long list of other factors.
(And, yes, this is a horrible use of the word 'correlated.' 'Derived from' or 'based on' would be much better.)
"Nevertheless, assuming that the perceived audio quality is largely determined by the spectral magnitude response of headphones..."
This is a very wrong assumption.
Audio component designers have a more or less hard time picking which measurements correlate with audio quality. And frequency response measurements using sine sweeps, like in the cited study, are almost of no value for discriminating between two transducers (headphones, speakers) with regard to "audio quality".
Also, the fact that one headphone can extend beyond 20KHz or that it can go below 20Hz will give zero guarantee of better audio quality.
Frequency response measurements using white/pink noise can give a slightly better hint because they can take a look at resonant peaks that might be annoying to the listener, but even this is not a law set in stone*
* Impulse measurements (and waterfall plots) can give you a clearer idea of how clear is the sound going to be; but then you can have a transducer with a fairly good impulse response but a slight resonant peak somewhere --- OR you can have sometimes a transducer which shows pretty flat frequency response but bad impulse response.
A good test for intermodulation distortion (the big white elephant in the audio room) will REALLY give you a hint of which headphone will be least annoying to the ear when listening to loud complex music like classical music, vocal music, etc.
It seems that the article has been written by experts in acoustics, but not really in "audio".
TL;DR: Freq response measured with sine sweeps can't really tell you anything helpful to discriminate headphones with regard to sound quality.
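One concrete way to see the limitation being described: magnitude frequency response (what a sine sweep measures) is blind to time-domain behavior that the impulse response captures. A NumPy sketch with two idealized systems:

```python
import numpy as np

n = 1024
# System A: a perfect impulse (flat magnitude response).
h_a = np.zeros(n); h_a[0] = 1.0
# System B: the same impulse delayed by 100 samples.
h_b = np.zeros(n); h_b[100] = 1.0

# Identical magnitude responses: a sine sweep cannot tell them apart...
mag_a = np.abs(np.fft.rfft(h_a))
mag_b = np.abs(np.fft.rfft(h_b))
assert np.allclose(mag_a, mag_b)

# ...but the phase (and hence the time-domain behavior) differs completely.
phase_a = np.angle(np.fft.rfft(h_a))
phase_b = np.angle(np.fft.rfft(h_b))
assert not np.allclose(phase_a, phase_b)
```

A pure delay is of course inaudible; real transducers differ in messier ways (ringing, resonances), but the same blindness of magnitude-only measurements applies.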
Indeed, they did find a significant difference in magnitude response _error_, although the effect was quite small.
Of course all this is confounded by the fact that music will tend to sound best on speakers/headphones with a response curve most like the speakers/headphones that the mastering engineer used (or more accurately, the set of speakers/headphones that the engineer compromised among). You will probably tend to have the best experience listening to music with the popular devices within a given musical subculture, because mastering engineers will be targeting those devices.
I find little fault with the arguments laid out supporting the paper's thesis.
For those commenters making the jump to "sound quality" (which is not the topic of this paper), the quoted observation above conclusively proves that these headphones have differing tonal qualities. Even a casual listener will be able to hear a difference of 5dB in the critical freq range of human speech.
 In terms of music quality. Other use cases may prefer designs that focus on other features.
I could quantify that, but why bother unless I'm getting paid a hell of a lot to do it? I don't see anyone here who's championing this naive approach offering to pay for a study designed by an experienced professional, so don't complain about a lack of scientific rigor if you're not prepared to pay for it. I prefer the more concrete feedback of people telling me it's the best soundtrack material they've ever received in post production.
You can talk about the scientific method all you like. I'm very fond of the scientific method. But rigorous testing costs money. If you're not willing to put your money where your mouth is, then accept the opinion of people who do this kind of thing for a living.
Most consumer audio equipment is a scam. I'd be interested in the subset of equipment from Shure, AKG, Sennheiser, Sony, Beyerdynamic where the design was actually intended to produce a broad frequency range correctly.
What's better: speakers that go to 40 kHz, but have a big dip at 4 kHz, versus ones that go flat to 15 kHz and roll off after that?
The application allowed you to benchmark headphones in real-time, revealing "how accurately" your music was being recreated; you'd pit two headphones against one another: clash of cans!
Ultimately, yeah, there's the uber-uber high-end, the really clear low-end, and a ±$900 muddle of everything else.
I like my Sennheiser HD600's (and MDR-1000x for the office) which are $300 headphones, but equally happy to use Superlux HD-681 EVO or Soundmagic E-10 which cost around $30
Isn't consistency an important characteristic of a headphone? Perhaps even more important than some ideal frequency response. You want the same sound every time you listen to a song, you don't want it to vary.
What industry convinces you to buy things you do not need? Advertising
The idea that a particular frequency response is the thing that separates good headphones from bad is ridiculous.
I know some low-end headphones add weights to increase "luxury feel". It'd be interesting to see some research on when adding weight stops helping.
No correlation would mean that if I bought a random headphone that cost $2 (they exist, you can go to ali express right now and put in a maximum of $2 in a headphone search), and a random headphone that cost $500, then if you had to make a bet about which one would come closer to reproducing the bass of a song with a heavy bass, you would be betting even money. It would be a toss-up whether the $2 or the $500 came closer to producing that bass. Because there is no correlation.
Here is an example of correct usage of "no correlation": there is no correlation between a headphone's price and the md5 checksum of its SKU.
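To illustrate "no correlation" versus a weak-but-real one with synthetic numbers (this data is made up, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
prices = rng.uniform(2, 500, size=283)           # 283 headphones, $2 to $500

# Genuinely unrelated to price, like a checksum of the SKU:
checksums = rng.integers(0, 2**32, size=283).astype(float)
r_unrelated = np.corrcoef(prices, checksums)[0, 1]

# Weakly but genuinely related to price (hypothetical "bass accuracy"):
bass = 0.5 * prices + rng.normal(0, 100, size=283)
r_related = np.corrcoef(prices, bass)[0, 1]

assert abs(r_unrelated) < 0.3   # hovers near zero
assert r_related > 0.35         # even a noisy real effect shows up clearly
```

With no correlation, the $2-vs-$500 bet really is a coin flip; with even a weak real relationship, the correlation coefficient is clearly nonzero.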
I skimmed the paper. A better title (for HN) would be "No correlation between frequency response and price quartile in 283 headphones".
I like old-fashioned ephemera...
Curious: who is the founder of this project? Interested to hear more about its background and the team behind getting this off the ground.
A couple of questions:
- How come there is no search function?
- Why are authors sorted by first name?
- Do the results of the proofreading get fed back to Project Gutenberg et al.?
- Will readers like FBReader be able to add this catalogue?
So, sounds like a good idea and I hope it succeeds but it's not quite there yet.
You may also be interested in our toolset (GNU-compatible only at the moment, we're working on converting everything to Python but we're not there yet): https://github.com/standardebooks/tools
I'm happy to answer any questions anyone has. We're also more than happy to have new contributors, if you're interested in working on and proofreading a public domain ebook that you've been meaning to get to.
Some of you have mentioned concerns about the modernizations we do. The key word I think is "light modernization". Mostly that just means bringing spelling up to modern standards, and removing a lot of hyphens in words that are no longer hyphenated. A common one, for example, is to-morrow -> tomorrow. Another one we recently added was lacquey -> lackey. Generally we leave punctuation and grammar alone. I liken this to modern books replacing the "long s" character--it's just presentation that doesn't affect the meaning. Modern readers would rather see "successful" instead of "ſucceſsful", even though the latter is what was originally printed.
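For illustration only, the kind of whole-word substitution described might be sketched like this; the table is a tiny made-up example, not Standard Ebooks' actual tooling:

```python
import re

# Illustrative modernization table (to-morrow and lacquey are from the
# examples above; a real list would be much longer).
MODERNIZATIONS = {
    "to-morrow": "tomorrow",
    "to-day": "today",
    "lacquey": "lackey",
}

_PATTERN = re.compile(r"\b(" + "|".join(map(re.escape, MODERNIZATIONS)) + r")\b")

def modernize(text: str) -> str:
    """Replace archaic spellings with their modern forms, whole words only."""
    return _PATTERN.sub(lambda m: MODERNIZATIONS[m.group(1)], text)

assert modernize("The lacquey will return to-morrow.") == "The lackey will return tomorrow."
```

The word-boundary anchors matter: you want to modernize "to-morrow" without mangling words that merely contain one of the table's entries.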
I struggled for a long time with my desire to see older books with modern spelling and typography, versus preserving the intent of the author and original publishers. Over time I've come to realize two things:
1. Many books back in the day were heavily edited by the printer and publisher without the author's input anyway, so you'll get various editions over time that look totally different. Jane Austen books are a good example of this--early editions often have a pathological overuse of commas, while later editions published after her death just remove a lot of them without comment. So when we're producing our own ebooks, we accept that there's a level of editorial discretion involved, and that "the author's intent" was a very fuzzy and often totally ignored topic hundreds of years ago anyway. How can we tell what the author's intent was in the first place, if various printers and publishers have meddled with the editions for hundreds of years already?
2. For those of you who want to read the originals in their totally unedited form, other projects like Project Gutenberg or Wikisource already have those faithful transcriptions for you, and places like Internet Archive, Hathi Trust, and Google Books have the page scans for you. By lightly modernizing our own productions, we in no way diminish your access to the painstakingly-preserved digital editions; we're just adding another option for you to read.
Liberated? That ephemera might actually be integral to the story and you are NOT the arbiters of intent. Please keep your modernizing out of my lit'ratur.
Pap, The Adventures of Huckleberry Finn
I see that the page is a bit slow. If you need any help to port it to a static format (for performance), please let me know.
Thanks for all your work!
Most of the consumers so far have been neuroscience researchers and statisticians, but we do hope (and think) that there's value for a wide variety of interests.
There's a bunch of different data, but the highlights are fMRI scans of people watching and/or listening to the movie Forrest Gump, eye tracking, and detailed annotations of the movie. We are also about to begin acquiring simultaneous EEG and fMRI.
Accessing the data is easy, and, as great admirers of Joey Hess, we also have it available in a git annex repo. :-)
[EDIT] Given that this thread is about open source datasets, it's probably worth mentioning that the license is PDDL.
Unfortunately, Open Source does not help here -- I do not see how OS can be used with data sets. The main OS leverage with software development is that if you use software X to build software Y, X is usually present in some way, shape or form in your deliverable Y. Not so with training data -- once algorithm development is done you can (and usually do) strip training data out and have a finished product that does not require X to run.
Even if one were to require open sourcing derived datasets it is usually easy to segregate the dataset with a tainted (open source) license as you build up your data so the new datasets are not formally "derived" and thus would not need open sourcing.
I would love a better way forward on this, or at least a cleaner explanation of options.
"Starting in 2009, the Global Facility for Disaster Reduction and Recovery (GFDRR) and its partners developed GeoNode: web-based, open source software that enables organizations to easily create catalogs of geospatial data, and that allows users to access, share, and visualize that data. Today, GeoNode is a public good relied on by hundreds of organizations around the world ... GFDRR's direct and in-kind investment in GeoNode over the past six and a half years has been in the range of $1.0–$1.5 million USD. Partners have also made significant investments in GeoNode; a conservative estimate of these partner investments comes to approximately $2 million USD over the same time period. GFDRR's investment in GeoNode would be a reasonable amount even viewed strictly as a software development cost: the GeoNode software today represents an approximately 200% return on investment in terms of code written, since the current GeoNode project would most likely have cost $2.0–$3.0 million USD if GFDRR had produced it alone as proprietary software, without building an open source community around the codebase."
This is an unusual situation; many people need geospatial databases, and contributing their local data is useful to them. The value here is in the data, not the code. This is more like Open Street Map than a software package.
I'm all about open-source, but I wish people wouldn't focus on how companies should do it because it's good for them financially (although granted that's probably more effective with the intended audience than what I would say). I wish a bigger deal was made about how it's just a douche bag move to sell software and proactively prevent users from having freedom to understand, fix or modify it for their needs - that applies to more than just the source availability and license.
A lot of times I hear that the implementation cost is where all the money is, so it doesn't matter what the software costs. That is sort of true, but large companies are not incentivized to make it any easier to implement, lest they put their System Integrators out of business and/or push them to other vendors. The Open Source community does not have this incentive, obviously.
Note that the study does not actually measure ROI from a revenue perspective, but estimates based on theoretical saved costs: the company invested $1M in open source infrastructure and potentially saved $2M in direct development costs (given that the code base is currently worth $3M).
Most interesting takeaway for me is the implications of open source for government funded projects, and a ratification of the idea that contributions of code for some public tool can save the general public tax money. A forward thinking org could try to broker some sort of tax cut based on SLOC contributed to public, government-sponsored projects? Maybe that already exists.
Would suggest studying Red Hat's rise to $2B in yearly revenue to understand how a company takes open source and turns it into revenue.
 GFDRR's direct and in-kind investment in GeoNode over the past six and a half years has been in the range of $1.0–$1.5 million USD... GFDRR's investment in GeoNode would be a reasonable amount even viewed strictly as a software development cost: the GeoNode software today represents an approximately 200% return on investment in terms of code written, since the current GeoNode project would most likely have cost $2.0–$3.0 million USD if GFDRR had produced it alone as proprietary software, without building an open source community around the codebase.
Anyway, it looks like an interesting report and I look forward to reading it in more detail, but I think the headline in the blog-pointer is unwarranted.
A more meaningful measure is how quickly you can resolve a problem with open-source for X amount of investment, versus other options. With that, if a package doesn't do what you want then investing nothing appropriately yields NO return; whereas, investing certain amounts of time (asking questions, filing bugs, etc.) may yield more return, and fixing it yourself may yield the most.
The current title, "World Bank-Sponsored Report Shows 200% ROI on Open Source Participation," the contents of this link, and even the World Bank's own blog's title, strongly suggest that this was a World Bank-commissioned study across multiple open-source projects/communities. Note the plural in OP: "to quantify the benefit of contributing to and participating in open source communities." And the World Bank's blog title: "Leveraging Open Source as a Public Institution New analysis reveals significant returns on investment in open source technologies."
But that's not the case at all. As noted in other comments, this is a single community, a single project. Granted, it's a successful one. But we shouldn't get our hopes up about "oh s*, this is an article I can forward to the C-suite to get us to invest in open source!" What we have here is technically accurate clickbait that relies on the brand of the World Bank's analysis. And, in being disappointingly vague, it tarnishes that brand.
Which references link:
Which references PDF:
OPEN DATA FOR RESILIENCE INITIATIVE & GEONODE A CASE STUDY ON INSTITUTIONAL INVESTMENTS IN OPEN SOURCE
Edited: It's interesting to see how a comment which states facts can get upvoted and downvoted this much. Sometimes voting on HN does not make any sense (to me). I understand that an upvote is "thanks for letting us know those facts". What are the downvotes representing? That I should not write at all, that the price increase for all 3 telcos is fine, that everyone should be happy? Rhetorical question.
I can also note that this law has resulted in a lot more unlimited plans. I myself have just gotten one which includes 30 GB of roaming. Is it cheaper than before? Hell no. Do I have to care about how much I surf, when or where? Not anymore - and freedom is worth the extra 20.
The frequent travellers (presumably wealthier) get subsidized by the infrequent travellers (presumably less wealthy).
so yes, receiving calls or using the internet will be the same as at home, but you still have to watch which number you are calling - whether it's in your carrier's country or a different one
please correct me if I am wrong
EDIT: so I was right, now it's even more insane than before:
For example: If you have a Belgian card and you travel to France and call either a hotel in France, back home to Belgium, or to any other country in the EU and the EEA, you are roaming (refer to legal text on the regulation on roaming), and you will pay Belgian internal domestic prices (refer to legal text).
However, if a Belgian SIM card holder calls from Belgium to Spain, she/he will pay the international tariff. Calls from home to another EU country are not roaming and are not regulated.
TLDR: there are no fees for international calls while you are roaming, but when you return back home, enjoy fees for international calls. Using your SIM in a foreign network is cheaper than using it in your own network at home.
Surely some more central regulations will remedy the situation! "To each according to his needs." What could possibly go wrong?
So if you want to use your phone in a different country during the holidays, you'll need an EU roaming subscription.
The politicians once again failed to be sufficiently precise in formulating a law that would produce the intended result. They should have added a clause stating that all subscriptions are to cover the entire EU.
on Three I've had free roaming for years, at no additional cost, across the EU and a good chunk of the rest of the world
eg people who never "roam" are going to be subsidising those people that do.
A negative move spun as a positive... clever EU, clever.
But members break the tools all the time and don't take responsibility for it. Even though there are cameras and people have to swipe their card at the door it still happens.
I think one reason sharing is not as common is because people are jerks.
I remember when I was a kid we used to borrow each other's NES games all the time and never give them back.
I applaud any effort to rebuild our fractured society.
For higher value items, I've been meaning to extend the above apartment-wide setup with a Google Doc inventory of things that people are willing to share but want face-to-face confirmation for, like loaning a camera or a mountain bike. I wish there were a way (a social institution moreso than a technical solution) to make quick contracts for borrowing things. I'm privileged enough to be able to replace minor things, but I am definitely reluctant to loan big things if I don't know if a friend can/would replace the thing if something bad happened on their watch. And no, I don't want to rent them: I don't like the cognitive overhead of markets, and that's not the point.
In all seriousness, as others have noted, I see this as a rather damning comment on how badly human contact is getting abstracted away to businesses more and more. It's rather sad that people no longer feel able to just talk to others without some organisation to mediate.
Student loans were shown to be the primary deterrent. Of course what wasn't to blame was that the disparity between the median wage and a comfortable life is growing. Another thing they fail to attribute this to is that millennials are smarter, avoiding spendthrift mistakes like large mortgages which tie them down to a place and leave them a paycheck away from homelessness.
source: Just google 'Millennials aren't buying <insert anything here>'
It's hard enough to get enough time to do something, imagine requiring that time to be in commercial hours, prefaced by a drive or walk somewhere, a talk with somebody, then postfaced by the same. And then you forgot something...
I once lived out of two bags for 11 months. After living in a one bedroom apartment by myself for about a year, I was surprised just how much I had to sell, give away and get rid of. I even tried to keep in my head that I wasn't /really/ buying anything, but basically renting it until I took off again. I always tried to buy used or from thrift stores whenever possible.
Also take the idea to places where people aren't used to having and owning these possessions; get them while it's still a nascent idea.
Still, I find it interesting that he managed to raise $30k on IndieGoGo. It signals that people care about those ideas/ideals.
The problem is that people aren't 'settling down' and instead are frequently moving around for work. This makes it hard to build up communities with the people around you. Your work becomes your 'stand-in' community - which has its disadvantages. This has ramifications for health and happiness far beyond borrowing stuff. The studies that show who lives longest aren't correlated with western health care or even good diet - it's the strength of people's bond with their community.
I'm currently doing this with a car for my visiting son. Too young to rent for, but not a big deal to buy an old car for 2 months and sell it when he leaves. Basically paid the registration fee for 2 months + gas, which is $300 for a 2 month rental. Then the "fun" (which it is to some people) of dealing with craigslist crazies while selling it. I actually enjoy dealing with the flakes, putting myself in their shoes and getting practice negotiating.
I did grow up on the phrase "he who dies with the most tools wins", so it's taken me some time to transition to rent/loan. But I've got so many tools and supplies now, and I've reached a point in my life where I'd like to do more and own less, and all those tools are now somewhat a burden. I bet I'm not alone and that these tool libraries could probably get a lot of high quality donations.
It's as eternal and essential a balance as CPU vs. memory.
There's also the Edmonton New Technology Society, which was the original Makerspace in Edmonton. Unfortunately for me, the location means that I'm unable to visit regularly.
You should buy what you need with the rights you're entitled to, and figure out the difference between what you need and what you really don't.
But feeling that way, at least initially, about a startup that becomes big often seems to share that characteristic.
People go camping in Hawaii, buy gear and give it way when they leave.
If I were to make such an app, how could I get a company to underwrite and provide insurance for these items?
Worse still, with ~3000 citations, Dwork's Differential Privacy (ICALP (2) 2006: 1-12) should rank even higher in the Theoretical Computer Science list. But Google Scholar has completely lost track of that foundational paper; it's got it all confused with a completely different paper, Dwork's 2008 Differential Privacy: A Survey of Results. Note that this also means that anybody searching for the general topic "differential privacy" on Google Scholar will not get to see the most-cited paper about it! https://www.microsoft.com/en-us/research/wp-content/uploads/...
Disclaimer: Dwork and I have been seen together, for 24 years.
This almost sounds like collecting my most liked pics from 2006 on Facebook and creating an album "Best moments of my life".
Do they not have data before 2006 ?
For more papers, there is a nice list here: http://jeffhuang.com/best_paper_awards.html not limited to 2006
There is a bunch more places to get papers listed here too: https://github.com/papers-we-love/papers-we-love#other-good-...
I'm thinking about research versions of Lord Kelvin's favorite edict: "Heavier-than-air flying machines are impossible" or the patent person (examiner? head of patent office?) who in the nineteenth century said everything that can be invented has been invented.
This... doesn't seem like a very representative selection of 'timeless' papers.
Things that had a major impact on the problems they focused on which many other papers doing something similar built on or constantly referenced. I'm skeptical of citations in general since those who chase them usually do a high number of quotable papers in whatever fad is popular instead of hard, deep, and critical work. Those I listed are the latter with who knows what citations. The collection is probably still nice for finding neat ideas or just learning in general.
no, a collection of titles. a collection of papers would be very useful; these are just links, e.g., to paywalled sites.
It took me about ten years to figure out that Twitter is a cesspool of useless noise and ego. Everybody tries to outdo each other with noise and follower count. What Reddit does right is focus on topics, primarily, not personalities. (Although I actually like the new user profiles, since they tend to be secondary focus).
Twitter could have been something different, and I think that expectation for something more got priced into what it is valued at today. Based on Twitter's current market cap (12.12 billion dollars!) it's already overvalued by a LOT; and there's really nowhere else for it to go but down. Any new users it gets are just bots or other political warfare tools.
For me, the final straw was that Twitter wouldn't "verify" Ecosteader as a legit account. So I deleted my Twitter accounts, sold a small investment I'd opened a couple years ago, and now spend more of my idle time reading Reddit rather than Twitter. And I feel so much better for it...
A subreddit is far more useful than a hashtag... it has staying power, searchability, and (like Twitter) is the kind of place where people will vent and where companies can interact with customers / users. The key for Reddit, I think, will be to do what is right for its users to achieve information awareness... Conde Nast is a news platform, after all. Let's just hope they don't let themselves go the way of Yelp.
At about the start of 2017, Reddit saw a noticeable increase in the activity growth rate, which investors love (although the biggest chunk occurred around October/November 2016, due to the U.S. Election)
And here's the BigQuery to reproduce the aggregation:
#standardSQL
SELECT
  DATE_TRUNC(DATE(TIMESTAMP_SECONDS(created_utc)), MONTH) AS mon,
  COUNT(*) AS num_submissions
FROM `fh-bigquery.reddit_posts.*`
WHERE (_TABLE_SUFFIX BETWEEN "2015_12" AND "2017_04"
       OR _TABLE_SUFFIX = "full_corpus_201512")
GROUP BY mon
ORDER BY mon
Sometimes they do that under the covers, so to speak, and some users don't like it (e.g. hailcorporate) but often they will promote their products transparently, and often normal users don't mind. The most obvious example is Netflix on /r/movies. Multiple employees were observed continuously posting things to the site, via submissions, comments etc and users liked it.
By introducing a charge for these corporate users they can rake in a substantial income. Whether it would make sense for them to label these corp users as such is another option, but they certainly know about them. I find it interesting that most normal users don't mind interacting with paid marketing employees and consider it organic and natural - very interesting. I also find it worrying.
Reddit is only a thing because people feel safe there. What other site of its user base does that?
I'm surprised Reddit stays so popular. I stopped using it ~8 years ago. The habit didn't stick with me (beyond a year or two).
Their struggles with advertising are something they can actively address now. Back when they were smaller than Digg, Digg made a huge mistake and alienated its users, which allowed reddit to flourish into the site it is today. Due to the fear of what happened to Digg, their advertisements have been less aggressive and attempt to be targeted. But since reddit is so large now, the risk of alienation is much smaller. Also, other contenders like imgur and voat have tried to take the throne from reddit without success. Also, they have made great strides in making it accessible to everyone, and you can find anyone from any part of the political spectrum. If reddit wants to make money they have to broaden their appeal greatly, and I believe that's the purpose of the $150m. Hopefully when they do this they won't turn out like Digg, because they are a much larger network now.
Disclosure: Worked in this space 10+ years.
YouTube is the only one I see who is really trying to make it work. Instagram worked it out by placing ads with the most popular models.
And a key sin of all these sites is they think the content is theirs. It's not. Stop trying to regulate it to be what you want. It is what it is and you should be thankful, very thankful, they chose your site to host it. But this doesn't seem the case. The reddit admins in particular seems to hate a large part of their content-creating user base.
I'm (obviously) not the target market, but I absolutely detest disingenuous behaviour like this.
1. They have some weird policies about how they want their employees to work. At one point they were OK with being remote, then they moved everyone to SFO or told them to pound sand.
2. Scaling issues: seems like that's an afterthought and only gets addressed "when it happens" (never thought through before the fact).
3. The modmail: it's bad when you're dealing with lots of it. There are features completely lacking (like searching, or a CRM for users and how they contacted the mods).
4. The non-obvious spam: it's gotten worse now that they took down r/spam.
Keeping in mind this has remained a mystery to me ever since Facebook didn't sell for a million dollars an eon ago. Facebook's founder is today worth some ridiculous sum of money. Why is that? I'd have sold Facebook for a million dollars and then just made another website. What am I missing. YouTube sold for some amazing sum ages ago to Google, who has from my understanding still not turned a profit with it.
Since I am clearly so out of touch please make the explanation easy to understand.
It would be interesting to see some potential marketplace addons or payment processing.
Thus reddit could, long term, replace or absorb Etsy and maybe even Craigslist.
Steemit - "Your voice is worth something": https://steemit.com/
Of all the supposed reddit-killers that have been put forward, this is notable because of the funding mechanism, and because everyone there seems so damn excited. However, a lot of the most popular articles seem to be about... Steemit itself.
I know reddit gets a lot of flak for shit-posting content, especially during and after the recent US elections, but it has huge potential to become a consistent part of a user's media consumption & participation diet.
It's really addictive.
Most of all I remember "Creature of Havoc" ( https://en.wikipedia.org/wiki/Creature_of_Havoc ) as being amazing and extremely hard. Instead of being an adventurer you play a monster with limited IQ forced to unravel the mystery of your own existence. It employed various techniques that prevented cheating like "If you have the key, add the number written on the key to this page number to open the door". One of those puzzles still has people discussing it http://laurencetennant.com/bonds/creatureofhavoc.html ( contains spoilers ). At 13 years old it took me and a friend 2-3 months to finally crack it.
A play-through can be attempted very quickly, every time experiencing something new -- you are racing through the world attempting to return to London in 80 days.
The creator, Inkle, has a more traditional RPG, Sorcery. Also good for re-creating the feel of a classic D&D adventure, but I enjoyed encountering automatons in Vienna in "80 Days" more.
The map will be more linear-ish, or rather one main path with side loops -- imagine passing levels in a game: you are given ways to practice a new skill until you are able to pass to a new level.
More interestingly, the progress through the book can be itself constrained by a kind of crypto.
The chapters in the book will be numbered and ordered at random. At the end of each chapter it will say "goto chapter 234." or "goto chapter 34 mod 12"
Now imagine the player wants to cheat and starts with a random chapter in the middle of the book. He won't be able to find the previous chapter (it's kind of a one-way function). Moreover, if progress to chapter N+1 is gated by a puzzle that requires a skill learned in chapter N-1, he can't move forward either.
Some initial notes are here: https://github.com/sustrik/crypto-for-kids
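The chapter-shuffling idea above can be sketched in a few lines. This is a hypothetical illustration (not from the linked notes): chapters are printed in a random order, each one ends with a "goto chapter X" pointer, so walking forward is trivial while recovering the previous chapter from a label alone requires scanning the whole book.

```python
import random

def shuffle_chapters(n, seed=7):
    """Assign random printed labels to chapters in story order.

    Returns the labels in story order, plus the forward pointer map
    (the "goto chapter X" printed at the end of each chapter).
    """
    rng = random.Random(seed)
    labels = list(range(1, n + 1))
    rng.shuffle(labels)  # labels[i] = printed number of the i-th story chapter
    next_label = {labels[i]: labels[i + 1] for i in range(n - 1)}
    return labels, next_label

labels, next_label = shuffle_chapters(10)
# An honest reader just follows the pointers forward:
cur, order = labels[0], [labels[0]]
while cur in next_label:
    cur = next_label[cur]
    order.append(cur)
assert order == labels  # forward traversal recovers the story order
```

A cheater who opens the book at label 42 has no map from a label back to the chapter that points to it; that inverse only exists by exhaustively reading every chapter ending.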
This is all so obvious but it never solidified concretely like this for me until now.
The following domains have a bunch of stuff taken away from them, you're left with a narrow domain of very few concepts, and once narrowed it is intuitive to make a visual tool:
webforms - google forms
relational forms - airtable
computer aided design music - OneNote mathematica
video games - unity
website - squarespace
crud app - hyperfiddle.net
world wide web - internet explorer, or html
What other ways can we attack a large domain like "enterprise business apps" and take things away until left with a few simple composable primitives?
Web blurb, slightly different from print
Meanwhile began as a series of seven increasingly complex flowcharts. Once the outline of the story was structured, a computer algorithm determined the most efficient way to transfer it to book form, using a system of tabs to interlink the panels and pages. The problem proved to be NP-complete; it was finally cracked in spring of 2000, with the aid of a V-opt heuristic algorithm which ran for twelve hours on an SGI machine.
As a kid, I played 'City of Thieves' by Ian Livingstone. When entering that city, there is a crossing and one can pick three roads, all leading to the same city market. I always wondered: what is the best route of the three?
To do so, I first ported 'City Of Thieves' to console and desktop and Nintendo DS (after mailing the book company for permission, which I got). Then I wrote an AI that assigns payoffs to the different chapters. Not only did this result in such a map, but also the payoff it assigns to each chapter: https://github.com/richelbilderbeek/CityOfThieves/blob/maste...
The question is still unsolved though, as I do not trust the implementation of the AI :-)
I didn't like the idea of lying about reading but I was OK with gaming the system by reading 'choose your own adventure' books.
I would pick the dumbest options because I knew it was likely I'd die fast and the book would be over.
I only read a few of these way back when, so I don't remember exactly if this happened, but another possible take would be a sort of 'where were you when the big [whatever] happened'. How do different choices early on determine how you're affected by the Plague/ day Dublin's streets ran with Guinness/ Chicxulub impact.
I ran into a lot of unexpected technical complexities in the compilation step, for example trying to remove unreachable branches of the story, and optimizing situations where branches merged back together, or removing variables that were being tracked that no longer had any effect on the story from that point on. It was a fun exercise. It makes me wonder how earlier CYOA books and series were actually written. How hard was it to keep track of the various branching plot lines? Were there ever cases where "bugs" were published? I was a massive fan of CYOA, especially Steve Jackson and Ian Livingstone's Fighting Fantasy series.
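One of those compilation passes, dropping story branches no sequence of choices can reach, is just graph reachability. This is a minimal sketch under an assumed representation (a dict from node id to choice targets), not the commenter's actual code:

```python
def prune_unreachable(story, start):
    """Keep only story nodes reachable from `start` via some choice path."""
    reachable, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node in reachable:
            continue
        reachable.add(node)
        stack.extend(story.get(node, []))
    return {node: kids for node, kids in story.items() if node in reachable}

story = {
    "start": ["left", "right"],
    "left": ["end"],
    "right": ["end"],
    "end": [],
    "orphan": ["end"],  # no choice ever leads here
}
pruned = prune_unreachable(story, "start")
assert "orphan" not in pruned
assert set(pruned) == {"start", "left", "right", "end"}
```

The trickier passes mentioned (merging converged branches, dropping dead variables) would build on the same graph walk.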
Japanese Visual Novels are basically CYOA with pictures. The number of decision points varies depending on the book in question, but the basic structure is still there.
Twine is a system that allows you to create stories, essentially CYOA but with the option of adding variables. For instance, you could have an option that is only selectable if you found a key earlier. This bridges the gap between CYOA and classic text adventure. Since Twine outputs HTML it is also easy to port wherever you want.
Finally, there are a number of online community CYOA. This being the Internet, the quality is varied and many of them are pornographic. Probably the biggest is Addventure
 http://twinery.org/ http://www.addventure.com/
In one short story he proposes a kind of reverse CYOA book: A book with many beginnings but only one ending.
Still the maps match up with a lot of the examples I have seen of flow charts/maps/grids from authors who scope out their stories and then fill in where its important and interesting with the actual text we get to read
One more that I don't think was reachable from the article is on the blog These Heterogenous Tasks:
Or I was maze-solving. Probably both.
I wrote my own gamebooks using a simple notepad and "turn to page XXX"-style narratives. In the end, they are just programs that you follow. :)
To this day, I'm still fascinated by them and recently wrote some sites that let you create CYOA-style adventures yourself and with others; http://www.thiswayorthat.club is one of them.
I really enjoyed http://chooseyourstory.com/story/ground-zero and http://chooseyourstory.com/story/dead-man-walking-(zombie-su...
After we were done I added some code to output the graph structure of the game, rendered it with GraphViz, and gave it to the artist, who came up with this: https://twitter.com/rmodjeski/status/455184159401472000
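Dumping a branching story as GraphViz input is only a few lines; here's a hedged sketch (same assumed dict-of-choices representation as above, not the code from the tweet) that emits DOT text you can feed to the `dot` tool:

```python
def to_dot(story, name="cyoa"):
    """Emit a GraphViz DOT description of a branching-story graph."""
    lines = [f"digraph {name} {{"]
    for node, targets in story.items():
        for t in targets:
            lines.append(f'  "{node}" -> "{t}";')
    lines.append("}")
    return "\n".join(lines)

story = {"start": ["cave", "forest"], "cave": ["end"], "forest": ["end"], "end": []}
print(to_dot(story))  # pipe into: dot -Tpng -o story.png
```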
I've had a blast reading this together with my 7 year old son.
(Disclosure: I am one of the programmers)
Having newspapers write an article about it is probably one of the more effective ways of dealing with this.
Konami treated Kojima like shit, Kojima left and took most of the talent with him. What Konami has left is a pretty cool engine and a hot IP, no creatives. MGS fans know it and don't respect Konami's next project even a little. Zombie-survival alternate universe, really?
In contrast I don't even know what Kojima's next thing is, it'll probably be pretty cool though. Kojima is good at over the top absurd and awesome.
Also, it's a fucking travesty that Fox Engine won't be used for anything important ever again, because for an open-world engine it runs like a dream. Too bad the online multiplayer is a non-stop cheatfest.
Am I missing something? Perhaps it's more common in Japan's video game industry to work in the same place for life?
The judge will only want to know one thing: did that person work for Konami or not?
Is there no paper trail to substantiate that? Contracts?
Did they have no signed contracts, and get paid in cash?
It's unfortunate that a slew of IPs will go down with Konami, but they haven't been looking great in a while and this is just the conclusion of the signs from the past. Silent Hill, Metal Gear, Suikoden, ZoE, even those weird late-90s PS1 games like Broken Helix will always be remembered by me.
I had a startup that tried to do this a year or so later, after I left the company on bad terms, mind you. The lead basically tried to bully me. I just up and left the same day he tried that.
I read this and look around where I'm at, palm trees and beach. Oh right I'm in California.
I sent several emails trying to explain to the lawyer. He made a bunch of bs excuses.
So I immediately googled for a cease and desist letter and sent it to the buddy.
He basically asked me if I know who he is.
I don't care if you're Donald Trump. If Trump couldn't pass the muslim ban, good luck with you trying to ~~force~~ coerce me to remove this experience from my linkedin.
After that I never heard from the lawyer or his company again.
A team I worked with in South Korea regularly slept at the office because the Korean lead was there slave-driving the crunch. South Korea is in itself a hard place to even launch a game, because they demand percentages and you must have internal teams/representatives; similar with China. On the China team, the employees were always in a state of fear of making the boss angry or doing the wrong thing. I noticed that it led to releases shipped just to meet dates, even if the work was incomplete, just so the boss would not get mad. The devs told me they regularly ship when not complete just to satisfy dates, and it led to many issues, especially at hand-off points, because they knew it wasn't fully functioning.
I think overall, companies in games think they can get away with this ownership/authoritarian attitude anyway, but it might be easier in Asian cultures, where there is a more authoritarian lean.
I know it's generally a bad idea to badmouth employers, past or present, on Facebook or other social media... but punishing employees for merely "liking" a post is more excessive than I've heard before. Maybe it's a cultural difference, but it seems very Big Brother of them nonetheless.
I'm sorry, "could not show this application to the chairman"?
Goes to show you once more how many people in positions of power have the emotional intelligence of a teenage bully, yet can't be touched.
and yet here we are, slaves to the whims of a few thousand people globally who control 35 trillion dollars in assets.
This virtually guarantees that if you want to make a livable income, you either need to work for or sell to a truly psychotic group of individuals who control an arbitrarily large portion of wealth thanks to a host of monetary and legal policies.
(Don't tell me the US is the same: the transparency with which they wash their dirty laundry in international public waters, and the fact that they have many layers of "social backups", are just two reasons to believe otherwise.)
(And I just noticed I should not have included the post as part of the signs; sorry for any inaccuracies I may have caused.)
Say to detect if something is or isn't a hot dog?
-GPS position, intent/goal, domain etc.
If I'm at a dog show, I'd want the breed, etc.
If I'm on the street, I just want it to come back with "dog", maybe "dangerous dog" or "friendly dog".
Also, it would be cool/scary to just get back "movable object 1", "person 1", "living movable object 3", etc., and if I give it multiple scenes from a video, it knows person 1 is the same person 1, and if I name them Tony, it keeps tracking Tony.
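A sketch of that last idea: keeping "person 1" stable across frames is multi-object tracking, and the crudest version just matches each detection to the most-overlapping box from the previous frame. All class names and thresholds below are invented for illustration:

```python
def iou(a, b):
    # intersection-over-union of two boxes given as (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

class NaiveTracker:
    """Assigns a stable integer id to each detection across frames by
    greedily matching boxes to the closest box from the previous frame."""

    def __init__(self, threshold=0.3):
        self.tracks = {}   # id -> last seen box
        self.next_id = 1
        self.threshold = threshold

    def update(self, boxes):
        ids = []
        for box in boxes:
            best_id, best_iou = None, self.threshold
            for tid, prev in self.tracks.items():
                overlap = iou(box, prev)
                if overlap > best_iou and tid not in ids:
                    best_id, best_iou = tid, overlap
            if best_id is None:  # no good match: this is a new object
                best_id = self.next_id
                self.next_id += 1
            self.tracks[best_id] = box
            ids.append(best_id)
        return ids
```

Real systems use appearance embeddings rather than box overlap, but the contract is the same: feed detections per frame, get persistent ids back.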
Thank god the author hasn't lived through an event where everybody you know is affected by the event. The ability to say "I'm okay", say it once, and have everybody you know on FB see it is a huge stress reducer. It cuts back on the number of "are you okay?" messages you receive during the event when you may not have a lot of battery or a lot of spare brain to dedicate to answering lots of bullshit inquiries.
If he's feeling stressed out because of FB opening the "I'm okay" service in that small area for that catastrophic fire, he's being a self centered jerk. I guarantee that FBs service is helping some poor soul mixed up in that mess.
Edit: The Facebook safety check feature is not unique to Facebook. In many ways it mimics Japan's disaster message board feature. Every telco in Japan offers this service:
I really think the author has lived too lucky a life in a place that has not suffered overmuch from disasters. He can't see beyond his own narrow vision.
Why does everything Facebook do have to be so heavily criticized?
Safety Check is a wonderful feature. If I remember correctly, it started off as an internal hackathon project that got turned into a full feature. They get shit if they turn it on (here), and if they don't turn it on (past tragedies where they failed to turn it on).
Why does everything Facebook do turn into a riot? After years in the industry, 95% of things that are "bad" that come out of big companies end up being well meaning and just look malicious without context. Hanlon's razor is real.
Yeah, Facebook has to fund all this somehow. Yeah, they are going to make their ad space extremely valuable with all the information they have. They don't sell your raw data. They sell access to you like the rest of the industry.
Those creepy ads that you saw based on some conversation you had? Turns out that they're NOT listening to your mic or whatever. It's either confirmation bias or something you're not thinking about.
Those friends that they suggest with a new account? Turns out your friends posted pictures of you on Facebook, and Facebook knows how to do facial recognition.
It feels like everything Facebook is overblown on HN. What am I missing?
Edit: I should have said this originally, but I'm a former Facebook employee, now at another big tech company. I try not to be too controversial in writing, which is why I made a throwaway.
That's the problem.
It should be a positive notification only, without any negative one. People can say they are safe (I see value in that). But Facebook should not say anything at all if someone has not declared themselves safe.
Stop using Facebook. Start telling your friends and family to do the same. As the "smart computer person" in many people's lives, you can be the voice they need to hear.
>Facebook CEO Mark Zuckerberg committed to turning on Safety Check in more human disasters going forward, responding to criticism that the company turned on its safety feature for Paris but not for Beirut and other bombings.
I'm not buying this statement. Where is the evidence of this? The article features two tweets from nondescript people stating they think the feature spreads unnecessary fear, but features no tweets from people who actually felt unnecessary fear. Are there any cases of people who felt afraid because their loved ones didn't check in even though they could have? Otherwise to me this argument is just speculation.
We would be much better off if we stopped accepting fake apologies and 'the algorithm did it not us' excuses.
Facebook employees programmed this thing under, I assume, the direction of management. This is Facebook's fault, not some magic, wibbly-wobbly force. It's one thing to have a bug, but this is working as specified.
It was the same recently when we had a storm in New Zealand, and they activated safety check for the entire country. I don't think it even ended up raining where I was at the time.
Back in 2001, I was in India when one of the worst earthquakes to hit that state in recent memory struck. I lived with my grandma and grandpa, while the rest of my family - mom, dad, sister, uncles, aunts, cousins - were at a wedding. For literally 7 hours we had no way of communicating with each other; they didn't know if we had made it or not.
So yeah, the Safety Check tool is just fine in my book; the mere act of being able to say "I'm OK" makes a massive difference.
We'd be better off checking in as safe after our morning commute.
I think Facebook's Safety Check is a good feature, but the implementation is pretty dreadful.
I don't really buy the argument that you would just assume someone was safe before; if there was a disaster in London, you would absolutely worry and want to know if your friend was safe. Previously you couldn't easily contact them, though: if you even had their phone number, calling them internationally wasn't easy or sometimes even possible, but often you'd have their address and could only hope they would respond to you. Now it's much easier to keep in touch.
I also feel like people are making controversy over nothing when they think that asking if they're safe when they're in London during that fire is too much if they're not in the vicinity. Facebook is in a catch-22 here; Facebook either knows your (roughly) exact location and knows if you were in or near the apartment building at the time, which would make people cry about Facebook tracking you everywhere, or it doesn't and it asks if you're safe if you're in London. Even in the image from the tweet that this article references there is a "Not in the area" button you can press. There's really no way to correctly do this without having really accurate and very up to date information about the people using Facebook, which isn't always possible.
Could Facebook improve the ways it determines whether a user is in the area? Yes, of course; a simple approach would be to look up the IP address block(s) for the affected region and only prompt users whose addresses fall within them, although it's not really that simple. I also run into issues with Facebook thinking I'm in Japan when I'm not, even though I left nearly a month ago. Facebook could also improve the UI around it: the article notes at the bottom that the writer has 100 (probably) London-based friends, 97 of whom are not "marked as safe", which is terrible UI. But I absolutely disagree that this feature is worth removing based on the arguments presented in this article.
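A minimal sketch of that IP-block heuristic, assuming the affected area has already been mapped to a list of CIDR blocks; the block list and function name are invented, and only the standard-library `ipaddress` module is real:

```python
import ipaddress

# made-up example CIDRs standing in for blocks geolocated to London
LONDON_BLOCKS = ["81.2.69.0/24", "2a02:8010::/32"]

def in_affected_area(ip, affected_blocks):
    """True if the client IP falls inside any CIDR block mapped to the area."""
    addr = ipaddress.ip_address(ip)
    return any(addr in ipaddress.ip_network(block) for block in affected_blocks)
```

The hard part in practice is not this membership test but building an accurate, current mapping from blocks to places, which is exactly where the "thinks I'm in Japan" failures come from.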
Sounds like a neurotic grandmother.
In any event, Facebook these days is only useful anymore as a convenient login mechanism for sites that use the Facebook login widget, and even there, Google's version works better.
Over 400 miles away.
This article is cynical clickbait nonsense, right down to assuming that it's impossible for Facebook to implement something for any reason but engagement. I'm no fan of Facebook but the idea that, to a man, they're faceless stock price maximizers is just stupid and frankly insulting.
I know plenty of people who immediately think of (and often call) family/friends in an affected area when a disaster happens. Depending on how far away it is, it can even be at the level of a city.
Hipsters' demand that we respect the shallow outward sophistication they cosplay cannot be considered a universal standard.
For most people there is nothing stress-inducing. Merely boring stuff.
Recidivism rates in the US show it is objectively not working: inmates released from state prisons re-offend 76% of the time. In Norway, much derided for its lavish prisons, it's 20%.
Throwing people away and treating them like animals is an abject failure, compounded by the mandatory fill rates in private prison contracts.  It's time to revisit the whole system, top down, and make it less about punishment and more about making sure it doesn't happen again. And it should be done with data.
Where I was, there was no outside fenced area for the hour mandated rec time. It was a 6'Lx3'Wx6'H fenced dog cage. At least there was a large open window to the outside to look at from the other side of the room.
You're also mandated an occasional hour at the "Law Library", which was really just a single computer with LexisNexis and Microsoft Office in an otherwise empty 4'x6' room. That VB class I had taken really came in handy.
I learned to make some reasonable dice out of toilet paper. Too. Much. Yahtzee.
In the case of the man in the article, his case was overturned. Hopefully, he won't have a criminal record. Getting a job today with a criminal record is incredibly difficult. That's the biggest reason why recidivism is high [x]. It's great that there's a push to "ban the box" (that is, to not ask about criminal history in job applications), but it hasn't made it to all the states. Furthermore, many companies have blanket policies against hiring felons[x], regardless of the context of the crimes and/or rehabilitation of the individual. Background checks aren't a fair process. Good-bye to any real life.
[x] I'm sure somebody is going to argue that the bigger deal is a lack of quality mental health or addiction services provided to inmates and the dearth of such programs prior to conviction, and they'd be right too.
Rikers is a great example of a clearly flawed jail system, with inmates getting stuck for years without trial and sometimes killing themselves after losing hope.
Having recently gone through the judicial system as a white male, I can't imagine being a black man going through the same thing. I was able to buy my freedom, buy excuses, buy a lawyer to get me out of everything. When oppressed people who already start out behind fall into the same trap, there's little left for them to do.
Interesting read: https://www.nytimes.com/2017/04/05/nyregion/rikers-island-pr...
If you see an animal pacing back and forth in a cage, it is generally considered neurotic behaviour and a sign that the cage/enclosure is too small. My point being that it probably did make him slightly crazy and that solitary confinement is psychological torture.
People are products of their environments, just like animals. A mistreated animal is also badly behaved and violent.
And if you think solitary confinement is a nightmare, how about putting 2 people in a solitary confinement cell?
"Imagine living in a cell that's smaller than a parking space with a homicidal roommate."
After reading The Jaunt by Stephen King, I was on edge for a day or two afterwards.
That we are powerless is bullshit.
And it's only one of about 4-5 good reasons right now.
Let's do it.
> Employing homomorphic encryption techniques, PIR enables datasets to remain resident in their native locations while giving the ability to query the datasets with sensitive terms.
I can imagine a few scenarios there. One, perhaps, is when a DB admin should not find out what someone, possibly working on a classified project, is querying.
Or say one compartment / project collected the data and now they want to share it with another project. Those read into the second project don't want to reveal to the first one what they are querying because it would reveal classified information.
Another scenario is a database with the results of possibly illegally intercepted communications. If the NSA can argue that the Constitutionally defined "search" doesn't occur until someone actually performs a search (as in, runs an SQL query over the data), then having PIR capability means being able to break the law while letting as few people as possible actually do it.
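For intuition: the repo presumably builds single-server PIR on homomorphic encryption, but the classic two-server, information-theoretic variant is far simpler and shows the core property, namely that neither server alone learns which index was queried. A toy sketch over a database of bits:

```python
import secrets

# Toy two-server PIR: the client sends each server a random-looking subset of
# indices; the subsets differ only at the wanted index i, so XORing the two
# one-bit answers recovers db[i] while each subset on its own looks random.
def make_queries(n, i):
    s1 = {j for j in range(n) if secrets.randbits(1)}
    s2 = s1 ^ {i}  # symmetric difference: flip membership of index i
    return s1, s2

def server_answer(db, subset):
    bit = 0
    for j in subset:
        bit ^= db[j]
    return bit

def pir_read(db, i):
    s1, s2 = make_queries(len(db), i)
    return server_answer(db, s1) ^ server_answer(db, s2)
```

The single-server homomorphic version replaces the two non-colluding servers with encryption, at a much higher computational cost.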
Also, https://github.com/redhawksdr is pretty damn impressive. It looks like a complete parallel implementation of GNU Radio, complete with an IDE and such. I wonder how it compares?
Accumulo (a popular NoSQL distributed key-value store)
Apache NiFi (data processing system)
It looks like the last commit was over a year ago, though. Is there any information I'm not seeing about whether these projects are actively maintained (or still in use at the NSA)?
I'd be very interested in more public cryptanalysis of this. It's a damn simple cipher to implement, and if it were at least as secure as say Salsa20/12 it'd be very nice for all kinds of applications.
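Assuming the cipher in question is Speck (published by the NSA alongside Simon), "simple to implement" is no exaggeration. This sketch implements the smallest variant, Speck32/64, following the parameters and the printed key-word order used for test vectors in the Simon/Speck paper:

```python
MASK = 0xFFFF  # Speck32/64 operates on 16-bit words

def ror(x, r):
    return ((x >> r) | (x << (16 - r))) & MASK

def rol(x, r):
    return ((x << r) | (x >> (16 - r))) & MASK

def speck32_64_encrypt(x, y, key):
    # key = (l2, l1, l0, k0), most significant word first,
    # matching how test vectors are printed in the Simon/Speck paper
    l = [key[2], key[1], key[0]]
    k = key[3]
    for i in range(22):  # 22 rounds; rotation amounts are 7 and 2 at this size
        # round function
        x = ((ror(x, 7) + y) & MASK) ^ k
        y = rol(y, 2) ^ x
        # key schedule, expanded on the fly using the same round function
        l.append(((k + ror(l[i], 7)) & MASK) ^ i)
        k = rol(k, 2) ^ l[-1]
    return x, y
```

The whole cipher is add-rotate-xor on two words, which is exactly why it invites (and needs) heavy public cryptanalysis before anyone trusts it.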
Also interesting is the split between the repos: the NSA and IAD have separate ones, and one seems focused on defensive tech while the other publishes analysis tools.
I know there's a lot of people who aren't fans of the NSA (or what they do), but I think most of us can see a need for a military-grade organization to research defensive technologies for helping secure our infrastructure. I don't think many of us would be unhappy with the NSA if that's all they did. (Or phrased another way: most of us are unhappy because of how they conduct intel work or compromise defensive capability for offensive ones, eg, that whole business with ECC.)
So I think it's important to respond positively to things like the IAD github page, even if we're not fans in general.
It was literally a one letter change in the README file, but I still have the privilege to call myself the very first civilian to contribute to the NSA's open source project: https://github.com/NationalSecurityAgency/SIMP/pull/1
I stared at it for what felt like minutes and then said: if I looked in your search history, would I see you looking this up on Stack Overflow?
The guy said "yes", and I said I'd make it work by asking him to send me the link to the Stack Overflow answer.
He laughed and said "you got me".
Same company, different interviewer, asked me to explain the "pros and cons of Java vs Rails."
I turned the job down.
"Where do you see yourself in 5 years?"
"I don't know. Are you reading these questions from a textbook? FYI, they're not very effective if you want to find someone who will do the job."
"What's your greatest weakness?"
"Trick questions in job interviews."
This is obviously not good advice; I have just reached a point in my life where I will not be made to dance to the whims of the interviewer, especially since the job would likely go to one of the employees' friends rather than to me anyway.
One of the few merits of this approach is it tests if you can be frank with them or not. If they get offended at your lack of sucking up, then you probably wouldn't want that job anyway, because I find that if you suck up during the interview, you will find yourself struggling to maintain that ideal forever if you do get the job. It's better to be upfront about what things you actually care about.
In reality, the sad state is, in my opinion, a confluence of the following forces:
1) HR people more often than not have barely any clue about the topic. They must, unfortunately, play this charade because if they knew the topic they wouldn't be working in HR.
2) The technical people who prepare the questions typically believe they are too busy to think carefully about the problem and instead settle on any test. The assumption is that a good candidate will navigate any kind of test better than a bad candidate. In the technical person's view, this is just a screening, the real purpose being to eliminate as many phonies as possible.
3) The candidates more often than not have a highly exaggerated view of their abilities. Unfortunately, high demand means the market reaches for lower and lower quality of "resource", leading to comical situations where a large portion of the workforce (especially in countries like India) develops software by shuffling keywords around until the code compiles, which entitles them to call themselves Senior Engineers. Real senior people have no problem finding a job, to the frustration of others who find the situation "unfair" and the entire process "rigged".
One more note on the process: while the cost of a failed interview is quite low for the candidate, the cost of a hiring mistake is very substantial for the company.
Indeed, you (almost) never need to reverse linked lists in practice - but you often need to chase references of one kind or another (database, pointers, etc.) and do some manipulations on them that would result in a different list. If you have 20 data points, it doesn't matter what you do, but if you have 100M or 10B points, it makes a great deal of difference whether you do O(n), O(n log n) or O(n^2) or O(n!).
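For reference, the reversal itself is the easy part: O(n) time and O(1) extra space, with one pointer rotation per node. A minimal Python sketch:

```python
class Node:
    def __init__(self, value, next=None):
        self.value, self.next = value, next

def reverse(head):
    # iterative in-place reversal: O(n) time, O(1) extra space
    prev = None
    while head:
        # rotate three pointers at once: detach head, point it backwards,
        # and advance along the original list
        head.next, prev, head = prev, head, head.next
    return prev
```

The interview signal is less the answer than whether the candidate can reason about this kind of reference surgery at all, which transfers directly to the database/pointer chasing mentioned above.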
I think this is a good question, because it is an abstract version of the kind of problem that any non-trivial code has to address. It is easy to describe, and easy to solve if you know what you are doing. I don't use it anymore because it's too common to be useful.
And yet, through the years I've gotten the feedback that it is "far detached from real work", "tests how good I did in school rather than how good I am" and various other comments -- and almost always from candidates who did poorly; I've never gotten this feedback from anyone I would consider competent.
 I try to get feedback through whoever referred the candidate, whether it's the friend-of-acquaintance, or the recruiter. I don't, in general, trust direct feedback from someone I turned down (or hired, for that matter).
This part got a laugh out of me.
Otherwise, the analogy feels really stretched and at times feels straight up incorrect. For instance, I've never sat through a technical interview administered by non-technical staff. I've likewise never been quizzed on the history of computation.
I agree that the programming interview in the US can be overly algo-and-whiteboard happy, but I think this critique is unfair and possibly even outdated (my most recent round of interviews involved more live coding, and less whiteboarding, than when I last interviewed 4 years ago).
> Weird analogy. Companies don't ask candidates the history of binary search trees, computer architecture, or anything like that.
A better analogy would be if they gave this translator a particularly challenging piece of text to translate -- for example, one that didn't have a clear right answer and the candidate had to discuss different tradeoffs.
But... then that doesn't seem like quite so silly of an interview process.
There are absolutely valid criticisms of whiteboard interviews, but most criticisms made are either based on terrible implementations of whiteboard interviews or based on stuff that's just incorrect. (Yes, it's totally fair to criticize a company who conducted a flawed whiteboard interview. But that criticism doesn't apply to the system as a whole. That same company could mess up whatever your favorite interview style is, too.)
> By the way: I don't actually know how translators are interviewed. But one of my best friends interviewed to be a journalist with some major New York newspapers (WSJ, etc).
She was already a journalist before this, so they had lots of public writing samples for her (analogy: GitHub code samples).
Did they just hire her based on this? Nope!
She had to do a live writing test (analogy: whiteboard coding interviews). She also had to do a pitch session to talk about different potential stories she could theoretically write about (analogy: design/architecture interviews). Plus some behavioral interviews.
Why not just look at her writing samples? Unlike for coders (which might not have public portfolios representing a significant portion of their work), basically all of her work product was actually public. So why not just hire from that?
Well, because all they see is the final output. They don't know what direction she was given, how long it took her, how much editing/collaboration was involved, etc. A crappy writer in a particular environment can churn out good work -- because someone else is doing a lot of the work. Looking at the final result is actually not a great measure of someone's skills.
Coding interviews aren't that special.
Interviewing bad people at good companies goes more like this:
Q: Can you explain the difference between a noun and an adverb?
A: I've worked at the UN for 20 years! I'm an accredited translator! I translated for Putin and Obama!
It is important when doing lots of interviews to have a question that you know well and can be used to benchmark across your interviews. Something relevant to the job is an added benefit.
And also, http://www.jasonbock.net/jb/News/Item/7c334037d1a9437d9fa650...
"Could you translate '¿Dónde está el baño?'"
"I'm sorry, I don't do well on tests. I thought we were going to talk about the past projects on my résumé. In fact, I'm quite offended by your question. Good day, sir!"
Google: 90% of our engineers use the software you wrote (Homebrew), but...
Haha, what feedback?
lol... I actually won a translation competition when I was in (middle?) school: I was attending a Catholic school where we were taught Latin, and we went to some translation competitions: you had to translate a chunk of text as fast and as accurately as you could. It was fun :)
...we do Agile. Everything is in a flat structure, except when it comes to salary and responsibilities.
I really should have left at that point: >1 hour lost there...
And regarding "full-package" translators: I think web developers should be able to write both the frontend and backend parts of an application. It is not that difficult to learn. Programming is not something you learn once and then repeat the same actions for the rest of your life.
I mean, we can whine all day and remind each other that it really does suck, but that does little to address the problem.
Oh, I can only hope...
Here in NYC I have never had unreasonable interviews even close to that. And I have interviewed for a lot of senior developer and consultant positions.
In our own startup we have a completely different approach. Our motto is "People live lives. Companies build products."
We like to hire and work remotely because that eliminates geographic restrictions and lets people work asynchronously. We've found that the better the system for asynchronous communication, the better the long-term productivity and maintainability.
We use a common folder structure and code conventions for each project. Developers build fully documented reusable components that are reused across projects. Every developer is very replaceable (meaning our losses are limited if they leave or scale back their time). This is actually a great thing for developers given our compensation model (see below).
If a developer does something wrong (like checking in a syntax error), we first check whether this is something we should fix in the system (e.g., add a linter to the pre-commit hook). There are so many amazing open-source tools today. It's a compounding snowball to design a good system. Sometimes the COO job feels like an architect/developer role, just like DevOps, but for people: configuring processes and systems instead of programs or servers.
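As a concrete (hypothetical) sketch of fixing the system rather than blaming the person, a pre-commit hook that gates commits on a syntax check might look like this; the hook wiring and function names are invented, while `py_compile` and the `git diff --cached` invocation are standard:

```python
#!/usr/bin/env python3
# Hypothetical pre-commit hook: reject commits that would introduce
# Python syntax errors into the repository.
import py_compile
import subprocess

def broken_python_files(paths):
    """Return the subset of paths that fail to byte-compile."""
    bad = []
    for path in paths:
        if path.endswith(".py"):
            try:
                py_compile.compile(path, doraise=True)
            except py_compile.PyCompileError:
                bad.append(path)
    return bad

def staged_files():
    # names of files added/copied/modified in the git index
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True)
    return out.stdout.split()

def main():
    bad = broken_python_files(staged_files())
    for path in bad:
        print("syntax error in %s; commit rejected" % path)
    return 1 if bad else 0

# to install: save as .git/hooks/pre-commit, make it executable,
# and end the file with: import sys; sys.exit(main())
```

The same shape works for linters and formatters: the check runs automatically, so the mistake never becomes a conversation about the person who made it.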
We hire from anywhere and prefer to work over the internet. Even our compensation model is different than what most companies do - it aims to attract independent people and entire teams, and compensate them based on the contributions they actually do. We want to grow a snowball in a transparent way, and motivate people by giving them ownership of a product or feature instead of focusing on making them sell their time as full-time employees who commute to an office.
I'd love to get feedback on the compensation model btw: https://qbix.com/blog
Does anybody have a link to an online server (with public domain books)? I'm curious to see what the presentation is like. What's the typography like? Does the screen dim after 30s? What's the browser battery consumption like compared to an ereader app?
Long term, my big concern about ebooks is DRM. Amazon's most recent version (KFX) hasn't been cracked and workarounds involve getting Amazon to send you an older version of the file with older, crappier hyphenation and layout. I've started mostly buying DRM free books from Amazon, but they don't make it easy to find them.
Am I the only one who is turned off to calibre due to how "heavy" and clunky it feels? I suspect this is due to the program being written in Java. I think the author does great work maintaining the project but frankly wish it was more modern.
Perhaps this is a good side project for me to delve into ;)
EDIT: thanks to users who clarified that Calibre is written in python.
Moon Reader (http://www.moondownload.com) would be great if it had a desktop or web-based client... however, it's only supported on Android. If Calibre can give me this experience, its value just increased immensely. Looking forward to trying this.
So far I've only tested on Moon Reader.
I was hoping that this could replace my Google Drive folder with various papers on interesting topics I'd like to read.
I hope it gets better though, because it's a great concept
EDIT: also, the forums aren't letting me register because their captcha is broken, so I can't discuss/submit my bugs there.
EDIT2: I was able to submit a bug report on Launchpad.
I like it better than Amazon's Kindle apps, and you can use an open format.
It's handy if you've only got access to an epub, and a bit less clunky than sending via email.
Edit: This is not to say that it's OK to destroy people's health to make semiconductors. Toxic chemicals are unavoidable in semiconductor manufacturing, and we need to handle them properly even if it causes a rise in prices.
Also, the current political climate in the US (the EPA being bled to death slowly) will set up a legal climate where company practices that damage employee health suddenly become legal non-issues. It might save a few jobs from being outsourced, but it will leave behind a sick employee pool, with the state bearing the cost of health care.
> 'Dirty' Industries: Just between you and me, shouldn't the World Bank be encouraging MORE migration of the dirty industries to the LDCs [Least Developed Countries]? I can think of three reasons:
> 1) The measurements of the costs of health impairing pollution depends on the foregone earnings from increased morbidity and mortality. From this point of view a given amount of health impairing pollution should be done in the country with the lowest cost, which will be the country with the lowest wages. I think the economic logic behind dumping a load of toxic waste in the lowest wage country is impeccable and we should face up to that.
> 2) The costs of pollution are likely to be non-linear as the initial increments of pollution probably have very low cost. I've always thought that under-populated countries in Africa are vastly UNDER-polluted, their air quality is probably vastly inefficiently low compared to Los Angeles or Mexico City. Only the lamentable facts that so much pollution is generated by non-tradable industries (transport, electrical generation) and that the unit transport costs of solid waste are so high prevent world welfare enhancing trade in air pollution and waste.
> 3) The demand for a clean environment for aesthetic and health reasons is likely to have very high income elasticity. The concern over an agent that causes a one in a million change in the odds of prostrate[sic] cancer is obviously going to be much higher in a country where people survive to get prostrate[sic] cancer than in a country where under 5 mortality is 200 per thousand. Also, much of the concern over industrial atmosphere discharge is about visibility impairing particulates. These discharges may have very little direct health impact. Clearly trade in goods that embody aesthetic pollution concerns could be welfare enhancing. While production is mobile the consumption of pretty air is a non-tradable.
Considering all of the different things that computers make possible (from advances in medical research to improved logistics, to more efficient designs for everything), my best guess is that computer chips produced this way are quite a leap forward for overall human well-being.
None of this makes me revel in or ignore anyone's suffering. All I'm trying to say is don't lose sight of the big picture, too. More and better technology (i.e., applied knowledge) is the answer to this problem. The specific form this probably takes is automation of the most dangerous work. And what controls those machines? More computer chips.
Anytime there's a mismatch in laws and a lack of appropriate tariffs, it just asks for an externalization of cost and pollution.
Stop trying to get foreign governments to enact these laws; it seems like little progress is being made on that front. And don't demand that products from a foreign country can't be sold here just because the country doesn't have enough worker-protection rights.
Just don't let a product be sold here if, in its production, certain worker-protection rights weren't respected. It would mean more ethical products and, as an added benefit, less value in outsourcing, so jobs would be retained within the country.
I understand you couldn't immediately impose laws as strong as the local ones. You want the cost of lifting workers' quality of life to be lower than the cost of losing sales in your country. These laws would probably have to be phased in as gradually as they were for local workers.
I also understand that they wouldn't be easy to enforce, at first. But don't let the perfect be the enemy of the good. Proper enforcement of these kinds of policies is something that could develop over time.
In the US and Europe there are very strict rules on the sulfur content of diesel. (less than 5 parts per million in Europe now).
The very same European company that makes diesel for that market also sells in West Africa, where it is 5,000 parts per million.
Disgraceful. Environmental laws should be universal, not country by country, especially when it's a company from a first world country doing it, clearly just to boost profits.
There was Ecuador, where once Torrijos was "knocked out", the Standard Oil siblings went in and in their greed leaked millions of barrels of crude and waste chemicals into the Amazon. There is India, where Bhopal victims were paid a pittance and continue to die from the after-effects. Russia, where resources formerly under public ownership were sold off in rigged auctions to US-approved (or equivalently, Boris Yeltsin-approved) oligarchs. If the American media (and probably more importantly, its pop-culture figures) spent half as much time introspecting as they do on selling 'human rights', we'd have a better world.
But hey, a better world would mean the corporatocracy would have less money.. why bother ?
Seriously though, John Perkins' book 'Confessions of an Economic Hit man' is a wonderful read for the 'conspiracy theorist'.
It's an essential component in optimizing a company for efficiency, but it carries lots of costs and they tend not to stay hidden for long.
Same goes for solar panels. An acquaintance with a chemical engineering degree worked in the industry, and he said it's so ironic that the ingredients for manufacturing solar panels are so incredibly toxic, when the whole goal of having solar panels is to reduce the pollution caused by fossil-fuel power plants.
Every type of person, whether male or female, no matter their skin color, race, or the education and job they now hold, is sadly capable of hurting their fellow human being.
Murder comes in various forms. Matthew 5:21-22
Why do we do this? We want assistants of the future to respect user privacy, and not stream your voice or your most important questions to servers that you do not control.
With Snips, 100% of what we do runs on the device (the platform ships for Raspberry Pi; more platforms are available for enterprise customers, firstname.lastname@example.org)
We are using state-of-the-art deep-learning Automated Speech Recognition and Natural Language Understanding to allow makers to plug a voice assistant in their device in 5 minutes.
We actually benchmarked our NLU and are outperforming most of the commercially available NLU providers: https://medium.com/snips-ai/benchmarking-natural-language-un...
It'd be like an artist having a macaroni collage that they made during kindergarten in their portfolio.
What's the pricing though? I have an impression this is a paid product, but no pricing info is present.
EDIT: also, in trained_assistant.json, what does "tfidf_vectorizer_vocab" represent, and why it includes words like "nazi" and "hitler"? ;).
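For what it's worth, a TF-IDF vectorizer's vocabulary is normally just the set of distinct tokens seen in the training corpus, so stray words like those would simply mean they appeared somewhere in whatever text the model was trained on. A minimal pure-Python sketch of how such a vocabulary gets built (the utterances here are made up for illustration):

```python
import math

# Hypothetical training utterances: the fitted vocabulary is simply
# every distinct token seen during training, relevant or not.
corpus = [
    "turn on the lights",
    "play some music",
    "who was hitler",  # stray Q&A-style training data
]

# Vocabulary = all distinct tokens across the corpus
vocab = sorted({tok for doc in corpus for tok in doc.split()})

# One smoothed IDF weight per vocabulary word (scikit-learn style):
# idf(t) = ln((1 + n) / (1 + df(t))) + 1
n = len(corpus)
idf = {t: math.log((1 + n) / (1 + sum(t in d.split() for d in corpus))) + 1
       for t in vocab}

print(vocab)  # "hitler" ends up in the vocabulary just by appearing once
```

So a word showing up in `tfidf_vectorizer_vocab` says nothing about the assistant's intents, only about its training text.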
I've been struggling with alexa and google assistant, to make something useful for embedding. So much backwards and forwards with setting up infrastructure for skills.
This is smashing.
Shelved that project while I was busy working on other things. This has me excited to give it another go!
The future is so cool. 1990's childhood me approves greatly.
Edit: Im referencing the demo video. https://www.youtube.com/watch?v=wThoRtIeExo
Right now, there doesn't seem to be anything aside from the RPi build (the download page for which also requires me to login with my email address, so I didn't proceed to actually download it). That's a shame.
It's nice to have single-turn interactions for turning the lights on, but not so nice for, e.g., booking a complex flight or navigating unknown options.
Snips is an AI
There isn't a day that goes by without my using software Ian influenced, and his role in all this is hard to overstate.
We initially wanted to publicly dedicate the Moby launch to him, but decided against it, because we didn't want to give the impression that we were cynically using his name to help our launch succeed.
It makes me very happy that this release of Debian is dedicated to him. It's hard to describe the shock of losing a colleague so brutally in the middle of a mind meld, when you're so intensely focused on building something together. It still haunts me.
At any rate, happy exclusive Debian user here for the past 15 years. Thank you, Ian, gone too soon. Your name lives on in half the name of my most beloved OS.
I'll always be grateful to him for that, in addition to all the Free Software I use every day that he had a hand in shepherding.
Thank you Ian, the world is a better place for having you in it, and was diminished by your passing.
I don't use Debian (not directly), but I'm aware of the Debian philosophy and its impact on free software. If it were up to me, I would dedicate and re-dedicate every Debian release to Ian for the next few releases, at the very least.
In the old days of the USSR, while it was very difficult, it was at least conceivable that you could just fly to Moscow and see whether they were eating their children there, or burning priests, or God knows what else.
There was a natural limit to the deception that could occur and further a normal person could make conclusions about the things they saw with their own eyes.
Now, the enemy that "we have always been at war with" is a completely isolated (and economically trivial) state that virtually nobody travels to and who is attacking us with secret cyber weapons that only a domain expert with highly specialized experience could even recognize, much less qualify.
And the people that are telling us are those same people that are, or are not, secretly recording all of our conversations.
There's not one little thing there you could take at face value.
I can't comment about specifics linking WannaCry to the Lazarus group, but that seems to be the consensus in the security community.
DISCLAIMER: I worked with the people who wrote that report
"In May, No dumps, theshadowbrokers is eating popcorn and watching "Your Fired" and WannaCry. Is being very strange behavior for crimeware? Killswitch? Crimeware is caring about target country? The oracle is telling theshadowbrokers North Korea is being responsible for the global cyber attack Wanna Cry. Nukes and cyber attacks, America has to go to war, no other choices! (Sarcasm) No new ZeroDays."
As a result, no online currency exchange will touch it, said Jake Williams, founder of Rendition Infosec, a cybersecurity firm. This is like knowingly taking tainted bills from a bank robbery, he said.
Could anyone give some more details about this?
Does a trustworthy bitcoin mixer exist? Would the attackers be able to use it to launder the coins?
EDIT: Does anyone know anything about the operational error mentioned in the article?
The coins are easy to track, but that's the default for bitcoin. Mixing the coins should restore anonymity in most cases, right? And at that point it would be possible to move the coins back to an exchange, or sell them on localbitcoins.
On the other hand, have the exchanges blacklisted most of the large mixers? It seems like it should be theoretically possible to track whether coins have been mixed. Then exchanges could simply close any account that receives significant sums of tumbled coins.
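Tracking whether coins have passed through a known mixer is, in principle, just reachability over the public transaction graph: mark the ransom addresses as tainted and propagate that flag forward along every spend. A toy sketch (all addresses and edges here are invented for illustration):

```python
from collections import deque

# Toy transaction graph (hypothetical addresses): edges point from a
# spending address to the addresses it pays. An exchange could flag any
# deposit reachable from a known ransom address; a mixer defeats this
# only if its outputs genuinely can't be linked back to its inputs.
tx_graph = {
    "ransom_addr": ["mixer_in"],
    "mixer_in": ["mixer_out_1", "mixer_out_2"],
    "mixer_out_1": ["exchange_deposit"],
    "clean_addr": ["exchange_deposit"],
}

def tainted_addresses(graph, source):
    """Return every address reachable from `source` (naive taint)."""
    seen, queue = {source}, deque([source])
    while queue:
        for nxt in graph.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(sorted(tainted_addresses(tx_graph, "ransom_addr")))
```

The catch is exactly the one raised above: if the mixer's internal links are observable (or the mixer's addresses are themselves blacklisted), the taint follows the coins; if not, the trail ends at the mixer.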
They say "the NSA has linked the North Korean government...", then tell us the assessment was not made public, that it is inconclusive, and that the NSA has declined to comment. "One agency..." supposedly has a "building block" for this assessment, but it is not named. I understand that the government would like to protect their sources, but I don't think we should simply take them at their word. In my opinion, this piece is doing exactly that. What little concrete data I've managed to gather is all circumstantial; I've seen nothing that points to any sort of technical "smoking gun".
Maybe I am paranoid, but my concern is that this finger-pointing at foreign governments does nothing but generate fear. When the legislature finally introduces a bill to defeat encryption across the board, they'll have widespread support, and everyone who argues against it will be painted as some kind of imbecile. All of a sudden, the largest tech companies in the country will be accused of wanting to aid and abet North Korea and Russia.
And security doesn't materially improve. The assessment Reality Winner released isn't much better than these articles, but at least it's more straightforward and the means to the end were clearly disclosed. Yet no one is talking about putting training in place at the companies involved (to defeat phishing or social engineering attacks via phone or email) or source code audits (even private is better than nothing). It's infuriating.
A scalded cat knows better than to dip its paw in hot fudge again.
2) For a country so isolated and brainwashed, how can they train and develop the talent needed for a complex hack like that? It seems it would require quite a complete education system. Does NK have a full, proper education system?
One could argue: great, it means less wars, let's not overreact over a few bits flipped in a machine. I'd argue the contrary. If countries do not respond militarily to hacking aggressions, it will only make them escalate with increasingly serious consequences (disrupting hospitals to me is already a pretty severe consequence). There needs to be some form of accountability.
This is incorrect - crypto exchanges have had no problem cashing in bags of dyed notes before, e.g. the coins from the Bitfinex hack. They really just do not care.
The only issue I foresee is tools being built around the use of ES6 that compel you to abandon it.
I've learned that less is better. I don't use jQuery, I don't use Lodash. I don't use React, I don't use ImmutableJS, I don't use Webpack or CommonJS. All of these tools are more a burden than a blessing, and you just end up stacking up 100 dependencies, asking "Is this really better?"
I learned this lesson with Cucumber/RSpec/Capybara, etc. I started asking why I had to use these over plain old TDD, so I used TDD for a month and found out everything was totally fine.
I don't even really use Arel in ActiveRecord. I just write raw SQL that serves JSON directly back. I made it easier for me to organize SQL in partials, just like views, and to inject variables and conditions into my SQL.
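A rough idea of what "SQL in partials with injected variables" might look like, sketched here in Python with sqlite3 rather than Rails (fragment names and schema are made up): reusable SQL fragments are composed like view partials, while actual values are always passed as bound parameters rather than interpolated into the string.

```python
import json
import sqlite3

# Hypothetical "SQL partials": named fragments composed into a query.
SELECT_USERS = "SELECT name, age FROM users"
WHERE_ADULT = "WHERE age >= :min_age"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, age INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", 34), ("bob", 15)])

# Compose the fragments; inject values only through bound parameters.
query = f"{SELECT_USERS} {WHERE_ADULT}"
rows = conn.execute(query, {"min_age": 18}).fetchall()

# Serve JSON straight back, as described above.
print(json.dumps([{"name": n, "age": a} for n, a in rows]))
```

The composition of fragments stays in your code, so conditions can be added or dropped per request, while parameter binding keeps the injected values safe.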
I went to great lengths to evaluate all these tools because as a work-form home contractor I can afford the time, and the "what if" really bothers me.
I'm still working in Rails with Sprockets and CS, and I write all my SQL by hand.
Less is Better.
All these kids want to do a bunch of busywork for no good reason. It makes them feel productive.
I have tried dropping CS, but the difference in productivity was too great. It's not that I'm married to CS; it's that it does what it says: it makes your JS concise so you can be more productive.
I felt AngularJS dying, so I spent 3 months researching React and building client apps in it. I just didn't get all the extra work and settled on Mithril.
The hardest thing was giving up writing HTML-like templates like I did in AngularJS, but I remembered that was my first aversion to AngularJS, where I swallowed the medicine. I had to swallow more medicine to unlearn that. Mithril paired with CoffeeScript makes writing markup in CS a joy. If I had to do that in regular JS, I could see why people would be compelled to use ugly JSX.
In particular, there was CJSX - JSX with CoffeeScript. Writing React components using it was an absolute joy in comparison, and helped eliminate a lot of noise making it much more obvious what a component was doing.
But yeah, it's been hard to convince others that it's worth the investment, when honestly it's really just a personal preference at this stage.
Now JSX is not weird anymore, it's just horrible.
Anyway, I didn't have enough time to finish the project, but somebody took the idea and did most of the work.
Would love to explain more, but I have to board the plane now :) These links might help:
Other weird CoffeeScript quirk: "x isnt y" is not semantically equivalent to "x is not y".
The whole JS stack switches every 2 years or so?
CoffeeScript was an important crutch and stepping stone, but we're maybe 2 years past its prime, and this will make updating and improving legacy projects a lot easier.
Personally, I was never a fan of it: it used too many Ruby idioms for my taste and produced noisy code that was a pain to debug (at the time). But it did spur the development & adoption of other, better systems (ESNext transpilation, TypeScript, etc.).