I do not think that anyone's ability to write should bar them from discussion. We cannot expect perfection from others. Instead, we should try to understand them as human beings, and interpret them with generosity and kindness.
We're in the early stages of deploying a new RESTful stack, and versioning is a hot topic (along with getting people out of the RPC mindset and into a resource-based paradigm). While version bumps should be much less common, we'll probably end up doing something similar to our cascading transformations. Essentially, the old version becomes a consumer of the new version, and as long as the new version continues to hold to its API contract, everything should work with minimal fuss. Of course, that's assuming that we don't change the behavior of a service in ways that aren't explicitly defined in the API contract...
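Concretely, something like this (a Python sketch of the cascading idea; the field names are invented for illustration):

    # Hypothetical sketch: each legacy version consumes the version above
    # it, so a v1 response is just the current response run through every
    # downgrade in order. Fields here are made up.
    DOWNGRADES = {
        # v3 -> v2: older clients expect a flat "amount" field
        "v2": lambda r: {**r, "amount": r["charge"]["amount"]},
        # v2 -> v1: v1 predates the "metadata" field
        "v1": lambda r: {k: v for k, v in r.items() if k != "metadata"},
    }

    def render(response, requested_version, chain=("v2", "v1")):
        # Walk down the chain, applying each downgrade, until we reach
        # the version the client is pinned to.
        for version in chain:
            response = DOWNGRADES[version](response)
            if version == requested_version:
                break
        return response

As long as each downgrade depends only on the contract of the version above it, new releases shouldn't require touching the old transforms.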
For anyone else who's interested, they've written/talked about this a few times over the years, to fill out the picture:
It sounds like their YAML system has changed to be implemented in code instead, which maybe allows the transforms to be a bit more helpful/encapsulated. If anyone from Stripe is here, it would be awesome to know if that's true and why the switch?
In general, the concepts employed by Stripe really encourage better design choices. All changes, responses, request parameters, etc. should be documented and then handled automatically by the system. We took this approach in our design, although we don't do it with an explicit "ChangeObject" like Stripe does; it's a great idea though.
Hoping to be able to put out a blog post once we start implementing the system and getting feedback on what works and doesn't work well.
From the perspective of a consumer of Stripe's API, doesn't this make debugging or modifying legacy code a real pain? Let's say I'm using Stripe.js APIs from a few years ago; where do I go to find the docs for that version? Do I need to look at the API change log and work backwards?
Does anyone know of packages that do this already? I have been contemplating creating one in PHP/Laravel for a long time but haven't had the time yet...
This is a really smart way to do it.
One question: over the years, wouldn't the transformations add a lot of overhead to each request? Or do you have a policy where you expire versions that are more than 2 years old, etc.? (skimmed through parts of the article so my apologies if you already answered this)
Hey, @pc with all the spare time your team has accumulated by using this api model maybe you could put it to good use. Might I suggest it's time to divert most of your tech resources into creating the next Capture the Flag? Because those were just awesome!
I'm joking, in case it's not obvious (but I would absolutely love another Stripe CTF).
It was a delight to get a peek behind the curtain. :)
I wish other payment services treated their long-time clients with the same respect (looking straight at you, GoCardless).
When the whole thing is an exercise in competitive pedigreeing, of course it's going to be gamed. If it were more about human development, we'd be focusing on the deltas instead, and they'd be harder to fudge.
We have an enormous private school system (part govt-funded, which is gross) that can get rid of students as they please. Then they go on to verbally dump shit on the public system, where they get to dump their problem students. Nice.
We also have a real fetish for selective schools, which are essentially driven by selection bias to a huge extent. Any suggestion that this isn't a good idea is met by a chorus of "you must hate smart people".
For this to work the troubled kid needs to be kept out of the gangs and whatever else outside of school will lead to bad examples. However selection bias already does this: parents who care enough to send their kid to a private school are involved enough with their kid that they would probably do this anyway. In short these are kids that might have been troubled, but they still would have been at the top of the troubled group.
Not quite. Kids don't get shorter over time, but they can easily get worse grades/scores. It's true that having a screening test makes it more likely that your students will score well on other tests, but it's not a guarantee (as it is with the height example).
I'd be curious to see a study of younger people to see if perhaps there are some assimilation effects.
Consider bussing kids from 'good' places to bad and from 'bad' to good. See after how long their outcomes become equivalent. Do so at different grade levels to measure the convergence time versus age of displacement.
And now for the actual experiment: how do you impactfully present such results? I assume no outcome. Just the existence of an outcome.
I posit that priceless data would be worthless in American society.
I ended up creating a link component that automatically links to an archive.org version of a URL whenever I mark it as "dead". Link rot was so prevalent it had to be automated like that.
Another reason why I've been contributing $100/year to the Internet Archive for the past 3 years and will continue to do so. They're doing some often unsung but important work.
The BBC also donated its Networking Club to the Internet Archive: https://archive.org/details/bbcnc.org.uk-19950301
Also, sites are a very volatile medium. I often bookmark pages with interesting information to read later, and it inevitably happens once in a while that a site has gone down and I just can't find the information anymore.
On another note, the more dynamic the web becomes, the harder it will be to archive. So if you think the 1994 content is a problem, wait until you live in 2040 and want to read some pages from 2017.
> The average lifespan of a web page is 100 days. Remember GeoCities? The web doesn't anymore. It's not good enough for the primary medium of our era to be so fragile.
> IPFS provides historic versioning (like git) and makes it simple to set up resilient networks for mirroring of data.
How I laughed.
What's cool isn't how fast some of these technologies become obsolete, such as various Java applets and cgi-bin connected webcams. It's the static content that can survive until the end of time.
Like Nicolas Pioch's Web Museum. Bienvenue!
The MBone was not a "provider", it was an IP multicast network. This was the only way to efficiently stream video content to thousands of simultaneous clients before the advent of CDNs. https://en.wikipedia.org/wiki/Mbone
It really stresses the importance of directly quoting / paraphrasing the content you want in your plain text, and not relying on external resources for posterity.
I noticed that the wayback machine no longer lists historical sites if the latest/last revision of robots.txt denies access. Has anyone else experienced this?
In the late 90's I helped build one of the first Fortune 500 e-commerce web sites. The website was shut down years ago, but it was viewable on the wayback machine as recently as a year ago. The company in question put a deny-all robots.txt on the domain, and now none of the history is viewable.
It's a shame -- I used to use that website (and an easter egg with my name on it) as proof of experience.
And yes, the way I got on the internet in those days was to dial into a public Sprintlink number, then telnet to a card catalog terminal in the Stanford library, and then send the telnet "Break" command at exactly the right time to break out of the card catalog program and have unfettered internet access. Good times.
I had lots of fun reading them as an Internet-addicted kid -- but several of the links were dead even before it was officially published.
Makes me want to try to write a Markdown-only Internet browser, which treats native Markdown documents as the only kind of Web page.
Secondly, the book is more analogous to a map or dictionary, and it ought to be a descriptive source, not a prescriptive one. Some language purists may disagree, but I couldn't care less :-). And as with an old, outdated map, you'd expect that the details may have changed, but the landmarks are most likely still accurate. NASA's still nasa.gov, MIT's still mit.edu -- well, IIRC www.mit.edu used to point to their library's portal, and web.mit.edu to their main page; I see that's changed -- and CompuServe still...exists.
Answers a question I always had about "Snow Crash" by Neal Stephenson. The main character, Hiro Protagonist (I still giggle at that name), sometimes did work as a kind of data wrangler - "gathering intel and selling it to the CIC, the for-profit organization that evolved from the CIA's merger with the Library of Congress" (Wikipedia).
I always wondered what made that feasible as a sort of profit model, and I guess now I know - that was the state of the internet in 1992, when the book was published. Seems like a way cooler time period for Cyberpunk stuff, I'm almost sad I missed it :(
The web is ephemeral unless somebody archives it. Many companies offer an archive service for your sites for a fee, and archive.org does it to provide a historical record.
"For some reason"! That's the flipping multicast backbone you're talking about there! One of the great lost dreams of the internet!
Zilch. Nada...couldn't find it anymore. Gone. Something I had easily chanced upon before, I now couldn't find with directed searching. They must have restructured their site.
I could have switched it to a PHP include, but that would either break all existing links, take a bit of work to make .htm files execute PHP, or make them forward permanently to their PHP versions. Or I could simply do the only sane thing: load menu.php on every page within an IFRAME and change his 15-year-old Dreamweaver template.
The internet has been saved! A bit at least!
Maybe we should build a DHT containing UUIDs for all pages as alternative, stable URIs :D
A web with content-based addressing and versioning built into the protocol could also deal with this situation more gracefully, but again there are copyright issues.
What I saw instead was a subset of subprocesses in isolation from each other, presented in an admittedly artistic fashion. It's impressive, and maybe the purpose is more to whet one's appetite for more information rather than be informative in itself, but that's not really what I was expecting or hoping for.
And you've shown what the value is long before I asked myself the question "how much" - which I usually ask early in the process - but not here.
At least that is how I found it... great work.
Would be interesting to see conversion figures for something like this.
And this was Alex's follow-up a year on:
Very well done Alex!
It's nice to see something that was designed with maintainability in mind. Designed to be disassembled, repaired and re-assembled later. Impressive engineering.
So different from most consumer products sold today which never use screws and are not designed for repairing. If it breaks down you're expected to buy a new one...
and your ad is one of the best I've seen since MasterClass ads in my Facebook feed. I felt like the ad was basically free content. I was learning!
Really great video too
I like to take things apart, and it made me a bit nervous, as each piece was separated, that I would never be able to put it back together :)
Where is the Reddit post of this? You're going to front page, for sure.
I don't mean to underplay the work involved in programming and marketing this project, but just not giving up is perhaps the hardest part of things like this.
I did have a slight giggle when the promo at the end says you explain everything about 'modern cars', while you are working on a car introduced 27 years ago.
Of course I understand that disassembling a new car does not make financial sense, I'm not trying to be negative here.
I just subscribed to the video course, and I can see the preorder offer is a no-brainer; skimming through the provided PDF, I can see there is enough value in it to easily make it worth the $20 by itself.
So as a suggestion: highlight the PDF and its contents on the preorder page. There is only one mention of it, and it doesn't specify what's inside.
The motorcycle equivalent is this: https://www.youtube.com/watch?v=MkHJuU01-Wk&index=43&list=PL...
I watched about 3/4 of these ^^ videos, really learned a lot about how a combustion engine works.
Really, it's a pretty great engine, but with 233k miles a little grumpy.
I'm really interested in seeing where you go with the 3D modeling. As a coder/DIY mechanic (one of many I'm sure), I'm pretty psyched by how this tech could be used.
I also want to say that I appreciate your price point. I think it's at a good point where it might be less than the potential value of the product, but attracts those who would otherwise dropout of purchase or seek other means to obtain the media.
How is the course delivered? Downloadable or streaming only? Can I watch it on Linux?
Nice tip of the hat to Luxo Jr. at the end there.
#1 PayPal returned me to an invalid URL after finishing the payment
#2 I've paid & logged in, nevertheless the website still shows me links to "buy the course".
Something something electric motors are far simpler. ;)
How did you make those flying parts? Photoshopping out the holders?
It's going to be really interesting to see how our purchasing and maintenance patterns change for EVs.
I wouldn't be surprised if private donations will eventually be responsible for the eradication of Malaria (1000 deaths daily, much more suffering and cost to society).
If you're in tech you're likely to be in a great position to create value beyond your company. For example, donating equity from your startup or a fraction of your income to the charities that can prove they are having the most cost-effective impact on the world:
https://founderspledge.com/
https://www.givingwhatwecan.org/pledge/
Bill Gates and Warren Buffett pledged to give half of their net worth to charity during their lifetimes or at death. They're practicing what they preach.
"The first question concerns accountability... The Foundation is the main player in several global health partnerships and one of the single largest donors to the WHO. This gives it considerable leverage in shaping health policy priorities and intellectual norms..."
"Depending on what side of bed Gates gets out of in the morning, he remarks, it can shift the terrain of global health..."
"Its not a democracy. Its not even a constitutional monarchy. Its about what Bill and Melinda want..."
"In 2008 the WHOs head of malaria research, Aarata Kochi, accused a Gates Foundation cartel of suppressing diversity of scientific opinion, claiming the organization was accountable to no-one other than itself."
"As Tido von Schoen Angerer, Executive Director of the Access Campaign at Mdecins Sans Frontires, explains, The Foundation wants the private sector to do more on global health, and sets up partnerships with the private sector involved in governance. As these institutions are clearly also trying to influence policymaking, there are huge conflicts of interests... the companies should not play a role in setting the rules of the game."
"The Foundation itself has employed numerous former Big Pharma figures, leading to accusations of industry bias..."
"Research by Devi Sridhar at Oxford University warns that philanthropic interventions are radically skewing public health programmes towards issues of the greatest concern to wealthy donors. Issues, she writes, which are not necessarily top priority for people in the recipient country."
More in the article...
I definitely don't mean to diminish the contribution of the Gates Foundation though. I often hear that they're one of the good ones.
Edit: Never mind, found it here: https://en.wikipedia.org/wiki/Cascade_Investment
Hopefully other billionaires can take inspiration from him and recognize that helping the species is a more fulfilling game than "How many 0s in my net worth."
There have been some good words from the foundation regarding the (health, primarily I believe) programs in Tanzania. I wonder if this is towards scaling those projects.
Anyone have the scoop?
Keeping a bit of wiggle room.
Malaria, low literacy rates, etc., are the byproducts of failed political systems and corruption.
Musk's impact on electric vehicle technology will drain a great deal of despotism from the middle east as dependence on oil wanes, far more effectively than any philanthropic contribution he might have made would have.
There are a number of technologies that can drastically change the dynamic between the elites (officials) and everyone else worldwide. Our most gifted thinkers and entrepreneurs should be inventing the next printing press or cotton gin, not attending charity functions.
What accounts for this monstrous difference? He has cashed some out over the years, but not ~$80 billion worth.
Also, maybe they can't utilize all that cash at once. Therefore it would be best to be illiquid until you need the liquidity.
Maybe we can give Gates the benefit of the doubt but for everyone else, this is just a tax scheme.
"The world today has about 6.8 billion people. That's headed up to about 9 billion. Now if we do a really great job on new vaccines, health care, reproductive health services, we could lower that by, perhaps, 10 or 15%"
- loading bar until website is fully loaded
- animated buttons that bounce and flash
- full screen 2 second transitions from page to page
- all in one page, no urls!
feels like 2010 again : ]
So we will see how this turns out. It's been a couple years since they last raised funding, so it's possible they didn't really have another choice. Chances are if the numbers on the S-1 were truly great they wouldn't have done it confidentially.
Then they came for the Trade Unionists, and I did not speak out Because I was not a Trade Unionist.
Then they came for the Jews, and I did not speak out Because I was not a Jew.
Then they came for me, and there was no one left to speak for me.
From what I read, it was suggested that it would be hard to even figure out what The Daily Stormer could've done 'wrong' according to Google Domains' Terms of Service.
This is a ridiculous statement. Domain registrars are already required (by ICANN) to receive, investigate, and respond to abuse complaints.
Given the distributed nature, HDFS runs on multiple machines. In linux distributed service security fits well with kerberos. Normally if you want a "secure" HDFS you must "kerberize" the services such that any hadoop operation requires a valid/authorized TGT.
To most people, kerberizing a Hadoop cluster is a major barrier to getting Hadoop running. I don't see this changing, but certain vendor Hadoop distros break down some of the barriers.
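For reference, the switch itself is just a couple of properties in core-site.xml (a minimal sketch; the real work is provisioning Kerberos principals and keytabs for every service):

    <!-- core-site.xml: turn off "simple" auth; every hadoop operation
         then requires a valid Kerberos TGT -->
    <property>
      <name>hadoop.security.authentication</name>
      <value>kerberos</value>
    </property>
    <property>
      <name>hadoop.security.authorization</name>
      <value>true</value>
    </property>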
Sometimes it is OK if you run a cluster insecure. Please don't do it if you're handling my financial or medical records though. As Mr. T once said, 'don't write checks that yo ass can't cash'
Didn't we learn anything from register_globals?
from an outsider perspective (I've never used/run hadoop) I cannot see much reason for exposing the cluster to the outside world - either a web-app acts as an intermediary or access can be provided via VPN/ssh-tunnel/etc
... just curious why a fully/publicly exposed cluster would be a "requirement"? or does it come down to the fact that firewalling an AWS environment is as painful (if not more) than "kerberizing" a [hadoop] cluster? (I kind of assumed AWS has firewalling functionality that is fairly plug'n'play ... a quick search doesn't really back that up though)
Now it's revealed that dung beetles can perceive the galaxy. Coincidence? I think not.
Obviously dung beetles are descended from a race of astronavigators who taught the Egyptians everything. They are the ancient astronauts. [Cue theremin music]
A dung beetle goes into a bar. He doesn't order a drink. He just takes a stool.
I never thought a dung beetle could sound so cute.
Once I understood what they were, I realized that I have encountered the behavior in my life anecdotally. Also, I realized which ones I am deficient in.
EDIT: The study has a definition that I missed when reading it. See tboyd47's comment below.
That leadership piece is what's in demand; and IMO that requires strength in both areas.
> Thus, the labor market appears to increasingly value individuals possessing high non-cognitive relative to cognitive skills over time.
From my own experience non-cognitive skills are becoming rarer, so increasing returns make sense.
Isn't this a contradiction? "Increasingly sorted into occupations that were intensive in cognitive skill" and simultaneously "greater increases in the relative return to non-cognitive skill"? So are we just saying they saw the heaviest rise in skills in general?
I didn't read past the abstract, maybe the body is clearer.
> To make it a real thing I'd start by calling morestack manually from a NOSPLIT assembly function to ensure we have enough goroutine stack space (instead of rolling back rsp) with a size obtained maybe from static analysis of the Rust function (instead of, well, made up).
> It could all be analyzed, generated and built by some "rustgo" tool, instead of hardcoded in Makefiles and assembly files.
Maybe define a Go target to teach Rust about the Go calling conventions? You may also want to use "xargo", which is specially built for stripping or customising "std" and to work with targets without binary stdlib support.
Is there some issue with this approach that I'm missing? Is the additional process overhead really enough that it's worth bending over backwards to avoid it?
Go strives to find defaults that are good for its core use cases, and only accepts features that are fast enough to be enabled by default, in a constant and successful fight against knobs.
He is just writing a more direct manual version of CGo in assembly that bypasses a lot of what CGo does, to be much faster.
> Before anyone tries to compare this to cgo
The only meaningful message in this blog is that it is possible to write a faster CGo; that's it. Comparing it to CGo is the only useful possible outcome, but...
> But to be clear, rustgo is not a real thing that you should use in production. For example, I suspect I should be saving g before the jump, the stack size is completely arbitrary, and shrinking the trampoline frame like that will probably confuse the hell out of debuggers. Also, a panic in Rust might get weird.
So when you actually fix all those things you might be back where CGo was at the beginning.
This guy comes across as a classic "but i wanna be cool" hacker who discovers that when you bypass all the normal protections in a library and make some kind of direct custom call, things can be faster.
I guess so what?
Ha. If anything the clash of cultures that the web facilitates is more likely to start wars.
Something to ponder. May be the only alternative option. But on a global scale, cuz the internets know no national boundaries.
Also, notice how little this article focuses on the actual tech. How it's all about ideals and grand sweeping narratives. It is this that turns text-boxes into something with cultural value. It's simultaneously bathetic and comical.
Very visual example of how Shor's algorithm works to solve factoring. Nothing more than basic arithmetic required.
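If you want to poke at that arithmetic directly, here's a toy classical sketch for N = 15 (the quantum part only speeds up finding the period r; everything else really is basic arithmetic):

    from math import gcd

    def factor_via_period(N=15, a=7):
        # Find the period r of a^x mod N by brute force; this is the
        # step a quantum computer accelerates.
        r = 1
        while pow(a, r, N) != 1:
            r += 1
        # For this choice r is even, and gcd(a^(r/2) +/- 1, N) yields
        # nontrivial factors of N.
        return gcd(pow(a, r // 2) - 1, N), gcd(pow(a, r // 2) + 1, N)

    print(factor_via_period())  # (3, 5): 7 has period 4 mod 15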
The big takeaway for me was that it's not just "try every combination at once" as per pop lit on the subject. QC doesn't really do that. To get QC to work any better than traditional for any task, you need to get lucky and stumble across an algorithm that QC can excel at for that task. Just from reading Scott Aaronson's article, it seems likely that most tasks simply don't have a QC optimization, so perhaps QC won't change much at all. (Well, except cryptography, which may change everything...)
This is IBM Quantum experience. Click on "experiment" to start. It has a nice tutorial.
I like this one much better, because you can see the internal state of the machine at any moment. And it has many more options and is much faster.
The only thing that caught my eye as off was totally minor. They say the many-controlled-Z gate used by Grover's algorithm can be done in O(n^2) constant-sized gates with an argument-by-reference, but with that type of argument you might as well give the tight bound of Θ(n).
Well, is there much "physics" in (theoretical) quantum physics anyway? It's pretty much all math - just like in this paper!
Edit, to clarify: it just seems like an OS with the capability to host a large number of user processes, as here, would really allow an order of magnitude reduction in hosting cost. I.e., if a machine can host 1,000,000 paying accounts vs 10 VPS/containered apps.
    xeon126# uptime
    1:42PM up 9 mins, 3 users, load averages: 890407.00, 549381.40, 254199.55
They are just four bits away from hitting a really big number.
That's much appreciated, but I was kind of hoping you (the author) would go into more detail about request time. Most people can (and should) do the above. However, I would be more interested in what Elixir + CRUD is like, particularly for TTFB. Like, does the author do streaming (I don't necessarily mean websockets or comet)?
After all, if the TTFB is really slow, the CSS optimizations and whatnot matter little.
In traditional request-per-thread (or whatever is analogous) web framework paradigms, the request is a single thread and often waits for the database to finish before moving on to display the page. I would imagine Elixir has a better answer, at least for read-only pages.
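I don't write Elixir, but the kind of thing I'd hope for is overlapping independent queries instead of serializing them. In Python asyncio terms (the query functions here are hypothetical stand-ins), something like:

    import asyncio

    # Hypothetical stand-ins for two independent database queries.
    async def fetch_stories():
        await asyncio.sleep(0.05)
        return ["story"]

    async def fetch_comments():
        await asyncio.sleep(0.05)
        return ["comment"]

    async def handle_request():
        # Overlap the waits instead of serializing them, so TTFB is
        # bounded by the slowest query rather than the sum of all of them.
        return await asyncio.gather(fetch_stories(), fetch_comments())

    print(asyncio.run(handle_request()))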
While implementing the ranking algorithm, which is very similar to the one mentioned in the article, I decided to run a periodic job every 60 seconds that updates the rank for each submission and stores it in the database so querying the ranked data is more efficient than recalculating the rank on every page request. Are you doing something similar or did you take a different approach?
Ranking all stories works well if the total number of submissions is a small number, but I imagine the approach is a little different for large sites like HN. Ranking all submissions periodically seems like a waste since people rarely view submissions beyond 10 pages. One approach is just to rank submissions from the past n days, where n depends on the average daily submission volume.
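For reference, the widely cited HN-style gravity formula (assuming that's roughly the one the article means) is tiny, which is why precomputing it in a periodic job is so cheap:

    def rank(points, age_hours, gravity=1.8):
        # Score decays polynomially with age; a periodic job stores this
        # per submission so listing pages are a simple ORDER BY.
        return (points - 1) / (age_hours + 2) ** gravity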
For the part that displays time since a submission was made, I implemented the HN model, where it displays only minutes, hours and days. Python code here: https://dpaste.de/5d1w
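The logic is just a cascade of unit checks, roughly like this (my own sketch, not the linked code):

    def time_ago(delta_seconds):
        # HN-style display: only minutes, hours and days.
        minutes = int(delta_seconds // 60)
        if minutes < 60:
            return "%d minutes ago" % minutes
        hours = minutes // 60
        if hours < 24:
            return "%d hours ago" % hours
        return "%d days ago" % (hours // 24)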
> Elyxel was designed and built with performance in mind. Styles and any additional flourishes were kept to a minimum. My choice of Elixir & Phoenix was driven by this consideration as well. Most of the pages are well under 100 kilobytes and load in less than 100 milliseconds. I find it's always helpful to keep performance in the back of your mind when building something.
Once you start to scale, the bottleneck is rarely the application layer. For the typical crud web app it's likely to be the database.
For the lazy, here is the github link from this article:
"As a result of the failures described in Paragraph 18, on or about May 12, 2014, an intruderwas able to access consumers personal information in plain text in Respondents Amazon S3Datastore using an access key that one of Respondents engineers had publicly posted toGitHub, a code-sharing website used by software developers. The publicly posted keygranted full administrative privileges to all data and documents stored within RespondentsAmazon S3 Datastore."
https://www.ftc.gov/system/files/documents/cases/1523054_ube... Page 5
For a particular six-month period, Uber only monitored access to the account information of a select group. Who? Certain high-profile users, including Uber executives.
What was the upshot? In May 2014, an intruder used an access key an Uber engineer had publicly posted on a code-sharing site to access the names and driver's license numbers of 100,000 Uber drivers, as well as some bank account information and Social Security numbers. The FTC says Uber didn't discover the breach for almost four months.
The proposed settlement prohibits Uber from misrepresenting its privacy and security practices. It also requires Uber to put a comprehensive privacy program in place and to get independent third-party audits every two years for the next 20 years. You can file a public comment about the settlement until September 15, 2017.
The complaint: https://www.ftc.gov/enforcement/cases-proceedings/152-3054/u...
Links from complaint:
Agreement Containing Consent Order (19.87 KB) https://www.ftc.gov/system/files/documents/cases/1523054_ube...
Decision and Order (57.66 KB) https://www.ftc.gov/system/files/documents/cases/1523054_ube...
Complaint (35.88 KB) https://www.ftc.gov/system/files/documents/cases/1523054_ube...
Complaint Exhibits A and B (1.2 MB) https://www.ftc.gov/system/files/documents/cases/1523054_ube...
Analysis of Proposed Consent Order To Aid Public Comment (56.14 KB) https://www.ftc.gov/system/files/documents/cases/1523054_ube...
Press release: Uber Settles FTC Allegations that It Made Deceptive Privacy and Data Security Claims https://www.ftc.gov/news-events/press-releases/2017/08/uber-...
Settlement agreement quote:
Under its agreement with the Commission, Uber is:
prohibited from misrepresenting how it monitors internal access to consumers' personal information;
prohibited from misrepresenting how it protects and secures that data;
required to implement a comprehensive privacy program that addresses privacy risks related to new and existing products and services and protects the privacy and confidentiality of personal information collected by the company; and
required to obtain within 180 days, and every two years after that for the next 20 years, independent, third-party audits certifying that it has a privacy program in place that meets or exceeds the requirements of the FTC order.
The author writes "Meanwhile, at least one influential researcher (whose work I respect) had harsh words publicly for her result", and then quotes some of these words:
Note that (smartly enough) the PCG author avoids carefully to compare with xorshift128+ or xorshift1024*.
In the linked test by John D. Cook (who uses PractRand, a test suite similar to the (obsolete) DIEHARD), xorshift128+ and xoroshiro128+ fail within 3 seconds, while PCG ran 16 hours producing 2 TB of pseudo-random numbers without any suspicious p-value detected.
On the other hand, Vigna claims that the xoroshiro family does "pass" PractRand.
I submitted an answer to StackOverflow a while ago, recommending xoroshiro and PCG, so I'd be concerned if PCG turns out to be flawed. It's actually quite hard to get academics in the field to give an authoritative recommendation (I've tried) - their response is typically along the lines of "It's complicated"...
Edit: removed italics due to asterisk in PRNG name, & added link to John D. Cook's test.
Most of the analysis is about the LCG or the final output. The suggested mixer is just
    output = rotate64(uint64_t(state ^ (state >> 64)), state >> 122);
This isn't crypto-grade; both that mixer and an LCG are reversible with enough work.
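For anyone who wants to play with it, here's my Python transcription of that mixer (for a 128-bit state producing 64-bit output):

    MASK64 = (1 << 64) - 1

    def rotr64(x, r):
        # Rotate a 64-bit value right by r bits.
        r &= 63
        return ((x >> r) | (x << (64 - r))) & MASK64

    def xsl_rr_128_64(state):
        # Xor-fold the 128-bit state down to 64 bits, then rotate by
        # the state's top 6 bits (state >> 122).
        folded = (state ^ (state >> 64)) & MASK64
        return rotr64(folded, state >> 122)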
I think there are two topics here. One is whether academic research and work is becoming less relevant to practice. The other is whether the formalisms of academic-style publishing are becoming less relevant in a modern world with more and more venues for publishing, rating, and discovering work.
On the former, I believe that academic work is as relevant as ever. There are some areas (like systems) where I'm doubtful about relevance from the point of view of a practitioner, but other areas (like hardware and ML) where work remains extremely relevant. I haven't noticed a trend there over the last decade, except in some areas of systems where the industrial practice tends to happen on cluster sizes that are often not approachable for academia.
On the latter, academic publication does indeed seem to be getting less relevant. There are other (often better) ways to discover work. There are other ways to tell whether a piece of work is relevant, or credible. There are other, definitely better, ways to publish and distribute work. In some sense I think this is a pity: as an academic-turned-practitioner I like academic-style publications. Still, I think they are going to either change substantially or die.
This article raises another very good point: sometimes the formalism of academic publication makes the work harder to understand, less approachable, or less valuable. That's clear harm, and it seems like this professor was right to avoid that.
As a tenured professor I want to say two things about this piece:
1. I think academic publishing will be forced to change. I'm not sure what it's going to look like in the end, but traditional journals are starting to seem really quaint and outdated now.
2. As far as I can tell from what she's written on the PCG page, the submission to TOMS is a poor example, because no one I know expects to be done with one submission. That is, no one I know submits a paper to one journal, even one reputable journal, and is done. They submit and it gets rejected and revise it and resubmit it, maybe three or even four times. After the fourth or fifth time, you might give up, but not necessarily even then.
I have mixed feelings about the PCG paper as an example, because in some ways it's great: an example of how something very influential has superseded traditional academic publishing. In other ways, though, it's horrible, because it's misleading about the typical academic publishing experience. Yes, academic publishing is full of random nonsense, and corruption, but yes, you can also get past it (usually) with just a little persistence. In still other ways, it's a good example of what we might see increasingly, which is a researcher having a lower threshold for the typical bullshit out there.
That running into a paper wall doesn't bother her, because she's openly publishing, is even better.
It worked as far as I can tell. But I don't trust the statistical tests. Who is to say there isn't a very obvious pattern in the numbers that I didn't test for or notice? How do you prove a random number generator is good?
> And it is not even entirely clear what really random would mean. It is not clear that we live in a randomized universe
At the quantum level it really is clear that we live in a really random universe. What's the meaning of really random? The outcome of a quantum process.
On-topic. Yeah, you have to know your audience. As OP mentions, just because the paper wasn't published doesn't prevent anyone from thinking about it and even building on it. On the other hand these scientific publications have styles and target audiences, and maybe she got rejected not due to lack of relevance or rigor, but because the paper didn't match the publication's non-scientific criteria for publication.
On the other hand, maybe spending more than a line explaining what the birthday paradox is should be cut out and put in a backgrounder paper or appendix so that the paper can focus on the actual novel ideas.
[EDIT] The actual paper is here: http://www.pcg-random.org/pdf/hmc-cs-2014-0905.pdf
1. The paper itself is extremely readable by the standards of most cryptography research. On one hand, this is great because I was able to follow the whole thing in essentially one pass. On the other hand, the paper is very long for its result (58 pages!), and it could easily do without passages like this one:
Yet because the algorithms that we are concerned with are deterministic, their behavior is governed by their inputs, thus they will produce the same stream of random numbers from the same initial conditions -- we might therefore say that they are only random to an observer unaware of those initial conditions or unaware of how the algorithm has iterated its state since that point. This deterministic behavior is valuable in a number of fields, as it makes experiments reproducible. As a result, the parameters that set the initial state of the generator are usually known as the seed. If we want reproducible results we should pick an arbitrary seed and remember it to reproduce the same random sequence later, whereas if we want results that cannot be easily reproduced, we should select the seed in some inscrutable (and, ideally, nondeterministic) way, and keep it secret. Knowing the seed, we can predict the output, but for many generators even without the seed it is possible to infer the current state of the generator from its output. This property is trivially true for any generator where its output is its entire internal state -- a strategy used by a number of simple random number generators. For some other generators, such as the Mersenne Twister, we have to go to a little more trouble and invert its tempering function (which is a bijection; see Section 5), but nevertheless after only 624 outputs, we will have captured its entire internal state.
That's a lot of setup for what is frankly a very basic idea. A cryptographer being verbose in their writing might briefly remind the reader of these properties with the first sentence, but they'd still likely do that with much more brevity than this. I understand wanting to make your research accessible, but for people who understand the field this detracts from getting to the "meat." It might make it harder to get through, but a 10-30 page result is preferable to a nearly 60-page one that assumes I know nearly nothing about the field. If I don't know these details very well, how can I properly assess the author's results?
2. The author's tone in her writing is something I take issue with. For example, passages like this one...
Suppose that, excited by the idea of permutation functions, you decide to always improve the random number generators you use with a multiplicative step. You turn to L'Ecuyer's excellent paper, and without reading it closely (who has time to read papers these days!), you grab the last 32-bit constant he lists, 204209821. You are then surprised to discover that your improvement makes things worse! The problem is that you were using XorShift 32/32, a generator that already includes multiplication by 747796405 as an improving step. Unfortunately, 204209821 is the multiplicative inverse of 747796405 (mod 2^32), so you have just turned it back into the far-worse-performing XorShift generator! Oops.
...go a bit beyond levity. If you're trying to establish rigorous definitions and use cases to distinguish between generators, functions and permutations, this isn't the way to do it. This isn't appropriate because it doesn't go far enough to formalize the point. It makes it intuitive, sure, and that's a great educational tool! But it's a poor scenario to use as the basis for a problem statement - research is not motivated by the failure of an engineer to properly read and understand existing primitives, it's motivated by novel results that exhibit superior qualities over existing primitives.
3. The biggest grievance I have with this paper is the way in which it analyzes its primitives for cryptographic security. For example, this passage under 6.2.2 Security Considerations:
In addition, most of the PCG variations presented in the next section have an output function that returns only half as many bits as there are in the generator state. But the mere use of a 2^(b/2)-to-1 function does not guarantee that an adversary cannot reconstruct generator state from the output. For example, Frieze et al. showed that if we simply drop the low-order bits, it is possible for an adversary to discover what they are. Our output functions are much more complex than mere bit dropping, however, with each adding at least some element of additional challenge. In addition, one of the generators, PCG-XSL-RR (described in Section 6.3.3), is explicitly designed to make any attempt at state reconstruction especially difficult, using xor folding to minimize the amount of information about internal state that leaks out. It should be used when a fast general-purpose generator is needed but enhanced security would also be desirable. It is also the default generator for 64-bit output.
That's not a rigorous analysis of a primitive's security. It is an informal explanation of why the primitive may be secure, but it so high level that there is no proof based on a significant hardness assumption. Compare this with Dan Boneh's recent paper, "Constrained Keys for Invertible Pseudorandom Functions". Appendices A and B after the list of references occupy nearly 20 pages of theorems used to analyze and prove the security of primitives explored in the paper under various assumptions.
Novel research exploring functions with (pseudo)random properties is inherently mathematical; it's absolutely insufficient to use a bunch of statistical tests, then informally assess the security of a primitive based on the abbreviated references to one or two papers.
This looks interesting, but we're talking about money here. If you're looking for something similar to this consider a solid infrastructure provider first. We built an app on Coinbase's shoddy early API, only to have it go down in the middle of a YC interview -_-
https://www.blockcypher.com
https://www.blockcypher.com/dev/ethereum/#contract-api
I need an endpoint that gets me all address holders of a contract; it's fine if it's a little slow.
etherscan used to have one, but they deprecated it, probably due to performance issues.
C'mon guys, a US master's for 7000 USD? Are you kidding me? It's totally worth it. In fact I feel blessed that such a thing even exists. GaTech has been a trailblazer in this regard.
Did I learn a lot?
I learnt a ridiculous amount. For the time+dollar investment it is amazing. The program is definitely not easy either.
It has been amazing to learn the concepts in the ML (Dr. Isbell) and AI (Dr. Starner) courses and then a few weeks later think "I think I can actually use these concepts in my workplace".
Why the mixed feelings?
Not all courses had the same quality. Off the top of my head, AI and ML were probably the best 2 courses. Other well-run courses included computational photography, edutech, and introduction to infosec (besides the rote learning...), but I had a relatively negative experience with some of the others.
The degree does suck up a lot of time and I would say it is the real deal.
Knowing what I know now, I can't say 100% that I would "re-do" OMSCS - to be fair to GaTech, I'm not sure whether the challenges above are due to it being an online program and I personally would be more suited to an in-person one - but the experience has definitely been better than Udacity's nanodegree and any MOOC I have sat.
Overall I would say if you do it for the sake of learning and that alone - OMSCS is worth it. For any other reason please don't do it.
My wife did an online master's degree (at a legit university that also had an online program). You have to be very good at self-pacing, diligence, and learning autonomously. You have to be so good at it, in fact, that the type of person who would succeed in an online master's program is the same type of person who would succeed in self-learning without the master's program.
So if your only goal is to learn, then I say no, it's not worth it.
However, you're in Brazil and not a lifelong programmer. Credentials may work against you if seeking a job in the US. Many US companies look at South America as the "nearshore" talent, much better in quality than devfarms in India, but also still cheaper and -- because of that -- slightly lower in quality than US talent.
In that case, spending $7k and completing the program and getting the degree may help you get a $7k higher salary in your first (or next) job. It may give US companies more confidence in your abilities, as you received a US graduate school education.
So from a financial perspective and the perspective of job opportunities inside the US as a foreigner, then I think it may be worth it. If you don't care about getting US jobs then still probably not worth it.
Best of luck!
Honestly I think your time is better spent working on real projects. In my CS master's program I met many students with no real-world experience. One was a paralegal before school, and after he graduated he became...a paralegal with a CS master's. Experience > degrees, every time.
There's value in the program (algorithms and data structures being the most applicable), but just go in with your eyes open knowing that the degree is not a glass slipper that'll turn you into Cinderella overnight. Too many IMHO falsely believed my program was a jobs program and really struggled to find work in the field.
If you can do it at night while working FT, great but don't take 1-2 years off work. It sounds appealing to be done ASAP but you're unlikely to make up that 60-120K/year in lost wages. Unless you're fabulously wealthy.
A couple of things to consider: As you mentioned, it is more focused on Computer Science than Software Engineering/Development. There are a couple of Software Engineering/Architecture/Testing courses but I haven't taken them so I can't comment on how relevant I think they are to my day job.
It's an incredible bargain... 7-8K for an MS (not an online MS) from a top-10 school in CS. That on its own makes it worth it for me.
It's not easy and it's not like a typical Coursera/Udacity course. Depending on which courses you take it can be quite challenging (which is a good thing). You typically don't have much interaction with the Professors but there are a lot of TAs and other students to help you along the way.
Here's a reddit in case you haven't come across it that answers many questions:
And here's an awesome course review site that a student built:
(Source: current OMSCS student, hopefully graduating in December)
I made an "informed decision tree" a while back that goes into much more detail about my thought process when signing up for this degree:
I also reviewed the OMSCS program in detail here: https://forrestbrazeal.com/2017/05/08/omscs-a-working-profes...
Hope that helps!
Got a job at Google directly because of this program (a few classes like CCA helped a lot with interviews). I'm aware of at least a couple dozen of us from OMS here.
The program cost me dearly. It cost me my relationship with the SO and it cost me my health (staying up late nights, lots of coffee).
* $5k is cheap; it's nothing. The real way you pay for it is with your time.
* The teachers like the flexibility as much as we do. Many are top notch. I took two classes from professors that work at Google (Dr. Starner and Dr. Essa), one at Netflix (Dr. Lebanon), and a few others have their own startups.
* One of the classes was taught by Sebastian Thrun, with a TA at Google, but I think that's changed now.
* The lectures are good, but you have infinite ability to supplement them with Udacity, Coursera etc.
* You learn squat by watching videos. The true learning happens at 2am when you are trying to implement something, and end up tinkering, debugging, etc. That's when things click.
* The hidden gem is Piazza and some of the amazing classmates that help you out. Lots of classmates work in industry and can explain things a lot better, e.g. actual data scientists and CTOs of data science companies taking the data science class. They were amazing and I owe my degree to them in part.
* Working full time and taking classes is not easy. Consider quitting and doing it peacefully.
* From within Google, I've heard from people that did the Stanford SCPD (I'm considering it) and also OMSCS. Lots of people that say the SCPD program wasn't worth the time and effort. No one yet that's said the same about the GT program.
I've heard from people that have done the program in-person, and they say the online lectures and materials are significantly better.
The program does have its hiccups here and there. Some courses have been reported as being poorly organized, but this is certainly the minority. Also, you may not receive as much individual attention as you would in an on-campus program. This is offset by the fantastic community of students in the OMSCS program, who provide a support system for each other through online forums/chat. If you are not much of a self-starter and need specific guidance, this program may not be for you.
Otherwise, I think OMSCS is totally worth it. It is hard though. Really hard. I have a family, significant engineering experience, and I find the workload intense. It puts pressure on my family at the same time because I'm not available as much. So I'm taking it very slow, no more than 2-3 courses a year.
It feels great to be 'back at school' after so many years. I love learning new stuff and the challenges of hacking away at low level things. The kind of thing you rarely get to do professionally unless you're very lucky (or not getting paid much). Almost makes me wish I had done a Ph.D.
I don't know if it will help me get a better job or whatever, but it definitely fulfills my own internal itch.
I'm about halfway through and many of the classes assume that you have the equivalent of an undergrad CS degree. It's not intended to replace an undergrad degree.
That doesn't mean you can't do it, but you're going to spend a lot of time catching up. From what I've seen, the students without a CS degree, even those with significant industry experience, have had a much harder time with the more theoretical classes.
It's also a graduate program, and the classes are pretty rigorous compared to what I did in my undergrad CS degree.
Also keep in mind that admission is fairly competitive. And admission is only probationary. You have to complete 2 foundational classes with a B to be fully accepted.
One thing I'd warn though is that you'll get out of the program what you put into it - so it's really up to you to choose classes that will set up your career the way that you want it.
Cons: I've noticed some students who come to get their MS degree from a reputed institution because it is cheap. Due to coursework pressure, they take shortcuts, like doing group work or discussing solutions when it is prohibited, plagiarizing in assignments, etc.
It's hard for me to estimate how much prep I would need to do to come in to this program and feel comfortable with the tasks at hand.
Here are my thoughts on what people need to succeed as an OMCS student:
* Be able to program in C, C++, Python and Java at an intermediate level. And, know one of these very well.
* Be able to use a debugger (GDB) and valgrind.
* Be able to administer and configure Linux systems.
* Understand data structures and examples (std::set in C++ is RB-tree backed, std::unordered_set is hash-table backed).
* Understand basic networking concepts and key technologies (TCP, UDP, IP, switching, routing, etc.).
* Understand the x86 computer in general.
I've done well so far, but I have the programming/logic background to do the work. If you don't, brush up on the skills listed above before enrolling.
Edit: The class projects are a lot of work. Be prepared to give-up your weekends and evenings. Even if you know the material and the language, it's a job to get through some of the projects.
I'm through my second OMSCS semester, and if you want to know whether I think it's worth it...you'll have to read the post ;)
1 - The people I've seen doing it are learning A LOT - more than any other online program I've seen.
2 - They're also working A LOT - it intrudes on all aspects of their personal life. It's as much or more work than doing an in person CS degree.
3 - The folks I know don't have CS undergrads, which also makes it more difficult.
Net - it can be worth it if you missed CS as an undergrad, but you'll have to work. You need to ask if there are enough people in Brazil who value the credential (or implied skills) to make it worth the time. The time investment is more expensive than the $s. (It will be thousands of hours)
Would anyone who works full time and has gone through this program care to share their thoughts?
Edit: Just found this great article from another comment
I don't know how it would be looked at in Brazil or what the economic cost/benefit are in terms of your own income. I did know a few folks from the University of Sao Paulo that did grad and postdoc work while I was at GT though, so clearly some people are aware of GT in Brazil. That might be another avenue to get opinions from. I would be interested to hear how the costs compare to an institution that was local to you.
The classes are cheap. The hours are long. In the end your grade depends on teammates who haven't been vetted. Three teammates who can't code? You get a C and don't pass.
Course content is extremely dated: UML and SDLC paradigms from the 70's, with Xeroxed PDFs distributed to "learn" from.
This is a money grab.
I don't think it will have an immediate impact on my earnings or place in my company, but I think the long term value of having it far exceeds what I'm paying for it.
edit: Answered my own question - you can't have two consecutive semesters "off". I.e. the slowest possible pace would be 2 classes in the first year, then 1 class every other semester. So I suppose it would be:
spring/summer 'xx: 6 credits, 24 remaining
spring 'xx+1: 9 credits
fall 'xx+1: 12 credits
etc.
 - per https://www.reddit.com/r/OMSCS/wiki/index
Does anyone have insight if doing Georgia Tech's - Master of Science in Analytics will help me land such role?
The classes take a lot of time (see https://omscentral.com), but the learning has been a lot of fun. I loved it.
That's something you could learn on your own. But your knowledge of "technologies" is more valuable to employers than a CS degree - especially if you have work experience.
The tech industry isn't like academia ( economics ) where you have to build up credentials. Work on projects that deal with web technologies or even better learn the back end ( databases ) or even the middle tier/server code if you are a front-end developer.
Becoming a full-stack developer ( front-end, middle-tier and especially back-end ) is going to be far more important to employers than whether you know what undecidability is or computational theory.
Degrees are very important if you want to break into the industry ( especially top tier corporations ). But if you are already work in the industry, employers want to see the technologies you are competent in.
If your employer is willing to pay for it and you have free time, then go for it. Learning is always a good thing. But if you want to further your career, go learn SQL ( any flavor ) and RDBMs technologies - SQL Server, Postgres, etc ( any you want but I recommend SQL Server Developer Edition if you are beginner on Windows OS as it is very beginner friendly from installation to client tools ).
A full-stack web developer is rare, and you could even sell yourself as an architect/manager. That's the difference between being a $60K web developer and a $200K full-stack developer/architect.
First and most important: your internships and work experience, and what you accomplished during those jobs. They should tell a story of increasing and accelerating personal growth, learning, challenge and passion. If you can share personal or class projects, even better.
After your experiences, your degrees will be considered based on the number of years each typically requires, with early graduation and multiple majors being notable.
1. PhD, if you have one. A STEM PhD was particularly helpful for ML/data science positions, but not required.
2. BS/BA (3-4 year degree)
3. MS/MEng (1-2 year degree)
International students get a raw deal. The online masters will barely help you get a job or launch a career in the US. US universities appear to offer the chance to work for major US companies with a notable university (such as Georgia Tech) on your resume, only to feed their graduates into our broken immigration and work authorization system, H1-B indentured servitude and no replies from the countless companies that have an unspoken higher bar for those needing sponsorship.
To round out a few other contexts HN readers might experience:
If you are an international student considering an on-campus MS/MEng, US universities are charging full price while giving you a credential of limited value and utility. Apply the same comments above, but at a much higher price than GA Tech's OMSCS.
If you are completing/just completed a less notable undergrad degree, paying for a masters program at an elite CS school (like GA Tech) is usually a bad deal. If it's not a requirement for the positions you seek, it won't help your career chances much.
If you have an undergrad degree and your employer will pay/cover your MS/MEng at night/personal time (and that is your passion), awesome and go for it! It will be a lot of work and lost sleep to get everything out of the experience, but a lifelong investment in your growth and experience.
If you are completing/just completed a notable undergrad degree (tier-1, internationally recognized program), you don't need the masters. Feel free to get one for your learning, sense of self and building research connections while you ponder getting a PhD. The hiring and salary benefit will be very small--you are already the candidate every company wants to meet. If you decide to get a PhD, that will open some new doors but take 5+ years to get there.
At my previous company, we made it our forte and team passion to get authorization for employees--given a global pool of candidates and a hiring bar to match. I'm really proud of our effort here given the broken and unfair system. Sadly, many companies do not share this value or cannot justify the time, effort and expense, or cannot scale such a program to a larger number of employees across a less selective bar.
Employers will ignore you the second they find out your master's is not legit.
This is a really cool approach! I may have to fork it to add support for our data-mixing platform.
Also, if the original poster is reading this: I am at FOSS4G with a 360 GoPro camera rig; perhaps we can go shoot some high-fps immersive video of old Harvard buildings and brainstorm about how to get that into Blender.
Although I like using Blender from the UI, reading through the code I feel like there is probably a large learning curve here.
- There are N (usually 21) tokens in a pile.
- A turn consists of removing 1, 2, or 3 tokens from the pile.
- The player who removes the final token is the winner.
- The opponent will always take n mod 4 tokens if that is a valid move, and otherwise plays randomly (this is the optimal strategy).
- The AI plays first.
You can see my write-up here: . One of the most interesting things for me was visually inspecting the action scores (at the end) to see how the agent learned the optimal strategy over time. My configuration took 3000 games to reach the optimal strategy against a strong opponent (opponent epsilon = 0.1), and substantially longer as the opponent starts to play worse.
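For anyone who wants to tinker without reading the write-up: below is a minimal sketch of the environment and a tabular epsilon-greedy Q-learning loop. To be clear, this is my own reconstruction of the setup described above, not the original poster's code, and the names and hyperparameters are illustrative.

    import random
    from collections import defaultdict

    N_TOKENS = 21
    ACTIONS = (1, 2, 3)

    def opponent_move(n, opp_eps=0.1):
        """Optimal opponent: take n mod 4 when that is a legal move,
        otherwise (or with probability opp_eps) move randomly."""
        legal = [a for a in ACTIONS if a <= n]
        best = n % 4
        if best in legal and random.random() > opp_eps:
            return best
        return random.choice(legal)

    def train(episodes=3000, alpha=0.5, gamma=1.0, eps=0.1):
        Q = defaultdict(float)  # keyed by (tokens_left, action)
        for _ in range(episodes):
            n = N_TOKENS  # the AI plays first
            while n > 0:
                legal = [a for a in ACTIONS if a <= n]
                # epsilon-greedy action selection for the learner
                if random.random() < eps:
                    a = random.choice(legal)
                else:
                    a = max(legal, key=lambda x: Q[(n, x)])
                after = n - a
                if after == 0:  # we took the final token: win
                    Q[(n, a)] += alpha * (1.0 - Q[(n, a)])
                    break
                nxt = after - opponent_move(after)
                if nxt == 0:  # opponent took the final token: loss
                    Q[(n, a)] += alpha * (-1.0 - Q[(n, a)])
                    break
                best_next = max(Q[(nxt, x)] for x in ACTIONS if x <= nxt)
                Q[(n, a)] += alpha * (gamma * best_next - Q[(n, a)])
                n = nxt
        return Q

    Q = train()
    # Inspect the learned greedy policy: it should converge to "take n mod 4".
    for n in range(1, N_TOKENS + 1):
        legal = [a for a in ACTIONS if a <= n]
        print(n, max(legal, key=lambda a: Q[(n, a)]))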
Do you guys see yourselves sticking to a model that spits out analysis, and let customers decide what insights to gain from the data? Or could there be a path where eventually it lets users take specific actions based on the data?
I might have missed it on the website, how does pricing work?
Also, do you have any integrations with other tools like Intercom or Zendesk to ease data-sharing? A monthly insights report generated directly off of my main customer support tool can replace hours of manual work.
I for one really liked the demo and the blog - specifically, (a) I now have great exemplars for what you mean by "theme", and (b) this post shows great insight into your thinking about the problem your customers face.
> Developed on canonical text like news article or Wikipedia, they either failed to understand the variety of expressions, or were too hard to explain.
It appears to me that the current methods and resulting tools are heavily dependent on the problem formulation (or domain in general). Moreover, no matter how fancy your technique is (or "how deep is your net"), the resulting model won't work unless you take specific steps to train it on data from the domain.
Yes, what I just said sounds like a borderline truism. However, I am more interested in discussing why it is so. Here's my initial thinking:
Let us look at (one of) the definitions of machine learning, from Prof. Tom Mitchell's textbook: "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E."
Here, experience E can be loosely considered as the amount of data you have for training - obviously, more data (i.e. more training) should improve learning. However, the abstraction of T and P hides an important underlying problem of specification - in other words, the formulation of T (and E). For example, a sentiment model whose E is news text is implicitly solving a different T than one trained on support chats, even if P is the same accuracy metric.
> I wrote a new approach [capitalizing] on my PhD and new Deep Learning approaches.
I hope we get to see some of your insights in a paper or article (or blog post :)
I'm also happy to make introductions if you're ever thinking about expanding up north to Canada.
I don't know much about NLP, but are you only using unsupervised learning on the raw data? I would think you would need an NLP layer as well that sorts out basic synonym issues, phrasing differences, etc.?
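(Not the founders, but to make the question concrete: a normalization layer like the one described might look something like the sketch below, using spaCy lemmatization plus a hand-rolled synonym table. The table and function are invented for illustration; a real system would presumably learn or curate the mappings rather than hard-code them.)

    import spacy

    # Requires: pip install spacy && python -m spacy download en_core_web_sm
    nlp = spacy.load("en_core_web_sm")

    # Hypothetical domain synonym table: fold surface variants into one canonical term.
    SYNONYMS = {"bill": "invoice", "charge": "invoice", "fee": "invoice"}

    def normalize(text):
        """Lowercase, lemmatize, drop stop words/punctuation, then map synonyms."""
        doc = nlp(text.lower())
        lemmas = [t.lemma_ for t in doc if not t.is_stop and not t.is_punct]
        return [SYNONYMS.get(lemma, lemma) for lemma in lemmas]

    # "charged" and "bill" both normalize to "invoice", so downstream
    # clustering would treat them as the same theme.
    print(normalize("I was charged twice on my bill"))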
I work on the same thing, just for my own company, to automate customer interaction (well, at least 99% of it).
Is the implication here that that intersection has a lot of red-light runners? If so, are they so dense that they don't understand how normal people running red lights is less of an issue here than a machine running that red light?
Normal humans run red lights because they're either not paying attention or they're assholes. How is a machine safer or better if it can't pay attention (or, even worse, is an asshole)?
Someone could have died because Uber decided the rules didn't apply to them. It's ridiculous that they're still allowed to operate in California.
While Uber followed Google's cars closely, it was Tesla and Elon Musk that the duo discussed most frequently.
9/14/2016, Levandowski: "Tesla crash in January implies Elon is lying about millions of miles without incident. We should have LDP on Tesla just to catch all the crashes that are going on."
9/22/2016: "We've got to start calling Elon on his shit. I'm not on social media but let's start 'faketesla' and start giving physics lessons about stupid shit Elon says, like [saying his cars don't need lidar]."
Does anyone know what they're referencing here? I don't take Elon to be a person who lies; his character seems too strong for that - he understands public perception and seems to deeply care about it.
Travis shows dogfooding at its best.
A lot of people see a building full of books and wonder why it can't be replaced by a bank of terminals and Google. I won't get into the relative merits of dead trees vs. electrons, and largely don't care. What that line of thought misses is two-fold: the librarians and the community space.
Decent librarians are hugely underrated resources. Great ones can be incredible. Maybe natural language systems will become good enough in my lifetime to handle some of the vague requests librarians routinely manage to match to the right book, but the leaps of association to related topics, the knowledge of the edge cases of information classification to navigate them well, and the general mass of knowledge they accumulate is massively useful to have on hand. And so few people take advantage of it.
Meeting spaces in this context (both formal, sign-up-for-your-group spaces and informal ones) serve an important role as well. They seem to be becoming rarer: government buildings use security as an excuse to close to the public, and when I've called around over the last several years to private groups whose spaces previously hosted that sort of thing, they have been much more reluctant to do so.
To personalize this a bit: I grew up in a poor family. One thing that was heavily emphasized to me was the value of learning - I think it was a reaction to missed opportunities. Who knows what would have happened, but I do know that my college essays (written referencing library books, building on interests fostered in the math and American Lit sections) would have been very different without them, and I kind of doubt I would have gotten a free ride to a top-10 school if I had been drawing only on what public school offered.
 Anecdata alert!
Talk about diversity: the library is a place where you get to see people from all walks of life outside the Silicon Valley bubble (different races, ages, disabilities). It builds a learning community where people have the opportunity to help each other at a more human level.
I think donating to the Internet Archive would be a better donation, with a lot more benefit to society, than funding physical libraries.
Libraries solve one of the world's most important problems: keeping society's important information and history safe. Websites are not immune to this problem; they require maintenance, and when a webpage goes down it's gone forever. Without something like the Internet Archive, we would not have a modern-day library equivalent for the web, and we are losing a lot of important information. Physical libraries today are, by comparison, much less important than digital ones.
I co-founded Peer 2 Peer University, a non-profit that brings people together in learning circles to take online courses. When we switched from online-only to face-to-face meetings in public libraries, we started teaching adults who had fallen out of the education system and who were not benefiting from online courses. And I can't say enough positive things about the librarians we work with.
Raise taxes on people like Bezos and Gates for the needs of society.
Libraries should evolve with technology and move their function from curating and providing access to information toward something that benefits more people. Books occupy space, and removing them would make more room for desks and rooms that people with no access to quiet areas could use to be more productive.
I think one of the best places for a mega-philanthropist to invest would be in the time and places that kids spend outside of public schools. Many of the biggest disadvantages in opportunities for kids are created when they fall behind before and after school and during summers, relative to kids who are better off socioeconomically. These disadvantages compound and are lasting. Safe places to engage in healthy recreation, productive endeavors, and getting something nutritious to eat that they wouldn't otherwise have access to would go a long way for underprivileged youth and have an impact for the rest of their lives.
Then do the same with legal records, although that is more of a legal problem than a money problem.
I don't know how it is in the US, but for instance German libraries offer to loan ebooks: http://www.onleihe.net/
Donating ereaders and rights to ebooks to libraries seems more effective than printed books.
Big caveats here are Amazon's monopoly position, DRM, and how copyright and lending work for ebooks vs. physical books.
Bezos should spend (or not spend) in ways and on things he values, to maximize what he gets out of what he's earned.
(P.S. Libraries compete with his book-selling business! Why wouldn't he rather sell a library pass on a Kindle for a monthly subscription?)
Carnegie's legacy, the example used in the article, doesn't translate to the present.
If Bezos wanted to democratize information in a comparable way, perhaps he could underwrite universal access to high-speed Internet. Many, many parts of the country still do not have reliable, high-speed, low-latency Internet connections.
If the Kindle ever got jailbroken, well, then the kid or whoever just learned about jailbreaking/hacking. Without Wi-Fi or LTE support, likely no one would really bother.
I find this one inspiring: https://www.ted.com/talks/curtis_wall_street_carroll_how_i_l...
I would hope we're going to make large strides on these in his lifetime. If we could effectively funnel more into R&D sooner, we'd all see the benefits sooner. Cancer(s), for example, might be cured in, say, 2060 with our current effort, but if we solved the problem by 2030, hundreds of millions would benefit.
Just books, staff and facilities: the three things that libraries always need, won't become obsolete in a few years, and are equally available to all patrons in an area.
Yes, public libraries need to evolve to meet their community's needs as they change. But just as a new coat of paint or solar-powered lighting doesn't strengthen an aging bridge, focusing on the flair rather than the core of what makes a library a library would be foolhardy.
I'm not rich in the popular sense of the word (beyond the fortune of being American middle class), but I do have investments by virtue of almost never spending on consumer goods. And having no wife or kids. My coworkers, after years of seeing me drive the same beater, correctly assume I'm in better shape financially, and some have the audacity to jokingly ask me to put them in my will.
Now, I will not deny that I am an extremely fortunate person who is cognitively able, like Bezos or anyone well-connected with material wealth, but what's with the 'he should donate to this cause instead'?
It's his money. He could buy a fleet of yachts, set them on fire, and upload the video footage - why shouldn't he be allowed to do that? At what arbitrary level of wealth does 'his' money become everyone else's money?
Just keep the money in banks. That's what they do.
It seems to imply that someone who wasn't competent enough to make billions of their own is somehow more apt to know how to spend them than the one who actually did.
If things keep getting digitized at the current pace, all the knowledge of the world will be accessible online within our lifetime.
Unless you believe that a large percentage of citizens will not be able to afford a device for accessing the internet, libraries are a waste of money.
Oh, and since librarians were mentioned: if AI keeps advancing, we will be able to have a conversation with a search engine within 30 years. So who needs a librarian?
For example, I don't think many on the right disagree that funding prenatal care is a good thing--but some major prenatal care providers, such as Planned Parenthood, also provide abortion services and so some politicians want to cut all their funding to make sure none of the Federal money goes to abortions. A whole bunch of women's health services get cut in order to make sure there is no chance the money ends up helping abortions.
So I'd like to see some billionaire, or some well-funded charity like the Gates Foundation, build several clinics that provide free abortions around the country in the states with the least restrictions on abortions, and fund a program that provides free travel to and from those clinics for women in the states with restrictive laws that have forced most such clinics to close.
Then organizations like Planned Parenthood can get completely out of the abortion business, taking away the major excuse that is used to cut their funding.
State legislators can stop spending a lot of time coming up with new ways to try to shut down abortion clinics in their states (because shutting down such clinics will no longer stop the abortions), and state attorneys general can stop wasting time defending those attempts in court, and maybe they will finally realize that the best way to reduce abortions is to make it so people don't need them in the first place. Maybe then states like Texas can drop their idiotic "abstinence only" approach to sex education (which has resulted in soaring teen pregnancy rates...) and switch to something actually effective.
Edit: any downvoters care to name specific objections? That Planned Parenthood provides a lot of useful women's health services that are not related to abortion should not be controversial. That abortion is the main reason Congress wants to completely defund PP should also not be controversial. That "abstinence only" programs are a massive failure is pretty well documented. That many states keep passing abortion restrictions which then get challenged and often struck down as unconstitutional is not controversial.