In fact, I would NOT expect the post office to spend millions in technology to track letters. They shouldn't have any INTEREST in doing that - it isn't important to their main function of delivering mail.
I expected them to use tech to be better able to read zip codes and route mail more efficiently, and to count mail to better place people and resources where most needed. It should stop at that.
Simply because something is public does not mean the govt should be spending resources tracking and storing it
Why isn't this data used for good, rather than for what we can safely presume to be evil? For instance, I'm sure we could use it to track down every evil junk-mailing sub-human "direct mail marketing" moron, publicize their contact info, and see how they like getting nothing but poop in their mailboxes.
License plate capture, facial recognition, mail and email envelopes, contact lists, credit card receipts, CCTV, social network trawling... soon enough drones will capture and store 24/7 aerial video of all major cities.
I take it for granted that the government will know everyone I know, know everywhere I go, and know everything that I buy, sell, earn, and save. Pretty much the only thing left you have a chance at keeping private is the content of your conversations, and good luck with that.
Uhh, I'm pretty sure the NSA scrutiny included the bodies of said calls and emails. I'm slightly offended by an agency tracking who I'm in contact with, but I'm very offended by an agency knowing why I'm in contact with said person.
I'll go one further: the high-speed sorting machines for envelopes could easily be modified to photograph the interiors of envelopes, and I suspect this is already happening. You only need to shine a light through them to do it.
When Ms. Das says that our brains constantly use prior information, she (probably) means prior in a certain specific, technical sense. Modern cognitive scientists often think about perception and cognition in probabilistic terms, so you might characterize the brain's task in interpreting that utterance as finding the most probable sentence (S) given the acoustic input (X), or P(S|X). Bayes' rule says you can write this expression as P(X|S)P(S)/Z, where Z is a normalizing constant (don't worry about it for now).
Because these expressions are so common, we've come up with names for their parts. The first part, P(X|S), is called the likelihood, which tells us how probable the input we experienced would be given a particular interpretation. For instance, if the sentence actually read "competition center" rather than "Constitution center", the sound in the recording would be less likely (although perhaps still possible, i.e. non-zero probability, due to noise, speaker variation, etc.). The second part, P(S), represents the prior probability of the sentence. Given our knowledge of English, some sentences are simply more likely than others. For instance, the sentence "colorless green ideas sleep furiously" is grammatically well-formed but tremendously unlikely.
So, to conclude, when the presenter says we use prior information, she (probably, no pun intended) means that upon hearing what the correct interpretation should be, we increase the value of P(S), thereby allowing us to compute the proper perception.
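For the curious, here's a toy sketch of that argmax-over-sentences computation. All the numbers are invented for illustration; note how the normalizing constant Z drops out when you only care about the winner:

```python
# Toy sketch, not a real speech model: pick the most probable sentence S
# given acoustic input X via Bayes' rule, P(S|X) = P(X|S)P(S)/Z.

def most_probable_sentence(candidates):
    # candidates maps sentence -> (likelihood P(X|S), prior P(S)).
    # Z cancels when we only want the argmax, so we never compute it.
    return max(candidates, key=lambda s: candidates[s][0] * candidates[s][1])

candidates = {
    "constitution center": (0.30, 0.010),   # good acoustic match, common phrase
    "competition center":  (0.25, 0.001),   # similar sound, rarer phrase
    "constipation center": (0.20, 0.0001),  # plausible sound, absurd prior
}

print(most_probable_sentence(candidates))  # -> constitution center
```

Being told the correct interpretation amounts to cranking up P(S) for that sentence, which is enough to flip the argmax.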
Here's a nice and fairly readable overview paper (with a whole section on prior knowledge) if you think this stuff is as cool as I do -- www.indiana.edu/~kruschke/articles/JacobsK2010.pdf
*edits for clarity and formatting
The whole thing is worth a listen ..but in the relevant clip he talks about audio illusions and gives a great example.
This just reinforces the lesson: it is the brain that is hearing!
It gave rise to another thought, is there an audio equivalent of color blindness? Not deaf so much as unable to process certain sounds?
I did horribly in elementary school and high school until I realized I was what I labelled a "visual learner". I excelled in college and am just about to finish my PhD in evolutionary biology, largely because I stopped attending lectures and decided to learn everything on my own. After listening to this illusion I decided to search for auditory dyslexia, and sure enough there are disorders like this and I definitely fit the definition, especially central auditory processing disorder. Does anyone know if this test is correlated with auditory disorders, or where I can get more information on this?
If CAPTCHAs are becoming increasingly easier to break, could illusions give stronger guarantees because they use more inherent "human" features of our brain - things that bots will not easily decipher in the foreseeable future?
Fun story, this was a long time ago: I was interning at Google at that time. One day I tell my then cube neighbor about an interesting experiment on visual perception that I had read about. A professor at MIT had carried out experiments on his class. Students were asked to wear prismatic goggles that shifted their vision and then try to catch objects. Hilarity ensued, but soon enough the brain adapted to the shift. Same with inverting glasses, soon the students would not even realize that their vision was inverted. The fun part was when they took their glasses off, their motor reflexes would still compensate assuming that they were wearing those glasses. Much hilarity again. I was telling all this to my cube neighbor Michael Riley, not knowing who he was, he says with a twinkle in his eye "Yeah, that was us".
The most remarkable thing about these experiments, which I learned from him, was that the professor would provoke an illusion in the students on the first day of class. I don't remember exactly what the illusion was, but it was some visual artifact, seeing patterns that weren't visible a moment ago, much like the OP. At the end of the semester the professor would demonstrate that the entire class could still see that illusion, even though they had not been exposed to it in the intervening four months!
I tried hard to find any articles on these experiments and phenomena, but my google-fu is not working today. I distinctly remember Wikipedia articles on the subject, but am not able to retrieve them. Either my keyword memory has gone down, or Google's search quality/relevance has.
Navigating Google was such a nerd minefield, but in the best possible way. The excited student that I was, I ended up lecturing about longest common subsequence to Thomas Szymanski, not knowing his association with the history of diff on Unix. Same thing happened with SVMs: I was explaining their merits and demerits to Corinna Cortes, my other cube neighbor, not knowing she was the first author of the paper on SVMs. Not only would they not take offence, they would all keep indulging me. Then one day I step out for a break, and a senior person whom I knew had a cube on the row behind me approaches me, apologizing profusely and ad infinitum that he had got locked out: could I please let him in? No big deal, but he just would not stop apologizing and thanking me. A few days later a co-intern asks me if I know that guy. I said sure, I let him in once. He says no, do you know who he is, and asked me to check out the name tag on his cube. I saunter off: "Brian Kernighan"!
An important takeaway of this internship was to experience the humility of all these people, and the sense that you are surrounded by such iconic stalwarts in CS and you wouldn't even know it because they are so... normal.
Coming back to illusions, another visual/auditory one that does not stop working even when you know exactly what is going on is the McGurk effect https://www.youtube.com/watch?v=G-lN8vWm3m0
EDIT: Umm, so many downvotes? I did not see that coming; I would greatly appreciate knowing what you found downvote-worthy. It is always insightful to know how one's comment may rub someone the wrong way. Feel free to reply; I promise no offence will be taken and I will learn something along the way.
@tbirdz thanks for the perspective, I did not realize that it could come off as bragging. IMO you can brag only about things that you have achieved using your own efforts. For me it was a mix of foot in the mouth and an important learning experience, especially in humility.
Not a native speaker, if that's important.
The first time through, I heard "[jibberish jibberish jibberish] is at the next stop." (Perhaps I've spent too much time on public transport.) What does that say about my brain?
But yes, once I heard the whole sentence, I couldn't not hear it.
The modeling exercise herein is basically an attempt to use a game-theoretic model to test out some very simplified models of cooperation, and to see whether the observed behaviors approach anything our intuitions would call moral behavior, up to and including an 'eigenjesus' and an 'eigenmoses' pitted against tit-for-tat bots and the like.
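For anyone who hasn't played with this, a minimal iterated Prisoner's Dilemma harness is only a few lines. I'm using the standard T=5, R=3, P=1, S=0 payoffs here, which is an assumption on my part; the essay's exact setup may differ:

```python
# Strategies are functions from the opponent's move history to
# "C" (cooperate) or "D" (defect).

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def always_cooperate(opp_history):
    return "C"

def always_defect(opp_history):
    return "D"

def tit_for_tat(opp_history):
    # Cooperate first, then mirror the opponent's last move.
    return opp_history[-1] if opp_history else "C"

def play(a, b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = a(hist_b), b(hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, always_defect))     # tit-for-tat loses only round one
print(play(tit_for_tat, always_cooperate))  # mutual cooperation throughout
```

The eigen-approach then scores players by how nice they were to players who were themselves nice, which is where the PageRank-style recursion comes in.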
Scott Alexander suggests (http://slatestarcodex.com/2014/06/20/ground-morality-in-part...) that you could instead use DW-NOMINATE, the tool that does meta-cluster-analysis to mathematically detect "party lines" in Congress (which are basically just clusters in human-utility-function space anyway), to find which preference-subfunctions (e.g. helping old ladies cross the street, returning a wallet you find lying on the ground) correlate together into a cluster that might be called 'goodness' -- and then ground/normalize the PageRank analysis with that, so that you can tell whether the system as a whole is in a 'good' or 'evil' state.
My problem is not with the "eigenmorality" concept, nor with the various takes on playing it out across consecutive Prisoner's Dilemma sessions. That aspect is extremely interesting. Rather, my problem is with the Prisoner's Dilemma as a valid ground on which to test something like morality.
The Prisoner's Dilemma is a foundational, theoretical framework for evaluating human behavior. And it's a wonderful, elegant framework. But it treats humans as emotionless agents, and the "punishment" as an abstract, theoretical, rationally navigable scenario. Place real human beings into the Prisoner's Dilemma, with real-world consequences, and you get all sorts of unexpected results. The Prisoner's Dilemma is notorious for holding up perfectly fine in vitro, but less so in situ. Cultural conditioning plays a huge role in how real people act in the game. So do emotions, and irrational heuristics like overemphasizing loss aversion. (Tversky and Kahneman's work has a lot to say about the latter.)
Using the Prisoner's Dilemma as a proving ground, I think you'd arrive at an abstract model of morality -- but you wouldn't capture how morality actually plays out with quasi-rational, emotional, circumstantially driven, human agents. And, philosophically speaking, that's where morality actually counts the most.
Don't we have the ability to do this now by visualizing or analyzing citations? A set of "fake" think-tanks which promote bogus ideas should be identifiable as a mostly-disconnected component of a graph today. We don't need to get each think tank's explicit opinions about the others. Aaronson points out this single-purpose inquiry would encourage gaming, but analyzing a graph built for other incentives may give more "honest" results (at least for a while).
And we have, at least five years ago: http://arstechnica.com/science/2009/01/using-pagerank-to-ass... . You can follow links from there to a project called EigenFactor, academic research about shortcomings of PageRank in this application, and more.
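All of these eigen-analyses (PageRank, EigenFactor, eigenmorality) boil down to the same power iteration. A toy version on a tiny trust/citation graph, purely illustrative (real implementations use sparse matrices and proper convergence checks):

```python
def pagerank(links, damping=0.85, iters=100):
    # links: {node: [nodes it cites / trusts]}
    nodes = list(links)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        # Everyone gets a baseline share, then rank flows along edges.
        new = {v: (1 - damping) / n for v in nodes}
        for v, outs in links.items():
            targets = outs if outs else nodes  # dangling nodes spread evenly
            share = rank[v] / len(targets)
            for t in targets:
                new[t] += damping * share
        rank = new
    return rank

# "a" is cited by both "b" and "c", so it accumulates the most rank;
# "d" and "e" form a separate component, visible as such in the graph.
graph = {"a": ["b", "c"], "b": ["a"], "c": ["a"], "d": ["e"], "e": ["d"]}
ranks = pagerank(graph)
```

Note that a disconnected cluster still accumulates rank from the teleport term, which is exactly why detecting it as a separate component (rather than just looking at scores) matters.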
Results of such analyses should be used as input to human thought processes and not some sort of legislative robot.
- either morality is an absolute concept (things are inherently good or evil, theists might say this good/evil is defined by a god or gods). This is http://en.wikipedia.org/wiki/Moral_absolutism
- or morality is relative, defined by people, defined by cultures (what one culture might consider immoral, another culture will consider it moral, and nobody is inherently right or wrong). This is http://en.wikipedia.org/wiki/Moral_relativism
If moral relativism is right, it would be absolutely expected that the 98% are "almost perfectly good", since they do things that the majority consider good. What a fantastic essay...
That said, I find this approach to defining morality fascinating. Maybe if the definitions are refined it will manage to tell us something we already know (not entirely sarcastic; that would be legitimately impressive for a mathematical construct regarding morality).
I think Aaronson realizes this, because he does talk about how Eigenjesus and Eigenmoses don't accord with our moral intuitions in some cases. He also addresses this somewhat in the section "Scooped by Plato." His major point--that something like Eigenjesus can be useful, even if it cannot deduce terminal values--still holds.
Scott mentions the "forget the past" and "address root causes" sides, but how do you deal with things in the middle?
Even being able to provide a model that allows for injustices from centuries ago would be impressive, but how should such things decay? Again, the same pressures come into play, based on the interests of the judged parties.
It's strange to exclude intent from your model when it's an important factor in almost all systems of morality.
(Aside: If I have two completely different thoughts about an article, should I post them in two separate comments or in the same comment?)
please help us!
right now all we have is a way to state which facebook users a person trusts. there's a chrome extension to help with this. it's extremely basic.
i have a server running at https://dewdrop.neyer.me - we need a lot more help!
i'm just putting it on github now - so i'll update the readme in a few minutes.
The author uses the example of climate-change deniers to express the opinion that such a minority group has "withdrawn itself from the main conversation and retreated into a different discourse."
Is this true of other minority groups - feminists? Homosexuals? Minority ethnic groups? It seems highly awkward to claim the same thing.
A better system would be one which considers how to cater for individuals rather than declaring a populist majority to be a special, protected ingroup. There's enough of the latter already.
It's also immoral to call for all of us to sacrifice industrial output for future generations to solve the supposed climate change problem. There is no reason to presume that future generations are more important than the present generation (in fact, it is demonstrably the case that they are not). Thus, this position is profoundly immoral.
However, the implicit assumption that sacrifice is moral is common to most world religions and also altruism, which is probably where he imported it from. All of them are morally bankrupt. A scientist should be able to be skeptical and see such logical flaws, even if he is not able to propose the correct solution.
I again wrote a longer response but have shortened it, because the author seems to have committed a rather grave error: assuming that human moral 'intuition' is in any way consistent. There is heaps of evidence (cue the trolley problem) that human moral judgements really should not be considered a guide for anything. The disasters of collective morality observed under various regimes during the 20th century ought to tell us that following those models as a universal foundation for human relations is a terrible idea.
Might also be worth paying a visit to eigennicolo and not adhere to such rigid systems.
2 x 3 = MULTIPLY 2 3 : (λabc.a(bc)) (λsz.s(s(z))) (λxy.x(x(x(y)))) = λc.(λsz.s(s(z)))((λxy.x(x(x(y))))c) = λcz.((λxy.x(x(x(y))))c)(((λxy.x(x(x(y))))c)(z)) = λcz.(λy.c(c(c(y))))(c(c(c(z)))) = λcz.c(c(c(c(c(c(z)))))) = 6
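The same multiplication transcribes almost literally into Python, since Church numerals are just higher-order functions:

```python
# A Church numeral n is a function that applies s to z n times;
# MULTIPLY = λa.λb.λc.a(bc) composes the two numerals.

MULTIPLY = lambda a: lambda b: lambda c: a(b(c))
TWO = lambda s: lambda z: s(s(z))
THREE = lambda s: lambda z: s(s(s(z)))

def to_int(n):
    # Decode a Church numeral by applying "add one" to 0.
    return n(lambda x: x + 1)(0)

print(to_int(MULTIPLY(TWO)(THREE)))  # -> 6
```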
This makes Brainf*ck look elegant!
I find "To Dissect A Mockingbird" a more intuitive and simpler explanation of Lambda Calculus. I think the visuals help a lot.
I think there's a MUCH bigger privacy issue here than what the author focuses on.
Couldn't you deduce many passenger identities based on addresses? There are a lot of scenarios where passenger identities could be effectively de-anonymized just from the GPS data. You could then use the data set to analyze their comings and goings.
1. For people who live alone in a single family home, you can pretty much completely track when and where they went by taxi. From this you can deduce a lot about their interests, lifestyle, workplace and schedule, private life, etc. It's profoundly invasive.
2. Even if there are a few people sharing an address, the other dropoff/pickup point can be used to narrow down who it likely is, especially when combined with other easily obtainable data.
For example if you knew an employee (e.g. that cute barista) lived in a certain neighborhood you could track their trips to/from work and deduce their home address.
Or if you knew there was only one senior citizen (or Muslim, etc.) living in a building, a regular trip to a senior center (or mosque) would reveal when their apartment is vacant.
Or if there's only one young man in a building, a single trip home from a gay bar could out them.
Holy shit.. can you imagine someone just plotting all the trips from a single gay bar? Listing off all the connected residential addresses? And not only that, any subsequent trips home from those addresses the next morning? Taking the walk of shame to a whole new level!
Likewise trips could be used to deduce affairs and other deceptions by fellow residents. "You said you were working late, but the only taxi trip to our building that night was from a bar."
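To make the linkage attack concrete, here's a trivial sketch with invented data: a single known fact about a person (their home address) recovers their entire movement profile from the "anonymous" records:

```python
# Anonymized taxi records: (pickup, dropoff, timestamp), no names.
trips = [
    ("Main St bar", "12 Elm St", "2014-06-14 01:30"),
    ("12 Elm St", "Office Plaza", "2014-06-16 08:45"),
    ("12 Elm St", "Clinic on 5th", "2014-06-17 14:00"),
    ("Airport", "99 Oak Ave", "2014-06-18 22:10"),
]

def trips_touching(address, trips):
    # Every trip that starts or ends at a known address.
    return [t for t in trips if address in (t[0], t[1])]

profile = trips_touching("12 Elm St", trips)
print(len(profile))  # everything this household did by taxi
```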
This is just off the top of my head.. I feel I could go on for hours listing all the possible ways this data set could be exploited.
How is this not front page New York Times???
The obvious countermeasures for this are, sadly, inadequate. Netflix could have randomized its dataset by removing a subset of the data, changing the timestamps or adding deliberate errors into the unique ID numbers it used to replace the names. It turns out, though, that this only makes the problem slightly harder. Narayanan's and Shmatikov's de-anonymization algorithm is surprisingly robust, and works with partial data, data that has been perturbed, even data with errors in it.
Nit: this is a lookup table, not a rainbow table. Rainbow tables involve a clever optimization that compresses multiple passwords (in a chain) into a single entry in the table, saving a great amount of disk space.
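For anyone unfamiliar with the distinction, here's a toy sketch of the chain compression. (Caveat: with a single fixed reduce function this is technically a Hellman chain; true rainbow tables vary the reduce function per column to limit chain merges.)

```python
import hashlib

CHARSET = "abcdefghijklmnopqrstuvwxyz"

def h(pw):
    return hashlib.md5(pw.encode()).hexdigest()

def reduce_to_pw(digest, length=4):
    # Any deterministic map from hash space back to password space works.
    n = int(digest, 16)
    pw = ""
    for _ in range(length):
        pw += CHARSET[n % len(CHARSET)]
        n //= len(CHARSET)
    return pw

def build_chain(start_pw, chain_len=1000):
    pw = start_pw
    for _ in range(chain_len):
        pw = reduce_to_pw(h(pw))
    # Only (start, end) is stored: chain_len passwords per table entry.
    return (start_pw, pw)

start, end = build_chain("test")
```

A lookup table stores every (password, hash) pair; the chain scheme stores one entry per thousand passwords and pays for it with computation at lookup time.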
I have moved to Torch7. My NYU lab uses Torch7, Facebook AI Research uses Torch7, DeepMind and other folks at Google use Torch7.
Now, leaving the social side of it aside, I would really like to know why Torch7/Lua and not, say, Numpy/Scipy/Theano, or Julia for that matter. To me scratching an itch is a fine enough excuse; I just wanted to know if there are any compelling technical advantages to doing it this way.
Another story I remember is all the flak we got when we opened the osx.freshmeat.net section - we got so much criticism about how we'd sold out etc. etc. but it actually turned out to be quite a good repository for OS X apps for a while until iTunes kinda took over.
Good times :D
Before, it was possible to find, for example, a TUI email client written in perl with a BSD license, thanks to the ability to drill down into the trove. After the redesign, it was goddamn near impossible to find anything -- especially things with specific licenses.
I, and just about everyone I know who used it, stopped using it not long after they started focusing on toy web programming more than information curation. I'm sad they mismanaged it to death, but I'm not going to miss it in its terminal state.
Anyway, as I recall I was able to purchase 140 shares at $30. The day of the IPO it hit $300+ and I was too stupid to sell (gotta get those long term capital gains rates..doh). I finally sold those shares years later at something like $1.
Oh well. You win some you lose some.
As a former employee of [VA [Research[ Systems]|Linux|Software] SourceForge] I also heard this. Though I believe Slashdot was also a profit center for a while.
But in the "tar xzf ...; ./configure && make && sudo make install" days, it was really nice.
I wish something like that existed for JS libraries.
I wonder how that's working out for them...
An institution for a very long time, definitely something from a different era. Farewell old friend.
This would be a nice addition to http://ckan.org/.
I feel like the late 90s was such a Wild West time for Linux. Linux is in a great spot now, best it has ever been, but for whatever reason the community just feels incredibly different for me now. It's probably just me aging.
I'll echo several other replies - it was great - nay, essential - before package managers became good.
Maybe they should have bought yum.com or apt-get.com and changed their name to that (instead of freecode), and then more of us would still remember why we went to the site.
Thank you Freshmeat.net (aka freecode.com)
"Bare Metal" - does this mean the RPi can run blob-free?
A possible improvement I suggest is to gfx_draw_line in gfx.s - using a fixed-point algorithm could be simpler and faster: http://hbfs.wordpress.com/2009/07/28/faster-than-bresenhams-...
There is a bare metal chess game for the Pi that was presumably another team's entry for the same assignment (they are both from Imperial).
Good show. I've often felt low-level programming was a dying art; perhaps I'm wrong, and stuff like this will push people to learn what a register is and what "flags" are. :-)
About the first section, where it talks about the lack of anarchism in academia: a friend of mine is actually working on her master's thesis (in history) about anarchism here in Mexico. She's always telling me how she struggles with her teachers, and how some of them just want to dismiss her with no arguments, simply telling her that anarchists are violent.
This is partly a result of the move to multi-lateral trade talks (a bad idea, but when WTO rounds take a decade it's not unexpected), and partly inertia - there is little common agreement on how to prevent the next crash, so without a better idea we seem to follow the inverse of Einstein's quip: "the definition of insanity is repeating the last mistakes but hoping for a different outcome."
Minor thought: this leak is still a pretty big deal. But it feels like WikiLeaks is the wrong place for this - like the main media organisations should have already got their investigatory acts together and made WikiLeaks irrelevant.
Just wondering ...
> The draft Financial Services Annex sets rules which would assist the expansion of financial multi-nationals mainly headquartered in New York, London, Paris and Frankfurt
These multi-nationals are purely a product of regulation (in this case, more specifically, regulatory capture). Without such regulation, there would be thousands of healthy medium-size banks in the US, as there apparently used to be.
Overall, to characterize this as "deregulation" is completely sloppy thinking. We are never going to get a better situation when people think sloppily like this. To do so is, in practice, a moral crime. It supports maintaining the status quo through confusion.
Not by me. Overly bulky code is one of my pet peeves. Here is my implementation of HQ2x in 6KB, 188 lines of code (not including the header file, which is 1KB and used for external linking.)
I made a few observations to reduce the code. The first is that the 256 patterns have four cases for each corner: if you rotate the eight surrounding pixels twice, the case blending rules match another corner (sans two cases, which appear to be typos in the original algorithm; I asked the author, and he apparently made the entire unrolled table by hand). With the algorithm cut down by 75%, you can put the switch statement into a lookup table.
To speed up the algorithm (which further reduces its size), I use an old 16<->32-bit expand/compact trick, a YUV lookup table, and a really clever mask compare that blargg came up with. At this point, my version is already significantly faster than the original algorithm. But I also added OpenMP parallelization support, which really makes things run fast on multicore systems.
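For the curious, here is roughly what the 16<->32-bit expand/compact trick looks like, sketched in Python for readability (the real thing would be C, and I'm reconstructing the general technique, not the parent's actual code):

```python
# An RGB565 pixel is 0bRRRRRGGGGGGBBBBB. Expanding it into 32 bits moves
# G into the high half, leaving headroom gaps so all three channels
# survive one shared multiply without bleeding into each other.

MASK = 0x07E0F81F  # expanded-channel positions: G high, R and B low

def expand(c):
    return (c | (c << 16)) & MASK

def compact(t):
    return (t | (t >> 16)) & 0xFFFF

def blend(c1, c2, w):
    # Weighted average of two RGB565 pixels in one multiply per pixel;
    # w is out of 32.
    t = ((expand(c1) * w + expand(c2) * (32 - w)) >> 5) & MASK
    return compact(t)

white, black = 0xFFFF, 0x0000
print(hex(blend(white, black, 16)))  # 50/50 mix: mid gray, 0x7bef
```

The payoff is three channel blends for the price of one multiply and one mask, which adds up fast in a per-pixel inner loop.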
But anyway ... this guy clearly bested me. 560 lines with HQ3x and HQ4x included is even more impressive. Hats off to him!
( I also have inflate in 8kb of code, unzip in (inflate+2KB) of code, PNG decompression in (inflate+8kb) of code, SHA256 hashing in 4KB of code, an SMTP client with authentication+MIME attachments+HTML in 8KB of code, an XML parser (sans DTDs) in 6KB of code, etc.)
At ContentMine we're doing something totally complementary to this. Some of the tools will overlap and we should be sharing what we're doing. For example, I've been working on a standardised declarative JSON-XPath scraper definition format and a subset of it for academic journal scraping. I've been building a library of ScraperJSON definitions for academic publisher sites, and I've converged on some formats that work for a majority of publishers with no modification (because they silently follow undocumented standards like the HighWire metadata). We've got a growing community of volunteers who will keep the definitions up to date for hundreds or thousands of journals. If you also use our scraper definitions for your metadata you'll get all the publishers for free.
Our goal initially is to scrape the entire literature (we have TOCs for 23,000 journals) as it is published every day. We then use natural language and image processing tools to extract uncopyrightable facts from the full texts, and republish those facts in open streams. For example we can capture all phylogenetic trees, reverse engineer the newick format from images, and submit them to the Tree Of Life. Or we can find all new mentions of endangered species and submit updates to the IUCN Red List. There's a ton of other interesting stuff downstream (e.g. automatic fraud detection, data streams for any conceivable subject of interest in the scientific literature).
I have a question. Why are you saying you'll never do full texts? You could index all CC-BY and better full texts completely legally, and this would greatly expand the literature search power.
> Scraping Google is a bad idea, which is quite funny as Google itself is the mother of all scrapers, but I digress.
It's not really "funny"/ironic/etc -- Google put capital into scraping websites to build an index, and you're free to do the same, but you shouldn't expect Google to allow you to scrape their index for free.
EDIT: just saw this:
> Right now, PLOS, eLife, PeerJ and ScienceDirect are supported, so any paper you read from these publishers, while using the extension, will get indexed and added to the network automatically.
Yeah, they're not going to like that. You might want to consult a lawyer.
I'd be happy to answer questions or get feedback on the project!
It has a nice feature where it records the retailer's location using the iPhone's location services, so when you enter new transactions for a retailer you've been to before it can automatically guess the payee and category. After a week or two, the app had learnt the payees I used the most, so entering transactions became super fast.
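That kind of payee guessing is simple to sketch. Something like the following, with invented coordinates and plain Euclidean distance (fine at city scale; a real app would use the platform's location APIs and haversine):

```python
import math

# Past transactions keyed by the (lat, lon) where they were entered.
past = {
    (59.3326, 18.0649): ("Coffee Corner", "Eating Out"),
    (59.3293, 18.0686): ("Grocery Mart", "Groceries"),
}

def guess_payee(here, past):
    # Suggest the payee whose saved location is nearest to where we are.
    nearest = min(past, key=lambda loc: math.hypot(here[0] - loc[0],
                                                   here[1] - loc[1]))
    return past[nearest]  # (payee, category) to pre-fill the form

print(guess_payee((59.3325, 18.0650), past))
```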
The downside with YNAB is that you must sync it with a desktop version of their app. The iOS/Android apps are free, but only support a small subset of the desktop app's functionality.
With YNAB there's also a whole financial management system that they want you to buy into. It works well for me, but might not meet everyone's mental model of how personal finance should work.
Despite that, I did not have one finished application of my own: lots of prototypes, but nothing in the App Store. So this is my first one. It is highly targeted to me, which you will notice if you read some of the text in the link, but I hope it is also something others will like.
Link to app in the App Store: https://itunes.apple.com/us/app/spendy/id872831308?mt=8
Anyway, I thought I'd just post it here because why not.
Excellent work, OP. It looks like you're a unicorn.. or could be one with a bit more design polish :)
One feature I may have missed is reconciliation. That is, some way to review my manually entered transactions with statements provided by my bank.
I have tried heaps of apps for this; 'cost' and 'spendee' are the ones I found to be the best so far. Will give yours a try soon.
I think you need to fix the flow for first-time users. If every first-time user must add a currency, why not make the first-time-user screen the add-currency screen? Also, the add-currency screen should have suggestions. It wasn't initially clear that I needed to type 'USD' instead of 'dollar'. It also wasn't clear that the currency code needed to be in capital letters: I had 'usd' in lowercase and clicking the check mark did nothing. There should at least be feedback.
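The lowercase-input problem is just missing input normalization plus explicit feedback. A sketch (I'm obviously guessing at the app's internals, and the code list is truncated):

```python
# A real app would ship the full ISO 4217 currency-code list.
KNOWN_CODES = {"USD", "EUR", "GBP", "SEK", "JPY"}

def normalize_currency(text):
    # Accept "usd", " Usd ", etc.; reject anything unrecognized loudly
    # instead of silently doing nothing.
    code = text.strip().upper()
    if code in KNOWN_CODES:
        return code
    raise ValueError("Unknown currency code %r (try e.g. 'USD')" % text)

print(normalize_currency(" usd "))  # -> USD
```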
The app is great once you get past the first screen. Nice job.
It doesn't look like this can be changed within the app. It only provides the option of changing the content that's displayed. This was an initial turn off for me, but other than that I'm really happy with this!
Last year, after requesting an ITIN for my wife (who is not a US citizen and thus requires an SSN-esque replacement to be able to file joint taxes), we received two letters from the IRS. One had the ITIN. The other denied our request for the ITIN on the basis that "We have already issued her an ITIN." Apparently, the IRS explains, that second letter should have immediately set my accountant and me to doing forensic debugging of their protocols, because it means that something "seriously wrong" happened to our returns.
What? Glad you asked. See, the IRS had lost my return. "What?" The paper was "in the building somewhere" -- we had gotten a receipt -- but they were unaware of what desk it was at. (Their first hypothesis was "You failed to file", and they threatened penalties for that, until being confronted with a Post Office return receipt. Which is, by the by, why you should always get a receipt.)
My accountant took over yelling at them for a while to find the return, and they eventually did, and -- miracle of miracles -- they typed it into the computer. Twice. Thus generating two separate and equal returns going through non-idempotent processes, such as ITIN generation.
When those two returns met up in the reconciliation stage, they blocked each other from processing. No one at the IRS noticed this, for approximately 7 months, until my third call to their CS line got someone to actually look at the file. She hit "delete" on the duplicate. (I really hope she was simplifying that for me, because it scares me if they can actually delete anything.) Return processed almost immediately, refund check cut 48 hours later.
I almost feel sorry for them on being unprepared to unearth potential political malfeasance, because that is after all a distraction from the day to day administration of the Revenue Code, but processing returns is, as the saying goes, "their only job."
The principal job was to build a fraud investigation system by integrating a COTS analysis tool into the IRS systems to automatically generate cases for fraud investigation officers to review. One hitch, as I found out after getting through the gauntlet of interviews: the COTS product that they wanted to use wasn't built to support this kind of integration, and the vendor wasn't interested in forking off a special build just for the IRS.
So I asked them a simple question: "Knowing what I know about this product, and the fact that it can't be integrated as desired, this seems to be an impossible task. As the senior architect, I'd want to be clear that I have the power to restart the selection process for the tools and systems, so I can build a solution that would actually perform as required. Would I have this authority?"
At this point, the senior manager from the prime and the pm from the government side got very agitated. You see, there was only 2 months to get a basic system functional and as a result the selection and purchase process had already been completed.
"Without anybody leading the process?"
This was apparently the wrong question, as I had hit some sort of embarrassing point I shouldn't have dug into. They pretty much just wanted somebody in the role to rubber-stamp the crap decisions they had already made. They became very defensive, and voices were raised. I told them I wasn't interested in that kind of position and walked out.
2 years later I found out that they had scrapped the program completely after spending goodness knows how much money and were restarting the entire thing from scratch.
My gut feeling is that this new program too will fail since any working system would likely detect the fraud in their IT acquisition and management.
I haven't worked at that job since 2007. I find it unbelievable that any major government organization such as the IRS does not make sure that no emails are ever lost.
edit: the fact that they lost emails is a huge scandal in its own right IMO
"The Treasury Department's current email policy requires emails and attachments that meet the definition of a federal record be added to the organization's files by printing them (including the essential transmission data) and filing them with related paper records."
I think AND is more appropriate than OR here.
What? I'm almost certain the hard drive I had in 2000 was less than 10GB. At 500MB per email user, that's 20 users per entire disk...
1993 called and wants its micro-services. That is great. At least it seems like he is describing Go's channels, Rust's tasks, and most of all Erlang's processes.
It is interesting; perhaps it is a reflection of how abstract some of these concepts are that anything can be read into them, but it seems that the initial design and motivation behind OO have been perverted by C++ and Java.
I started with C++ and Java in college. To me, OO was inheritance, composition, polymorphism, and so on. Years later, when distributed and scalable computing comes up, other languages and platforms seem more OO than the classic OO ones.
And finally one more excerpt:
1. Everything is an object
2. Objects communicate by sending and receiving messages (in terms of objects). Objects have their own memory (in terms of objects)
3. Every object is an instance of a class (which must be an object). The class holds the shared behavior for its instances (in the form of objects in a program list)
Pretty funny. Replace 'object' with 'process' and you have Erlang, the last language you'd call OO: 1) everything is a process, 2) processes communicate by sending messages to each other, and each process has an isolated heap (its own memory), 3) functions in modules hold the shared behavior of the many possible process instances spawned from them.
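A rough sketch of that process-as-object idea in Python (invented names; a thread with a queue standing in for an Erlang process with a mailbox): the state is private, and the only way to interact with it is by sending messages.

```python
import threading
import queue

class Counter(threading.Thread):
    """An 'object' in the message-passing sense: private state,
    reachable only through messages dropped in its mailbox."""

    def __init__(self):
        super().__init__(daemon=True)
        self.mailbox = queue.Queue()  # the only way to reach the object
        self._count = 0               # private memory

    def run(self):
        while True:
            msg, reply_to = self.mailbox.get()
            if msg == "incr":
                self._count += 1
            elif msg == "get":
                reply_to.put(self._count)
            elif msg == "stop":
                break

counter = Counter()
counter.start()

reply = queue.Queue()
counter.mailbox.put(("incr", None))
counter.mailbox.put(("incr", None))
counter.mailbox.put(("get", reply))
result = reply.get()  # both increments are processed first (FIFO mailbox)
counter.mailbox.put(("stop", None))
```

Nothing outside the thread ever touches `_count` directly, which is the isolation property the excerpt is describing.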
Nevertheless, I shall persist and so should you.
I know someone who worked at a treatment center for disadvantaged inner-city girls (drug treatment, mostly court mandated). From what I understood, motherhood is a status symbol, also a rite of passage. A way to earn respect. All of a sudden people pay more attention to you. It also affords a pass into a club of other unwed mothers, many of them older, who might have served as role models growing up. Like the observation said, it is seen as the next step in life.
On a deeper level, I think it also provides companionship and family where there is none. Deadbeat dads or moms, girlfriends and hookups who are abusive come in and out, but this one little person will be there looking up to them, never going anywhere, providing the love and attachment they never got much of. That is at least my interpretation of it. It is unfortunate because in most cases these children and parents will have a hard time. It is very selfish to bring children into the world just to be used as a status symbol or to provide companionship when there are simply no resources to raise them safely.
On an even deeper, perhaps unconscious, level, maybe having children can be seen as giving up on accomplishing more in life and instead choosing to procreate, hoping maybe the offspring might have a better shot at it.
> The good father is somebody like your friend.
I can see how that would be an attempt to reverse or mend their own experience with their fathers growing up. Their father wasn't there. Their father wasn't their "friend". Their father used to beat them and be harsh. So they vow to be the opposite.
The one hope in this is that it would also reverse some of the stereotypes about men. Men are the default guilty party in family disputes. They are the stereotypical predator and abuser, while women are given great leeway, and only with concrete and absolute evidence will they be considered unfit to take care of the child. This mentality has permeated the court system, the school system, and the culture in general. Hopefully this leads at least to a re-evaluation of those stereotypes.
I think a better metric would probably be something from information theory like mutual information, but I'm not sure which one exactly.
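For a binary event, mutual information between forecast and outcome can be computed straight from a contingency table. A sketch in Python with invented counts:

```python
from math import log2

# Joint counts of (forecast, outcome) for a binary event,
# e.g. "rain forecast" vs "rain observed". Counts are invented.
counts = {("rain", "rain"): 30, ("rain", "dry"): 20,
          ("dry", "rain"): 10, ("dry", "dry"): 40}

total = sum(counts.values())
p_joint = {k: v / total for k, v in counts.items()}

# Marginal distributions of forecast and observation.
p_fcst, p_obs = {}, {}
for (f, o), p in p_joint.items():
    p_fcst[f] = p_fcst.get(f, 0.0) + p
    p_obs[o] = p_obs.get(o, 0.0) + p

# I(F;O) = sum over (f,o) of p(f,o) * log2(p(f,o) / (p(f) * p(o))),
# measured in bits; 0 means the forecast carries no information.
mi = sum(p * log2(p / (p_fcst[f] * p_obs[o]))
         for (f, o), p in p_joint.items() if p > 0)
```

With these made-up counts the forecast carries about 0.12 bits of information about the outcome; a perfectly uninformative forecast would score exactly 0.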
Other facets of forecast "goodness" exist and are often considered in meteorology. A seminal paper on the subject was penned by Allan Murphy, who identified three types of "goodness" (consistency, quality, and value) and ten subsets of quality (including reliability and accuracy). See http://www.glerl.noaa.gov/seagrant/ClimateChangeWhiteboard/R.... [PDF warning]
A popular companion to the reliability diagram is the Relative Operating Characteristics (ROC) curve. Here different forecast probability thresholds are tested to calculate likelihood of success if the event occurred, and likelihood of error if the event did not occur. This evaluates what Murphy calls discrimination (forecast quality conditioned what was observed) which complements reliability. See, e.g., http://www.bom.gov.au/wmo/lrfvs/roc.shtml.
Curiously, accuracy tends to take a back seat to other aspects of quality in forecast verification, particularly in rare-event situations. This trend began in the mid-1880s with the "Finley Affair", a series of published articles debating how to evaluate tornado forecasts issued by the US Army Signal Corps. Murphy published a fascinating literature review on the subject and showed that many of the skill scores and debates born during the Finley Affair are still active today. See http://www.nssl.noaa.gov/users/brooks/public_html/feda/paper.... [PDF warning]
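As a small illustration of the accuracy and discrimination measures mentioned above, the Brier score and a single ROC point take only a few lines (forecast numbers are invented):

```python
# Probabilistic forecasts for a binary event ("rain tomorrow")
# and the observed outcomes (1 = occurred, 0 = did not).
forecast_probs = [0.9, 0.7, 0.3, 0.1, 0.8]
outcomes       = [1,   1,   0,   0,   1]

# Brier score: mean squared error of the probabilities (lower is better).
brier = sum((p - o) ** 2
            for p, o in zip(forecast_probs, outcomes)) / len(outcomes)

# One point on the ROC curve: at a 0.5 threshold, the hit rate
# (probability of detection) and the false-alarm rate.
thresh = 0.5
warned = [p >= thresh for p in forecast_probs]
hits = sum(1 for w, o in zip(warned, outcomes) if w and o == 1)
false_alarms = sum(1 for w, o in zip(warned, outcomes) if w and o == 0)
pod = hits / outcomes.count(1)
pofd = false_alarms / outcomes.count(0)
```

Sweeping `thresh` from 0 to 1 and plotting `pofd` against `pod` traces out the full ROC curve described in the linked BoM page.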
In my totally unscientific opinion, weather.com has slowly gotten worse over the past several years. I think it's become a lot harder to monetize weather, and the weather service has really stepped up its game in providing public outlets that are digestible by the general public.
My guess is that in the early days, what weather.com and the Weather Channel were really doing was translating the difficult-to-understand NWS messages and helping the general public quickly answer "Do I need a raincoat today?". That said, over time, as the NWS has stepped up its public interfaces, that value-add is sliding backwards and getting harder to maintain.
That's my $0.02CPM worth.. :-)
Does anyone have experience getting a feed of the raw NWS forecast data for many points in a large region (e.g. a state or the whole country)? I was thinking the other day that it would be great to have a web site that showed the forecasted chance of precipitation across a region, to answer questions like "Where in the Colorado high country should I go camping this weekend?"
It could be done, for weather.gov, using only data available from the web site.
I don't really care about the other forecast sources.
Trusting rainfall observations obtained from weather.gov in order to compare them to predictions made by weather.gov seems vaguely wrong, but there is no other comparable source of observations. They are physical measurements, after all.
Some geographic areas probably have a coarser net of observation points. In some places, e.g., San Diego, the weather is inherently easier to predict. Some local forecast offices may be more skilled than others.
First of all, there are only two agencies on the planet that do the number crunching to work out reasonable forecast data encompassing the whole globe: the NWS and the UK Met Office. As well as needing a lot of big computers, these agencies also need source data. This data - observations - comes from airports and plenty of other places where things like wind speed, precipitation, temperature, and so on are actually measured. At times the observations are wrong - imagine the baking tarmac of that big airport and how that differs from the tranquil yet noisy houses close to a nearby river.
The NWS differs from the Met Office in that they don't charge for the GRIB data. The tax payer has paid for it already in the USA so they don't have to pay for it again. Hence the proliferation of things like The Weather Channel that use NWS rather than Met Office data.
One thing that outsiders to weather forecasting do not realise is what it is that weather forecasters actually do. They imagine them to be very scientific - which they are - but they don't realise that they are essentially in the 'betting shop' business. To take an automotive example: if you had perfect knowledge of every car entering tomorrow's F1 race, and perfect knowledge of the well-being of every single driver, mechanic, and tea lady involved in the event, could you actually predict which of the 22 drivers is going to win? Will it be the guy on pole? The guy who has won most of the races so far? The guy who consistently comes second? Or some random outsider?
The GRIB data is far from perfect knowledge, it is a forecast of what is going to happen and the accuracy depends on the time window going into the future. The data is fully 3 dimensional, think of it as lots of onion layers going around the whole planet. Data points are on a grid - what happens if your town is next to some huge mountain with 'your' data point on that grid being several thousand feet higher than where your town is? The GRIB data for your town is not actually for your town, it is for the mountain. A meteorologist will have rules of thumb plus the science to arrive at a more accurate guess than the GRIB gives - this is interpretation of the data, not some sixth sense, however, it is still nonetheless a gamble/guess.
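Horizontal grid mismatch is commonly handled by interpolating among the surrounding grid points (an elevation mismatch like the mountain example still needs separate corrections, e.g. a lapse-rate adjustment). A toy bilinear-interpolation sketch with invented temperatures:

```python
def bilinear(x, y, corners):
    """Interpolate a value at fractional position (x, y) within a
    grid cell, where x and y are in [0, 1] and corners holds the
    values at the four surrounding grid points:
    ((v00, v10), (v01, v11)) = ((SW, SE), (NW, NE))."""
    (v00, v10), (v01, v11) = corners
    south = v00 * (1 - x) + v10 * x  # blend along the southern edge
    north = v01 * (1 - x) + v11 * x  # blend along the northern edge
    return south * (1 - y) + north * y

# Invented temperatures (degC) at the four GRIB points around a town
# sitting 30% of the way east and 60% of the way north in the cell.
temp = bilinear(0.3, 0.6, ((12.0, 10.0), (14.0, 11.0)))
```

This only blends the horizontal position; it is exactly the kind of mechanical step a forecaster's rules of thumb then correct for local terrain.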
As well as the GRIB data there are things like satellite images - from lots of different flavours of satellite - plus there is radar data. This can all be layered up on top of GRIB data and pretty maps to create an interpreted forecast. The 'wet bias' is more likely to be a rookie meteorology mistake than a devious ploy to get viewers watching. Look at any satellite image and see the low-level haze from things like jet plane contrails plus coastal fog etc. There is an awful lot of it on satellite images, and it is very easy to end up permanently predicting rain from seeing such cloudy greyness. Hence at the local weather station this is more likely to happen. On the Weather Channel, where they have excellent interpretation tools for their forecasters, this is less likely to happen - not so much because of the tools but because of the forecasters: they are more experienced gamblers.
The other thing to remember with weather forecasting is that today's predictions can be checked against tomorrow's observations. Things can be consistently wrong for a given town/area due to the way the GRIB data works (i.e. it does not factor in local topography), and it can take a while before this error in the model is discovered and fixed. There may not be observation data available for smaller towns, so some errors might never be fixed.
The weather prediction industry is fairly ripe for disruption. The tools that meteorologists use have historically required big workstations to run; nowadays a Google Earth type of app would suffice, if someone could be bothered to write it.
Amongst themselves, meteorologists know a lot more about the current factors influencing the big picture of the weather. For instance, the storms that start off on the west coast of Africa, cross the Atlantic, and 'bounce back' to the UK, losing energy on the way to end up as mere rain. Clearly such weather patterns take weeks to do their thing; however, for a gardener in the UK it would be good to know if rain was on its way over the next few weeks. Yet the demands of the forecasting format mean that the forecaster has to tie that down to 'rain expected teatime next Tuesday' (or whenever). Returning to the 'app' idea, it would be great for everyone if they could explore the raw data and have these bigger events pointed out by an expert, so that the raw data can be interpreted in a meaningful way. Instead we have banal 'insights' such as this article (which probably did not intend to be banal or naive, but that is the way things sometimes happen despite trying hard).
I do have to agree with @willu tho, their commenting system is way too elitist. =(
> ...Product Hunt is an excellent sourcing tools for VCs looking to discover little known early stage startups.
If ProductHunt is "quickly becoming the hot new destination for sourcing startup investment opportunities" it cannot also be "an excellent sourcing tools [sic] for VCs looking to discover little known early stage startups." Lack of awareness of startups listed on ProductHunt is inversely correlated with ProductHunt's popularity.
If there's one thing Google will not stand for, it's highly trained customer support reps.
In that context, insurance is an interesting automation problem which is totally solvable.
I wonder if a company like Google, which can possibly determine non-obvious risk factors, has a major advantage here.
E.g., imagine if people who searched for the term "dui attorney" were 40x more likely to be involved in a vehicular homicide as a defendant. Google could refuse to insure those searchers and, as such, significantly cut everyone else's premiums (giving it a major competitive advantage).
I have no special knowledge about the distribution of insurance payouts, but I would guess it follows a power-law distribution. If so, removing the top 10% of payout insurees could HUGELY decrease insurance payouts.
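That heavy-tail intuition is easy to sanity-check with a quick simulation (the Pareto shape parameter and sample size are arbitrary assumptions, not calibrated to real insurance data):

```python
import random

random.seed(0)  # deterministic for illustration

# Simulate payouts from a heavy-tailed Pareto distribution.
# alpha = 1.5 gives a finite mean but a very fat tail.
alpha, n = 1.5, 100_000
payouts = sorted((random.paretovariate(alpha) for _ in range(n)),
                 reverse=True)

# Share of total payouts accounted for by the top 10% of claims.
top_share = sum(payouts[: n // 10]) / sum(payouts)
```

Under these assumptions the top decile of claims accounts for roughly half of all payout dollars, which is why dropping the riskiest insurees would move premiums so much.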
Does anyone know if the major costs to auto/home/health insurers are payouts?
Excluding healthcare, do people care if folks who exhibit risky behavior pay higher premiums? In the US, young men (actuarially proven to be higher risk) pay higher auto insurance premiums, and everyone seems fine with it.
The government of course loses money on this scheme, but the people would tear down any government that tried to revoke it. It's an important and fair middle ground between a national free ride and a USA-style 'sins of the father' system. Also note that the repayment amount is nowhere near the true cost of the degree, due to Commonwealth-supported places.
If you have time to work your way through school at current US tuition, you are probably taking courses below your level.
A program like this makes sense if the school has skin in the game. Ideally, the potential to lose its investment would make the school only extend this offer to students with a realistic chance of breaking even.
On the other hand, if the state extends this to all students without concern for ability to repay, it seems likely to be just another taxpayer subsidy for education - which may be good or bad, but it's not clear why a roundabout method like this is better than a direct tuition discount.
This captures my intuition. English and Sociology majors will choose this, not Math and Computer Science majors.
I had heard that Yale had experimented with a similar program. The problem is the distribution. It sounds great for the median student. The low earners don't make enough to pay their way. The high earners make a ton of money and push back on paying. In Yale's case, they may not have had all the forceful levers of the state. (And Yale wouldn't want to push too hard on the top 1%, alienating their best alums.)
Really, the headline makes me cringe, the idea that some people consider this a good idea makes me cry, and the domain name ("marginalrevolution") is a joke in this setup.
What a sad idea..
Edit: The state/the country already HAS equity (Ignoring the bullshit and that I think that this link only made HN because of this trigger word). It's called 'taxes'. If you flip burgers, the state gets little money. If you work for Google, FB, Apple, MS (yeah, all not in Oregon. But stay with me) the state makes quite a nice sum, every year, for ~40 years that you're supposed to work (k, feel free to reduce that number).
I haven't seen so many bullshit alarms related to a HN story going off for quite some time. I would love to sit down and talk to someone who seriously considers this 'cool' and 'a nice idea' and try to understand how that is even possible.
Wait, what? Can I get a source on that, bub?
I am concerned that making it all the way through a degree program has already started to indicate you aren't world-class, and things like this will only accelerate this trend.
I worked part-time during school and full-time during summer and put myself through college at Western, graduating with no debt. I lived with my parents and commuted to school. I don't see why this program is necessary.