No results found for site:wsj.com "create secure professional profiles".
They took a nice page out of the FB playbook, i.e., import as much data from your competitors as you can before they cut you off. I've never understood why LinkedIn gave them access to the API to begin with.
Now, is there anyone here who fell for their spammy messages and actually USES BranchOut?
* Little helicopters can now lift a substantial weight.
* They aren't very expensive.
* They're easily controlled, more so than a full-sized helicopter (primarily because of computer-aided controls and GPS guidance). So you don't have to be Chuck Yeager to fly one.
* All you need to do is mate the helicopter with a decent camera that can simultaneously beam a picture to the ground for guidance and preview, and take high-resolution pictures on command by way of the radio link.
* Uses: real estate agents (who desperately need a way to take high-quality pictures of houses from above), surveillance, art, video productions, etc.
This is an opportunity waiting for someone willing to take it on.
I notice that Gizmodo's already published a story based on this article: http://www.gizmodo.com.au/2012/09/check-out-lgs-ipad-from-20...
Was LG's tablet ever released to the public?
As requested: Basically, the gist is that you accept the upload via a local JS file that acts as a conduit. You then turn the dropped / selected file object into a Blob and hand that Blob to a JS file that lives on S3 (using postMessage and a hidden iframe). That JS file on S3 is what actually performs the upload and tracks the upload progress. On progress events, I send postMessage payloads back to the local JS file to show updates to the user.
Convoluted, but it works. :)
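A rough sketch of that relay, since the comment only describes it in prose (the bucket origin, the message shapes, and updateProgressBar are made-up placeholders, and the S3 POST policy fields are omitted):

// Local page: forward the dropped File (a Blob) to a hidden iframe served from the bucket.
var uploaderOrigin = 'https://mybucket.s3.amazonaws.com';   // placeholder origin
var iframe = document.getElementById('s3-uploader');        // hidden iframe hosting the S3-side JS
function startUpload(file) {                                 // file is a File, which is a Blob
  iframe.contentWindow.postMessage({ type: 'upload', blob: file }, uploaderOrigin);
}
window.addEventListener('message', function (e) {
  if (e.origin !== uploaderOrigin) return;                   // only trust the S3-hosted page
  if (e.data.type === 'progress') updateProgressBar(e.data.loaded / e.data.total);
});

// Iframe page (served from the S3 bucket): do the real upload, report progress back.
window.addEventListener('message', function (e) {
  if (!e.data || e.data.type !== 'upload') return;           // real code would also check e.origin
  var form = new FormData();                                 // real code adds the signed POST policy fields here
  form.append('file', e.data.blob);
  var xhr = new XMLHttpRequest();
  xhr.open('POST', '/', true);                               // same-origin POST to the bucket it was served from
  xhr.upload.onprogress = function (p) {
    e.source.postMessage({ type: 'progress', loaded: p.loaded, total: p.total }, e.origin);
  };
  xhr.send(form);
});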
At least now I won't launch something only to have Amazon eat my lunch when they finally come around to providing this much-needed feature.
Could somebody explain CORS to me? How is it secure to have the server you're contacting specify, in the response header, that it wants to receive requests? The request has already been made!
I've pleaded for this feature in the AWS forum, through their commercial support (which I bought just to bug them about this), and to Werner Vogels directly.
Also, are you sending email to users whenever they get comments? Emails to people who sign up but never post anything? Welcome emails to people who sign up telling them what to do? Create sets of automated email campaigns and it will dramatically improve your usage stats. Articles from Patrick trend on HN all the time, but here's a link in case you missed it (+1 from my professional experience): http://www.kalzumeus.com/2012/05/31/can-i-get-your-email/
Best of luck to you!
by way of an example, http://www.sublimetext.com/ has a set of features that really make it stand out as a text editor. they could be written out in text, but it's hard to understand what they are when described in words. instead the author has put some animated pictures showing the actual features in use. this presentation instantly shows how you would use those features and most people understand right away where the benefit is.
i'm not sure animated would work for yours, but perhaps a swapping image showing a particular artist's progress over a few weeks/months.
just an idea, hope it helps. 1 other tiny thing that got to me, the top nav text isn't vertically centered in the space. it's a small gripe, but sometimes little details can make the difference, especially on a site for artists.
edit: another thing that comes to mind, you feature art on the homepage in a similar style to a portfolio. including what's new or popular. what if instead you featured the artist, and then change between pictures of their progress up to that point. you could highlight the learning aspect by showing progress of each artist you feature on the homepage, rather than just 1 piece of art by that artist.
I tried your tool and I didn't really understand how it taught me anything. It put up a picture, waited for me to draw in your limited drawing tool, then asked me to evaluate myself. I just scribbled and gave myself four stars and everything continued happily.
It seems like the practice engine lacks useful feedback, but maybe that is a feature I would get if I signed in?
But when I scanned it, all I saw was "$2.00", not the freebies.
Maybe style the buttons, so they display what is free?
Any insight as to what I could be doing wrong?
This is particularly relevant because the valley's succession story du jour is Apple and whether Tim Cook et al can take the reins in the wake of Steve Jobs. The following quote struck me:
"It's not going to be easy for Yoshikazu to succeed his father at the same restaurant. Even if Yoshikazu makes the same level of sushi it will still be seen as inferior. If Yoshikazu makes sushi that's twice as good as Jiro's, only then will they be seen as equal." (32:06)
This is exactly what Apple has been going through in the last year, delivering a level of polish that is on par with, if not above, what they released last year, but still leaving nagging doubts in the hearts of the faithful. The one thing that would silence critics and quell fears would be for something twice as revolutionary as the original iPhone to be imagined, developed, and shipped by the post-Jobs Apple--just to claim par.
It was entertaining though. I especially liked the part when he said something like "Welp. I'm ready to go. Why am I even here [at his parents' shrine]? My parents treated me like crap."
The man was Jiro, their father. Is that the childhood you want your children to live? For me, the mastery of a craft is not worth this price.
(The three expert reviewers shown on Amazon are all very impressive researchers on human intelligence in their own right, so their joint endorsement of Flynn's book carries a lot of weight for people like me who follow the research.)
Here is what Arthur Jensen said about Flynn back in the 1980s: "Now and then I am asked . . . who, in my opinion, are the most respectable critics of my position on the race-IQ issue? The name James R. Flynn is by far the first that comes to mind." Modgil, Sohan & Modgil, Celia (Eds.) (1987). Arthur Jensen: Consensus and Controversy. New York: Falmer.
AFTER EDIT: Replying to another top-level comment:
I don't understand how anyone could not have an emotional response being told 'your IQ is x'.
People have emotional responses to most statements about themselves that they think are overall evaluations. Some of those emotional responses are more warranted than others. Devote some reading time to the best literature on IQ testing (besides the book under review in this thread, that would include Mackintosh's second-edition textbook IQ and Human Intelligence and the Sternberg-Kaufman Cambridge Handbook of Intelligence, both recently published). Any of these books will help readers understand that IQ tests are samples of learned behavior and are not exhaustive reports on an individual's profile of developed abilities.
AFTER ANOTHER EDIT:
Discussion of heritability of IQ, a reliable indicator of how much discussants read the current scientific literature on the subject, has ensued in some other subthreads here. Heritability of IQ has nothing whatever to do with malleability (or, if you prefer this terminology, controllability) of human intelligence. That point has been made by the leading researchers on human behavioral genetics in their recent articles that I frequently post in comments here on HN. It is a very common conceptual blunder, which should be corrected in any well-edited genetics textbook, to confuse broad heritability estimates with statements about how malleable human traits are. The two concepts actually have no relationship at all. Highly heritable traits can be very malleable, and the other way around.
Johnson, Wendy; Turkheimer, Eric; Gottesman, Irving I.; Bouchard Jr., Thomas (2009). Beyond Heritability: Twin Studies in Behavioral Research. Current Directions in Psychological Science, 18, 4, 217-220
is an interesting paper that includes the statement "Moreover, even highly heritable traits can be strongly manipulated by the environment, so heritability has little if anything to do with controllability. For example, height is on the order of 90% heritable, yet North and South Koreans, who come from the same genetic background, presently differ in average height by a full 6 inches (Pak, 2004; Schwekendiek, 2008)."
Another interesting paper,
Turkheimer, E. (2008, Spring). A better way to use twins for developmental research. LIFE Newsletter, 2, 1-5
admits the disappointment of behavioral genetics researchers.
"But back to the question: What does heritability mean? Almost everyone who has ever thought about heritability has reached a commonsense intuition about it: One way or another, heritability has to be some kind of index of how genetic a trait is. That intuition explains why so many thousands of heritability coefficients have been calculated over the years. Once the twin registries have been assembled, it's easy and fun, like having a genoscope you can point at one trait after another to take a reading of how genetic things are. Height? Very genetic. Intelligence? Pretty genetic. Schizophrenia? That looks pretty genetic too. Personality? Yep, that too. And over multiple studies and traits the heritabilities go up and down, providing the basis for nearly infinite Talmudic revisions of the grand theories of the heritability of things, perfect grist for the wheels of social science.
"Unfortunately, that fundamental intuition is wrong. Heritability isn't an index of how genetic a trait is. A great deal of time has been wasted in the effort of measuring the heritability of traits in the false expectation that somehow the genetic nature of psychological phenomena would be revealed. There are many reasons for making this strong statement, but the most important of them harkens back to the description of heritability as an effect size. An effect size of the R2 family is a standardized estimate of the proportion of the variance in one variable that is reduced when another variable is held constant statistically. In this case it is an estimate of how much the variance of a trait would be reduced if everyone were genetically identical. With a moment's thought you can see that the answer to the question of how much variance would be reduced if everyone was genetically identical depends crucially on how genetically different everyone was in the first place."
The review article "The neuroscience of human intelligence differences" by Deary and Johnson and Penke (2010) relates specifically to human intelligence:
"At this point, it seems unlikely that single genetic loci have major effects on normal-range intelligence. For example, a modestly sized genome-wide study of the general intelligence factor derived from ten separate test scores in the cAnTAB cognitive test battery did not find any important genome-wide single nucleotide polymorphisms or copy number variants, and did not replicate genetic variants that had previously been associated with cognitive ability[note 48]."
The review article Johnson, W. (2010). Understanding the Genetics of Intelligence: Can Height Help? Can Corn Oil? Current Directions in Psychological Science, 19(3), 177-182
looks at some famous genetic experiments to show how little is explained by gene frequencies even in thoroughly studied populations defined by artificial selection.
"Together, however, the developmental natures of GCA and height, the likely influences of geneâ€"environment correlations and interactions on their developmental processes, and the potential for genetic background and environmental circumstances to release previously unexpressed genetic variation suggest that very different combinations of genes may produce identical IQs or heights or levels of any other psychological trait. And the same genes may produce very different IQs and heights against different genetic backgrounds and in different environmental circumstances."
I was tested twice when I was in second grade, the first time by a psychologist in a class setting and the second time one-on-one to verify the first. I tested quite high, but even then I knew I wasn't noticeably smarter than others; I just test particularly well.
Frankly, it cracks me up when someone refers to their IQ seriously or boasts about being in Mensa. Subjectively, other than testing for mid-to-high-level mental retardation, IQ tests seem to be terrible at measuring actual brilliance.
> the penultimate chapter is a list of 14 examples in which science has failed because of social blindness.
This carries through more broadly and generally to the application of many incorrect fundamental assumptions to the design of our institutions, which consistently fail because of the resulting flawed structures.
People who think that intelligence is innate will refer to the "g factor": http://en.wikipedia.org/wiki/G_factor_%28psychometrics%29
The racist (technically speaking) and controversial J. Philippe Rushton believes that "gains in IQ over time (the Lynn-Flynn effect) are unrelated to g".
What's really going on with discussions like this is that humanity seems to have come to the conclusion that "smarter is better". Intelligence almost certainly exists, but is quite difficult to quantify exactly, and is further complicated by the fact that people resent others who are "better" than them. Given that there is a genetic component to intelligence to a certain degree, race becomes a factor and leads to a line of inductive reasoning that makes people uncomfortable:
1) Smarter is better
2) Intelligence is genetic
3) Race determines genetics
4) One race is better than another
5) Hitler was right (or other outrageous conclusion)
What? No... no. There are plenty of ways that intelligence could have dramatically increased since the '30s without any evolutionary effects.
Which means I can't read it, because my eyes suck.
Group differences in IQ do exist, and the Flynn effect does not make them go away. The Flynn effect increases scores across the board; it does not equalize different groups.
Black people in the US have a lower average IQ than white people do, and Asian people have a higher average IQ. The reasons for this are many, but genetics certainly comes into play: IQ is heritable.
Trying to explain away group differences with "culture" is mostly bad science - trying to make the facts fit your desired conclusions.
Someone call up Paul Krugman and tell him that iodine deficiency doesn't actually cause mental retardation.
That's going to need some explanation. First off, I can see how the question itself is abstract, about a place they have never been, but "no camels" sounds extremely concrete to me. Second, how is ignoring information anything other than a lapse in intelligent thought?
This is similar to statements engineers sometimes make that the Chinese are good copiers but can't innovate.
No, those cultures are the same as everyone else's. They have humour and art, and they like to tinker and have in-jokes. If they don't publicly innovate, it's more likely that it's not economical in that environment yet.
The fact 'camels' was used has strong undertones to me; if this was actually a study, I'd be interested to know.
They may be; it is true that many birds exhibit complex behavior. But at least based on the info given in this article, there are a lot of unjustified assumptions about the internal states of the birds' brains, based on deep, deep subconscious assumptions about how humans would be feeling if we saw humans acting that way. I would consider it just as likely that, to the extent they are "feeling" something, it is something with no human analog.
Was the origin of that behavior a sort of opportunity for groups of humans to learn from the death of their friends? Or are our emotions a sort of outgrowth of the behaviors these birds display?
Now, years later, we have a domesticated Meyer's Parrot and his behavior is very complex and interesting.
And it makes me sad. We're not considerate towards animals. We hardly give them a second thought. We eat them, cage them and experiment on them.
Humans need to be better "caretakers" of the planet. We need advances in compassion towards animals and the environment, not more advances in technology.
Hasn't Google+ essentially solved a generalization of this problem with Circles? Creating a whole new account just to keep your networks separate seems very clunky (and highly inelegant) by comparison.
The nice thing about trusting just a latest-edition Intel CPU is that they're so far ahead of everyone else in process that most attacks would be technically difficult for anyone except Intel, the NSA, etc. Chris Tarnovsky isn't going to be able to extract keys out of Intel E5 CPUs in 6 hours, even with a lab 10x bigger than his $1.5mm one, so as long as you deal with a machine which disappears faster than 6h (rotating keys, releasing the hounds, etc.), you should be safe.
One of the few things (along with the takeover of mobile OSes vs. legacy crappy desktop OSes) which makes me hopeful for security.
Just hope it doesn't get turned on its head for yet another layer of DRM that stops us from accessing our own content.
I guess (or hope) that seeding the key into the CPU in the first place is what's going to make it hard for content owners to use for DRM?
That sure is something EA and Ubisoft wouldn't do in a million years.
And I know I've said this before, but there are open-source drivers for the other vendors' cards as well...and I think they'd benefit tremendously from some Valve love.
That is probably something most HNers know or suspect already, but this seems like a particularly clean proof of that concept.
I have renewed hope.
I bought mine (a Supermicro 2U dual 3GHz Xeon 4GB 12-bay storage server) for $400 on eBay, with great service and terms. Recommended if you are ever in need of (or just want) a private server rack unit (1U, 2U or more).
I wonder in this case why -- is there any specific reason? -- Etsy is not utilizing virtualization of one kind or another. Basically they're building dumb boxes, when it seems a much better fit to have a good number of slightly beefier VM hosts and just rebuild bad instances from images (this would be especially good since they're already utilizing Chef).
I know that, as an e-commerce firm, moving toward this kind of structure has helped us substantially.
It sounds like a single switch failure could take down a large amount of their infrastructure. The seeming lack of geographic (or even datacenter) redundancy also seems a little dangerous at this scale.
I actually did write up and submit a police report, but it was about 3 months before I heard anything from the police.
Lessons learned: (1) sometimes it's better to roll up your sleeves and do it yourself and (2) some (most?) people legitimately want to come clean.
That's the first thing I would do if I were in a business like that.
I am going to make all my mobile devices hit a webpage on a few of my servers silently on bootup (if there is a web connection) so I would at least have that IP. I'm also embedding a hidden image into the browser's about:blank (startup) page.
Defcon 18 - "Pwned by the Owner: What Happens When You Steal a Hacker's Computer" - Zoz
That said, many of the components of GNOME (GTK, GStreamer, etc.) are great and we shouldn't forget their usefulness to other projects whilst the Gnome Desktop is coming up against these existential questions...
This is really the only reason that I can't recommend trying Ubuntu to more people. People cite usability concerns etc., and while there are some issues there, I think it's mostly "good enough" now and we're long past the days of having to compile a .tar.gz full of .c files and fiddle with vi in order to get sound to work.
So the issue is how to get third party developers interested. I think the best way is by including a really sexy app store. Ubuntu Software Centre is a start but it's still nowhere close to what Apple has achieved in this area. Nasty looking icons, inconsistent screen shots (some showing gnome2, others Unity) and thousands of free apps with weird names don't make it the most attractive place to shop.
In many ways though, I would consider desktop Linux a success regardless of marketshare, for the simple reason that it is now possible to "use a computer in freedom". I think the software world would be a bleaker place if Torvalds, Stallman et al. hadn't spent the hours pushing code. Imagine a world where the cheapest HTTP server license ran into the thousands of dollars.
Ubuntu is a very successful desktop Linux distro. It's pleasant to use and very modern. Nerds might hate it because Unity doesn't fall in line with Linux "the project" so much as it's there to make Ubuntu "the product" better.
Overall, desktop Linux as an overarching product failed, but so did mobile Linux pre-Android, and Android isn't so much mobile Linux as it is Android.
Open source is a bit like herding cats and if you don't have a real product you are trying to ship, devs will scratch their own itch.
The only thing that was keeping my parents and some of my friends was that they needed Microsoft Office for their work/schoolwork. As of now MS Office runs like a piece of cake on Linux -- with PlayOnLinux (http://playonlinux.com/). Honestly, it's quite impressive how smoothly it runs (and how easy it is to install it.)
MS Office isn't the only application that PlayOnLinux supports - there are a ton of games and other software (Photoshop, Blender, Dreamweaver, Flash, etc.) that it supports. To top all that off, I feel like the Linux desktop has gotten better and better lately. I use KDE 4.9, and I will say it is quite nice. The level of integration KDE offers and the high quality of many of the standard apps that come with it will make a Windows user never turn back. Ubuntu too has a rather simple and straightforward UI (although it doesn't personally appeal to my taste).
In my opinion, it comes down to these two reasons:
1) The difference between a Linux desktop & a Windows/Mac desktop is negligible, or worse. GNOME/KDE don't really add any compelling features that make them better than Windows or Mac anymore. I remember a few years ago, I loved putting Ubuntu on my system because drivers would be downloaded automatically and I could easily access all Linux packages from one simple package manager. The folder browser was pretty familiar, and the GNOME 2 bar was a nice hybrid between Mac & Windows, but nothing too special.
However, the driver installation & central package directory have long since stopped being a competitive advantage, given the era of Windows 7, etc. Granted, these weren't "defining" features of Linux, but when I personally used it, these were things that struck me then; they are no longer relevant now.
2) The ecosystem. I think it goes without saying that the Linux software ecosystem is much more fragmented, and is often found in the "underground".
It's not an issue of whether or not there are substitutes to things like Office, Adobe Creative Suite products, iPhoto, and other essential apps that normal people/working people use on a daily basis, etc. (although I do think that there aren't adequate substitutes for these and that the friction of trying to get these actual products to work through things like WINE, etc is too much).
However, because things are so much more fragmented on Linux, it being an open system, it's harder for there to be de facto software (unless you lurk in some Linux community, which again, normal people aren't generally interested in).
The user has to make so many choices, which are often arbitrary and needless, and in doing so becomes frustrated and confused. There is too much stuff to explain that isn't necessary to explain, and too much detail to go into that again, is not practical.
The trade-off with Linux is that you get an enormous amount of power and responsibility. The benefit of this is that you get an enormous amount of power & responsibility. The cost is...the same, some people just don't want to bother.
The desktop is being left behind by mobile. Recognize that Android/GNU/Linux has won mobile. Despite an early lead, Microsoft has been crushed worldwide in mobile by a Free Software platform.
Android provides a platform where both Free libraries and closed source apps proliferate - and are very inexpensive. All the failures of desktop GNU/Linux have been solved, or are not relevant, in mobile GNU/Linux.
Regarding the tablet segment of mobile, one could argue that GNU/Linux will seize the low end and gradually gain market share at the expense of iPad, leaving no room for Microsoft.
The only reason I care about desktop OS at all is to develop for mobile or back-end server.
Overall, I like Unity way more than current Gnome, KDE, XFCE and LXDE. Your mileage may vary.
I think the real Linux desktop problems show up when something goes wrong. Sometimes updates are unsafe. Sometimes you find a bug in a piece of software. I had 10 or 20 crashes and error report windows in my first day. Commercial software is terrible too. Skype is buggy, crashes often and is just bad. Nvidia binary drivers suck and nouveau crashes on my card (560 Ti). TwinView can only VSync one screen; your other screen is doomed to lag on renders. Xinerama has a bug with the cursor randomly jumping over to another screen. Whenever something bad happens you resort to Google and waste 10+ minutes fixing it.
I also think that applications not being made for Linux is not a very big deal. The 80% use case includes a browser, a music player and an office package, all of which are included by default in most distributions.
The funny thing is that desktop Linux apps have always been trying to match Windows apps feature-to-feature, but web developers haven't. Turns out I didn't need every feature from Excel, I needed something faster, more convenient, and easier to use.
I read that blog post and all I can do is shake my head. They seem to be fighting the same old monsters for the last 12 years. The list of reasons on the post sickens me.
Only recently (3 years?) have Linux distros started being friendly to "mere mortals". This change coincided with the acceleration of the demise of the PC. There may never be a year of the Linux desktop.
Yet, Linux is everywhere. As soon as you fire up your network connection, you are using Linux. Every time I look up the time on my phone, I'm using Linux. Most internet-connected TVs run Linux, as do most set-top boxes and e-readers. I amuse myself thinking of the convoluted things Steve Ballmer is compelled to do just to be able to claim he doesn't use it.
Cheese and Gnome Shell are all very nice, but so what?
I think MS realises this; there's a suggestion that they'll be offering Windows 8 for a reasonable figure for once. Then desktop Linux had better beware.
Another aspect was the lack of availability of compatible software. Software back in the day came in the form of CDs on the shelf, and I don't think anybody made any of them Linux compatible.
So the challenge was just not there from the beginning to get hold of the average desktop user.
If you don't think Microsoft is directly responsible for this, you are an absolute idiot.
Every manufacturer has to pay for Windows Mobile for every Android phone they sell. No company has stood up to Microsoft. This is a real threat.
You want to talk about Gnome 3? Fuck you. Why would anyone invest a cent in a WM if you can't distribute Linux installed on a laptop?
People like talking about Microsoft and Apple as though they are different teams. Nope, they are on the same team: fuck people who think they can get by without them.
I find preoccupation with some company's metaphor to be a sign of lack of creativity. And the people behind Linux distributions are obsessed with Microsoft and the "desktop".
The "desktop" is only one metaphor.
Does iOS have a "desktop"?
To speed up Vista when it was first released, I used to disable the desktop in Windows by changing the registry key that specifies "explorer.exe". I would just boot to msconfig or task manager.
The system ran much faster that way. Applications can still be minimised. It worked so well, I never went back to the aero nonsense.
Obsession with a "desktop", and trying to look like Microsoft's version of it, is one of Linux's major flaws.
Is, say Google, allowed to use that without paying royalties? Or does HP still own those software patents?
Of course, this is all a drastic oversimplification. Rookies can get jobs, and get some experience by exposure. But developing talent effectively & efficiently takes more time & resource investment beyond just having them do work, and that's investment with an uncertain and non-immediate return. It's a tough nut to crack, but the place where I would start is figuring out how to retain your good employees, especially when the growth of their market value is faster than your company's standard career advancement path.
Colleges of course do this already, and it is called by different names, such as Drexel's co-op program, which has been around since 1919:
At first it seems unbelievable and impossible, and you think the person is crazy, but after "troubleshooting", something rational pops up.
I can't tell you how many times this has happened. But it really doesn't help having a product/service that manages (on Windows) an underlying system of Virtual Hosts, dozens of configuration files, Apache, PHP, and MySQL, and a bunch of other software and tools (http://www.devside.net/server/webdeveloper).
You can make money by being the intermediary to find subjects for experiments, e.g. "For a study we are looking for identical twins who cannot see from birth but now one has restored vision where the other does not".
Does something that looks remotely similar exist?
Granted, once I knew she was on a pacemaker, I figured that this had something to do with electrical connectivity. But then again, maybe this is a consequence of my poor soldering skills and watching too much House MD.
"Dr, Dr! Every time I drink a cup of coffee, I get a stabbing pain in my right eye..."
(google it if you don't remember)
Exhibits the same kind of ability to see the whole situation and make a diagnosis.
From what I learned during my medical training, this kind of issue is not so uncommon, but it is usually diagnosed very easily. Her pacemaker can be disabled using a simple magnet. This is a common test in nearly all protocols to check how the heart is working without the help of the device. Doing this simple test while upside-down would have shown that the pacemaker was indeed working in that position. That should have been enough to ring a bell for most cardiologists.
Since in there they say the patient has "Nephroptosis, also known as 'Floating Kidney'", which is a listed medical condition, conditions like the OP's should not be uncommon.
"In a third post from mid-2011 titled "Basketball and Jazz," one of Lehrer's paragraphs closely paralleled one written by Newsweek science writer Sharon Begley some three years earlier.
"The rebounding experiment went like this: 10 basketball players, 10 coaches and 10 sportswriters, plus a group of complete basketball novices, watched video clips of a player attempting a free throw. (You can watch the videos here.) Not surprisingly, the professional athletes were far better at predicting whether or not the shot would go in. While they got it right more than two-thirds of the time, the non-playing experts (i.e., the coaches and writers) only got it right about 40 percent of the time.
"In the experiment, 10 basketball players, 10 coaches and 10 sportswriters (considered non-playing experts), and novices all watched a video clip of someone attempting a free throw. The players were better at predicting whether the shot would go in: they got it right in two-thirds of the shots they saw, compared to 40 percent right for novices and 44 percent for coaches and writers.
"Tellingly, Begley misstated the number of participants in the study. (There were only 5 coaches and 5 sportswriters, not 10 of each. In addition, there were also 10 people in the novice group who were neither coaches nor sportswriters.) Lehrer made the exact same mistake in precisely the same manner."
When Lehrer reproduces someone else's mistake, you know he isn't looking up or verifying the facts himself. The honorable thing to do in a blog would be simply to link to Begley's piece and say, "Sharon Begley wrote an interesting article a few years ago about a study on this issue."
P.S. I posted an article to HN earlier about the initial discovery of Lehrer making up quotations in articles in other publications.
I had also seen him "recycle" earlier writings of his in paid publications, because once one of his articles was submitted here to HN, and I thought, "Hey, I've read this before." Indeed I had, in the previous publication where he had first written on the same subject a couple years earlier.
As medical types will tell you, one of the problems of running imaging and diagnostics on ill / injured / diseased patients is that you'll find anomalies -- not because they're relevant to the illness in question, but because individuals differ.
What is the prevalence of the cited behaviors -- recycling, press-release plagiarism, plagiarism, quotation issues, and factual issues -- in an unbiased sample of other authors / reporters / columnists / essayists?
What, specifically, is wrong with some of the behaviors in question? I haven't followed the Lehrer situation particularly closely, but I'm aware that he's admitted to fabricating quotes from Bob Dylan specifically (not good).
I'm a bit puzzled as to what he's being faulted for in "recycling" -- essentially reusing his own material.
The press-release plagiarism cited appears to involve taking quotes from press releases rather than interviews (which Lehrer shaded to sound like they had been told to him directly). The looser view would be that, well, the press release "told Lehrer" ... and anyone else reading it. Not great, but a modestly pale shade of gray.
Direct quotations of the published, non-press-release works of others is getting rather darker. Though I wouldn't mind knowing what specific rulebook(s) Seife is playing from when he states: "Journalistic rules about press releases are murky. Rules about taking credit for other journalists' prose are not." I mean, I really hope we're not making shit up as we go along (and frankly have no way of knowing if Seife is or isn't -- he's, erm, not citing sources, merely his own authority as a professor of journalism).
Seife admits as much later in his piece: "There isn't a canonical code of conduct for journalists; perfectly reasonable reporters and editors can have fundamental disagreements about what appear to be basic ethical questions, such as whether it's kosher to recycle one's own work." He also notes that recycling can be considered common and acceptable practice, though he feels it "may violate the reader's trust". My own experience, especially in persuasive writing that's repeated as an author attempts to argue for a position, is that there is considerable recycling of material, though often an author will refine and strengthen arguments over time. That's what I myself practice.
Handling quotations also allows for some leeway. It's not uncommon to tidy up tics of speech and grammar particularly from spoken conversational passages. It can, in fact, be a negative shading to quote someone with complete faithfulness and accuracy, including all "ers", "ums", "ahs", and syntactical tangents and fragments. That said, changing meaning in as fundamental a manner as to equate memorizing a few stanzas of an epic work with memorizing the whole thing, and failing to correct it, is pretty bad.
At different points in time, attitudes toward what would currently be considered plagiarism in news were radically different. It's very, very helpful to recognize that outside a relatively few fairly stable rules (murder, real property theft), much of ethics and morals is temporally, culturally, and situationally relative. Today we suffer witches to live. In Revolutionary America, plagiarism was common practice (http://www.huffingtonpost.com/todd-andrlik/how-plagiarism-ma...). My feeling is that too strict an insistence on slavishly faithful accuracy can be as much a liability as confabulation. We know now that war photographers since Brady have staged and arranged subjects in photographs to more effectively tell stories. That NASA image processing often involves significant Photoshop enhancement and visible-range representations of invisible spectra from radio, infra-red, ultra-violet, and X-ray ranges. That NPR extensively edits interview audio, and will even modify "live" host comments over the course of repeats of their anchor news programs Morning Edition and All Things Considered to correct for flubs. That Campbells put marbles in its soup, that clothing catalog models wear heavily pinned garments, and that HN moderators will re-edit headlines and censor meta articles.
Who ya gonna shoot?
If we're going to hang Lehrer, let's hang him for what he's been doing deliberately and in clear exception to both norms and hard-written rules. Not based on either fast-and-loose definitions of correctness or normal deviations.
That being said, I'm glad the journalism community is cracking down on people who are recycling, plagiarizing, and not fact checking.
Even the case of copying someone else's mistake (in the "10 sportswriters" example) also seems forgivable to me... and- forgive me if I'm too generous- just another mistake, albeit this time on Lehrer's part.
First their story about abuse at Apple factories in China turned out to be a piece of fiction. Now all this with their contributor Jonah Lehrer.
Yet you still have to learn a separate template language, deal with the DOM, browser inconsistencies, etc.
Meteor Spark looks pretty nice though; look forward to trying it out.
I like the realtime concept/behavior but I want to keep developing in django.
Btw, this feature sort of already exists in jQuery. http://api.jquery.com/link/
I have prepared well for the interviews every time. I understand the algorithms, the data structures, and can program them on my own time no problem. But the second I'm in an interview setting, I lock up and can't think. I stumble across stupid thoughts (How many bits are in a byte? Oh yea, 8. But what about the 0th bit? What to do?!) and just work myself into a corner. All the while trying to seek approval from the interviewer. After about 2 minutes, I become a wreck and am hopeless.
I also don't consider myself non-social, and deal with coworkers very well. I still don't understand what it is about the technical interview setting that makes me act like this.
From my experience the average Google interview requires you to be so deep into the algo/data-structure space that you should be able to code up the KMP algorithm off the top of your head. You have to be a TopCoder with at least a 1200 rating or equivalent algo & coding skills. Coding speed also matters. You should be able to scribble Floyd-Warshall.
Why this way? Well, that's where Google did most of their recruiting from back in the day. Anyone who says they got hired without this is either lying or got lucky in the interview process. Same goes for the new wave of startups in the bay area... facebook/palantir/quora etc.
I was very much into the OS and compiler space in school and that was what I was interested in. Got an offer from msft and amzn, but not google. So kids, read up on CLRS & The Algorithm Design Manual, solve every problem there, & also create your account today on TopCoder if you'd like that job at el goog. Any other book that says otherwise is the equivalent of Linux Programming for Dummies or Complete C++ in 21 Days.
There are of course multiple drawbacks, but at least this method could help alleviate the problem of false negatives.
but seriously, why do i need to remember information about languages or algorithms i don't use daily? my memory is limited and i prefer it to be filled with the most useful information at the time.
The Microsoft representatives have told me to use a short one-page resume. The Google representatives instead told me the exact opposite and said to put down everything relevant to the position, regardless of length.
I now have two resumes, a short one for on the floor career fairs and a multi-page that I submit online.
(1) See a market opportunity in yachts 55 feet long. Need to hire a yacht designer to get the engines, hull shape, hull construction, safety, other engineering details right and supervise the construction including selecting the people for the interior design and finishing the interior. Want (A) someone who has done such work with high success for two dozen yachts from length 30 feet to 150 feet or (B) someone with the potential?
(2) Have a small but rapidly growing Web site and need to hire someone to get the server farm going for scaling the site. They need to design the hardware and software architecture, select the means of system real time instrumentation, monitoring, and management, work with the software team to make needed changes in the software, design the means of reliability, performance, and security, get the backup and recovery going, design the server farm bridge and the internal network operations center (NOC), write the job descriptions for the staff, select and train the staff, etc. Now, want someone who has recently "been there, done that, gotten the T-shirt" or someone with the 'potential' of doing that?
(3) Need heart bypass surgery. Now, want someone who has done an average of eight heart bypass operations a week for the past two years with no patient deaths or repeat operations or someone with that 'potential'?
(4) Similarly for putting a new roof on a house, fixing a bad problem with the plumbing, installing a new furnace and hot water heater, installing a high end HVAC system, etc.?
War Story: My wife and I were in graduate school getting our Ph.D. degrees and ran out of money. I took a part-time job in applied math and computing on some US DoD problems -- hush-hush stuff. We had two Fortran programmers using IBM's MVS TSO, and in the past 12 months they had spent $80K. We wanted to save money and also do much more computing. We went shopping and bought a $120K Prime (really, essentially a baby Multics).
Soon I inherited the system and ran it in addition to programming it, doing applied math, etc. When I got my Ph.D., soon I was a prof in a B-school. They had an MVS system with punched cards, a new MBA program, and wanted better computing for the MBA program. I wanted TeX or at least something to drive a daisy wheel printer. Bummer.
At a faculty meeting the college computing committee gave a sad report on options for better computing. I stood and said: "Why don't we get a machine such as can be had for about $5000 a month, put it in a room in the basement, and do it ourselves?" Soon the operational Dean wanted more info, and I led a one-person selection committee. I looked at a DG (as in 'The Soul of a New Machine'), a DEC VAX-11/780, and a Prime.
The long-sitting head of the central university computer center went to the Dean and said that my proposal would not work. I got a sudden call to come to the Dean's office and met the critic. I happened to bring a cubic foot or so of technical papers related to my computer shopping. I'd specified enough ordinary, inexpensive 'comfort' A/C to handle the heat, but the critic claimed that the hard disk drives needed tight temperature and humidity control or would fail. I said: "These disk drives are sold by Prime, but they are actually manufactured by Control Data. I happen to have with me the official engineering specifications for these drives directly from Control Data." So I read them the temperature and humidity specifications that we could easily meet. The critic still claimed the disks would fail. Then I explained that at my earlier site, we had no A/C at all. By summer the room got too warm for humans, so we put an electric fan in the doorway. Later we had an A/C evaporator hung off the ceiling. Worked fine for three years. The Dean sided with me.
In the end we got a Prime. What we got was a near exact copy of what I had run in grad school, down to the terminals and the Belden general purpose 5 conductor signal cable used to connect the terminals at 9600 bps. The system became the world site for TeX on Prime, lasted 15 years, and was a great success. The system was running one year after that faculty meeting. I was made Chair of the college computer committee.
That faculty meeting had been only two weeks after I had arrived on campus. There was one big, huge reason my planning was accepted: I'd been there, done that, and gotten the T-shirt. That is, in contradiction to the article, what mattered was actual, prior accomplishment, not 'potential'.
Why the industrial psychological researchers came to their conclusions I don't know, but I don't believe their conclusions.
From the abstract of the study behind the article:
When people seek to impress others, they often do so by highlighting individual achievements. Despite the intuitive appeal of this strategy, we demonstrate that people often prefer potential rather than achievement when evaluating others. Indeed, compared with references to achievement (e.g., "this person has won an award for his work"), references to potential (e.g., "this person could win an award for his work") appear to stimulate greater interest and processing, which can translate into more favorable reactions. This tendency creates a phenomenon whereby the potential to be good at something can be preferred over actually being good at that very same thing. We document this preference for potential in laboratory and field experiments, using targets ranging from athletes to comedians to graduate school applicants and measures ranging from salary allocations to online ad clicks to admission decisions.
When human brains come across uncertainty, they tend to pay attention to information more because they want to figure it out, which leads to longer and more in-depth processing. High-potential candidates make us think harder than proven ones do. So long as the information available about the high-potential candidate is favorable, all this extra processing can lead (unconsciously) to an overall more positive view of the candidate (or company). (That part about the information available being favorable is important. In another study, when the candidate was described as having great potential, but there was little evidence to back that up, people liked him far less than the proven achiever.)
In other words, you are given two probability distributions: one wide, one tight. The wide one, you are told, has the potential to contain X. But the tight one, for certain, is centered on X. Now, which do you pick?
If you're in a situation where values of X are nice but values beyond X are gold, then you pick the wide distribution. That's because the tight distribution is centered on "nice" and, being tight, offers almost no hope of straying into "gold" territory.
So maybe it's not a mind trick, after all.
EDIT: fix typos.
Unfortunately, I couldn't find a sharable, non-paywalled draft of the article anywhere.
One of the best ways to get hired is to, like all great American beers, have the great taste of social proof with the less filling property of appearing to be inexpensive. Business people very much want qualified, capable employees, but they want them at the best possible price. Giving off the appearance that you are capable but unproven makes a business person's leverage sense tingle like no tomorrow. This, at least, has been my experience. Your mileage may vary.
I'm not sure that, as far as the general population goes, craving the "next big thing" is a global phenomenon. Those more concerned with getting attention (citizens of wealthier countries) over practicality might be more inclined to care. That would certainly help explain the hipster population plaguing the United States.
"they compared two versions of Facebook ads for a real stand-up comedian. In the first version, critics said "he is the next big thing" and "everybody's talking about him." In the second version, critics said he "could be the next big thing," and that "in a year, everybody could be talking about him." The ad that focused on his potential got significantly more clicks and likes."
1) The kind of person you want to hire is either too expensive or unattainable. Let's assume this person's output equals 100%.
2) The person you can afford to hire probably grades out at 50% to 75% output of the ideal hire.
3) The minimum output you need to justify paying another person is 40%.
I'm just throwing numbers out there, but this is directionally the situation you're dealing with (especially in a hot market). Some would say #1 is a '10x' player and the actual gap between #1 (what you want) and #2 (what you can afford) is much, much greater than what I lay out above.
If that's the case, you can begin to see why hiring managers aren't opposed to hiring #3's (or training someone up to #3 output) who have #1 potential over proven #2's with (perceived) limited upside. Of course, the process of identifying candidates with #1 potential is a separate matter.
Ideally you can hire #2's with #1 upside but it's hard to get people like that since more often than not their current employer makes a big counter-offer and/or promotion to keep them. Consequently, you end up in a situation where you can hire someone who is 1) unproven with upside or 2) proven with limited upside.
To the editor who OK'd the book title ("Succeed: How We Can Reach Our Goals"): did you not read the "Nine Things Successful People Do Differently"? Or do you believe that a total lack of originality is the way to reach your goals?
Not that using squishier measures is wrong. It just is. Ergo, I don't see how this study means anything in the real world.
There's an entire section devoted to new, or future Page 1 articles.
The inclusion of the link must mean that there was either a demand for it or a belief that readers would want to know about the articles with future potential to make the first page. Anecdotal at best, but it seems to support the idea that there is some innate bias toward the next new thing.
The lack of an index on the junction table definitely did have a major effect. By just doing the following:
CREATE INDEX ON movie_person (movie_id);
CREATE INDEX ON movie_person (person_id);
SELECT pg_size_pretty(pg_total_relation_size('movie_person'));
=> 45MB
I plan on adding an erratum to my article explaining my error, the time/memory trade-off, and ideas for further improvement or exploration, potentially including bloom-based GiST indexes and the opportunities for parallelization.
I think the reason that OP's join-based queries are slow is that there are no indexes over his junction table's foreign keys:
CREATE TABLE movies_people (
    movie_id INTEGER REFERENCES movie,
    person_id INTEGER REFERENCES person
);
EXPLAIN ANALYZE SELECT * FROM movies_for_people_junction WHERE person_id = 160;

Hash Join  (cost=282.37..10401.08 rows=97 width=33) (actual time=7.440..64.843 rows=9 loops=1)
  Hash Cond: (movie_person.movie_id = movie.id)
  ->  Seq Scan on movie_person  (cost=0.00..10117.01 rows=97 width=8) (actual time=2.540..59.933 rows=9 loops=1)
        Filter: (person_id = 160)
  ->  Hash  (cost=233.83..233.83 rows=3883 width=29) (actual time=4.884..4.884 rows=3883 loops=1)
        Buckets: 1024  Batches: 1  Memory Usage: 233kB
        ->  Seq Scan on movie  (cost=0.00..233.83 rows=3883 width=29) (actual time=0.010..2.610 rows=3883 loops=1)
Total runtime: 64.887 ms
(EDITED TO ADD: I totally misread the timings in milliseconds for timings in seconds, so most of what I originally wrote is off by a factor of 1000. I'm leaving it here for entertainment value and because you might want to play with the data set in SQLite. But my point is still valid: a vanilla join is comparable in performance to the OP's bloom-filter method.)
I'm having a hard time believing that the straightforward join on a data set as small as the OP's sample is really going to take 65 seconds on PostgreSQL. Maybe that's what EXPLAIN predicts (with spotty stats, I'd wager), but EXPLAIN is not a reliable way to measure performance. For this data, I'd expect real queries to perform much better.
EDITED TO ADD: The OP's article shows the results for EXPLAIN ANALYZE, which ought to have performed the queries. So I'm not sure why the results are so slow.
Heck, even SQLite, when processing a superset of the data set on my 4-year-old computer, can do the OP's final query (and return additional ratings data) almost instantly:
$ time sqlite3 ratings.db '
  select * from users
  natural join ratings
  natural join movies
  where user_id = 160
' > /dev/null

real    0m0.006s
user    0m0.002s
sys     0m0.004s
$ wget http://www.grouplens.org/system/files/ml-1m.zip
$ unzip ml-1m.zip
$ cd ml-1m
$ sqlite3 ratings.db <<EOF
CREATE TABLE movies (
    movie_id INTEGER PRIMARY KEY NOT NULL
  , title TEXT NOT NULL
  , genres TEXT NOT NULL
);
CREATE TABLE users (
    user_id INTEGER PRIMARY KEY NOT NULL
  , gender TEXT NOT NULL
  , age TEXT NOT NULL
  , occupation TEXT NOT NULL
  , zipcode TEXT NOT NULL
);
CREATE TABLE ratings (
    user_id INTEGER REFERENCES users(user_id)
  , movie_id INTEGER REFERENCES movies(movie_id)
  , rating INTEGER NOT NULL
  , timestamp INTEGER NOT NULL
  , PRIMARY KEY (user_id, movie_id)
);
.separator ::
.import movies.dat movies
.import users.dat users
.import ratings.dat ratings
EOF
$ time sqlite3 ratings.db '
  select count(*) from users
  natural join ratings
  natural join movies
'
1000209

real    0m0.953s
user    0m0.925s
sys     0m0.021s
$ time sqlite3 ratings.db '
  select * from users
  natural join ratings
  natural join movies
' > /dev/null

real    0m5.586s
user    0m5.497s
sys     0m0.059s
> and the upper bound on the time taken to join all three tables will be the square of that
These kinds of from-principle assertions about what Postgres's (or other DBs') performance will be like sound helpful but usually aren't. The kinds of queries you issue can change everything. Indexing can change everything. Postgres's configuration can change everything. Actual size of the table can change everything. For example, if the table is small, Postgres will keep it in memory and your plans will have scary looking but actually innocent sequential scans, which I think actually happened in his join table example.
Anyway, it's good to have a lot of tools in your toolbox, and this is an interesting tool with interesting uses. I just think it would be a grave error to take the performance ratios here as fixed.
Any recommended book or set of articles for starting with Postgres?
Is there any chance that the bloom could be used as a short-circuit filter but still follow-up with the m2m join to filter out the false positives? If the query optimizer can take advantage of that, then you could likely balance the size and cost of the bloom field.
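One way that might look (a sketch only -- person_bloom() is a hypothetical helper that builds the same-width bit string for a single person, and the column/table names are borrowed from the article):

-- Use the bloom column as a cheap pre-filter, then recheck against the
-- junction table so false positives are dropped before they reach the client.
SELECT m.*
FROM movie m
WHERE (m.person_filter & person_bloom(160)) = person_bloom(160)  -- bloom test: fast, may over-match
  AND EXISTS (
        SELECT 1
        FROM movie_person mp
        WHERE mp.movie_id = m.id
          AND mp.person_id = 160                                  -- exact recheck
      );

With an index on movie_person (movie_id, person_id), the EXISTS recheck would only be evaluated for the handful of rows that survive the bloom test.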
For the join table, that's 2 integers * 575,281 ratings * 4 bytes = 4,602,248 bytes used in the join table.
With the filter, in each movie row, you need to store 1632 bits for the person_filter and 1048 bits for the hash, so 3,883 movies * (1632 bits + 1048 bits) = 1,300,805 bytes.
In each user row you need to store the same number of bits for the filter and hash, so 6,040 users * (1632 bits + 1048 bits) = 2,023,400 bytes.
Is my math here wrong? With this approach you save about 1.22MB, or about 27% over the join table approach (ignoring how much overhead there is for each row of the table and each page to store the table in).
Depending on the dataset it doesn't seem like the space savings would be worth the sacrifice in accuracy.
For the bloom filter: (actual time=0.033..2.546 rows=430 loops=1)
And for the join: (actual time=7.440..64.843 rows=9 loops=1)
So the join returned 9 movies for person_id=160, while the bloom filter returned 430.
I understand it's a probabilistic model, but that's a pretty whopping difference in data. Have I missed something?
The Postgres query planner can also recheck constraints automatically to recover from bloom filter false positive matches at query time.
FYI -- bloom filters are already used internally within the PostgreSQL intarray contrib module and the full-text search functionality.
EDIT: for clarity, typo correction
Because you can't index that bloom column, it seems you'd always be doing full table scans.
In fact it doesn't appear any indexes were used throughout this whole exercise, is that right?
This fact makes it unusable for many use-cases, but it's an interesting and good article nevertheless.
I see the table `movies_people` uses (SIGNED) INTEGERs as datatypes, but they reference an UNSIGNED BIGINT (SERIAL).