Right now there's a variety of versions of OpenGL out there, and they are incompatible in subtle ways. To write "portable" graphics code, you need both compile-time and runtime version checks for a variety of features. Some restrictions in mobile versions make sense because of hardware requirements; others are just plain ridiculous.
And the versions of OpenGL that vendors ship are very diverse. For years, Nvidia and AMD have been the only ones to provide (at least almost) the latest version of OpenGL (and only on Windows and Linux, not Mac). Other vendors are lagging behind by several years.
I won't even start listing the obvious problems with the OpenGL API. Everyone who is working with it knows that the API is ridiculous.
I'd like to see an API clean-up (i.e. a rewrite from scratch), a common shader compiler frontend, a common shader binary format, and common tooling like benchmarking and profiling tools. Perhaps even a software-emulated "gold standard" implementation.
At the moment, writing practical OpenGL applications is miserable. It's quite alright as long as you're working on a small project for your own enjoyment but once you have to start dealing with a variety of driver bugs from different vendors and whatnot, it takes a lot of time to actually ship an application.
The shape, size, and adsorbing nature of the fibers also appear to be critically important. Recently, doubts have arisen concerning the safety of commercially available carbon nanotubes, which may possess the same carcinogenicity as asbestos fibers because of their similar characteristics. Ample care has to be taken to prevent a tragedy similar to the one caused by asbestos exposure.
 http://www.med.nagoya-u.ac.jp/medlib/nagoya_j_med_sci/7112/p... [PDF]
On making nanotubes less dangerous: Shorter is better http://onlinelibrary.wiley.com/doi/10.1002/anie.201207664/ab...
E.g. former F1 driver Mika Salo underwent surgery where his lungs were examined to assess the effects of repeated exposure to burned carbon fiber dust (this was several years ago). Unfortunately, I do not have any links to sources nor do I know the results of the research.
Another big question mark is the health and environmental effects of graphene. There is a lot of research going on into applications of graphene, but only now are there research projects into its possible negative effects on the environment.
Seems like the concern was correct, but misplaced. We had a plane catch fire on a taxiway and burn, and no electrical mayhem resulted. Perhaps we should have worn our gas masks -- but we didn't know at the time.
The epoxy is pretty nasty too. If you don't wear the right personal protective equipment when using it, you can quickly become sensitized to it. I don't know what that means medically, but it can't be good. I have always worn the right gear when building things, but it is clearly harmful without proper ventilation and separation from your skin.
I love composites, but they definitely have downsides.
I didn't think about the collateral damage it could cause to others, which is to bombard your friends' feeds. This is also interesting because, well, you can destroy Facebook in this manner. If enough people are peeing in the pool, people are going to get out.
Maybe I should write a Greasemonkey script to accomplish just that. Not that I want to destroy Facebook, but if I wanted to destroy my data, this would be the way, because Facebook doesn't give me that option.
Also does everyone else think it's creepy when your friends stop using facebook and old "likes" pop up on your feed to make it look like that user hasn't left?
>Also, as I went to bed, I remember thinking "Ah, crap. I have to like something about Gaza," as I hit the Like button on a post with a pro-Israel message.
>By the next morning, the items in my News Feed had moved very, very far to the right.
>Rachel Maddow, Raw Story, Mother Jones, Daily Kos and all sorts of other leftie stuff was interspersed with items that are so far to the right I'm nearly afraid to like them for fear of ending up on some sort of watch list.
>While I expected that what I saw might change, what I never expected was the impact my behavior would have on my friends' feeds.
This article captures so much modern anxiety in a nutshell: the pervasive surveillance society, and our behaviour being shaped by algorithms.
What this really highlights, to me, is the extent to which Facebook exerts editorial control over the news that you're subjected to. This has all sorts of other effects on how media dollars are spent and as a result the shape of discourse - I'm immediately reminded of http://www.slate.com/blogs/moneybox/2014/05/22/facebook_s_mi...
This is not to say that there haven't _always_ been pernicious incentives at work; but before, you could at least question those incentives and motivations, instead of shrugging and pointing to an unexplainable, mysteriously biased support vector machine (et al.) pulling the strings.
I wish there were more discussion venues where the quality of your participation was judged by its value to the discussion, not by whether or not you supported what that person said.
As it stands, the title makes it sound as if someone tried Facebook for two days and was happy with what they saw. (I know this is the original title, but that doesn't make it correct.)
Or in this case like/post.
Full disclosure - I closed my account 2 years ago. Did not miss it even for 10 seconds.
Obviously, I attribute this to HN load, but in the real world you'd probably like to get some notice that the SPA is attempting to do "something."
I hope we see this technology actually become readily available. There might still be work to be done, but in general if they can reproduce the demo videos with other content then they're on to something people would want.
Also, you probably saw this over the past week: http://jtsingh.com/index.php?route=information/information&i... (disregarding the politics of that). Whatever he's doing (I assume a lot of manual work), it has a very similar effect, and it has these beautiful transitions between speeds.
Amazing work and the videos are stunning.
The hyperlapse of the climbing video looks like an FPS game from a decade ago with texture refreshing as you get closer.
That being said, it really does look amazing!
I see they have a Windows app listed as coming. Is that a Windows desktop app?
I make a lot of 4K hyperlapse movies; it is tedious, as After Effects' warp stabilizer is useful in only a small fraction of cases, Deshaker is more consistent but also not perfect, and the only option in the end is multi-pass manual tracking and stabilizing, which is very time-consuming and tricky for long panning shots.
I'm curious to see what happens if they insert more action-packed footage. An MTB course with trees, switchbacks, and jumps would be an interesting stress test of this technique.
now to implement it open source ;)
The technical video breaks down some of the techniques they used. Global match graph is particularly interesting. This technique alone could lead to a big improvement in timelapses, by trying to select consistent changes between frames.
http://cg.cs.uni-bonn.de/aigaion2root/attachments/FastSimila... <- maybe this?
I'm also curious if anyone else got motion sickness while watching the video.
Also, I will pay $$$ for this to use with my motorcycle footage from GoPros.
One of the by-products of this algorithm is a fully textured 3D model representing the filmed environment. Offering that as a pure data dump, or even as a manual process allowing the user to control the camera, would be as valuable as the fully automatic one-off timelapse no one ever watches (except maybe your granny).
What sounds better - a video tour of a house, or a 3D model of a house you can traverse however you like?
I wonder if three-letter agencies have better structure-from-motion implementations a la "Enemy of the State" (isn't it sad that this film turned out to be a documentary?). I suspect something like a 3D reconstruction of the Boston Marathon (the FBI did collect all video footage of the event) would have been very helpful to the investigation.
I would guess that I could upload a shaky video to YouTube to get it smoothed out, download it, speed it up to a rate similar to theirs, and get similar results. The timelapse they show that looks so much worse uses far fewer frames of the raw footage (every 10th frame?) and goes much faster than their "hyperlapse". It isn't a fair comparison.
Intuitively I would have assumed that this would be really difficult to do, but the tools OSM provides for editing are actually quite easy to use even for people with no clue like myself.
You don't even need accurate GPS hardware or anything. Knowledge of your surroundings combined with the (blurry, but available nonetheless) satellite imagery might already be enough for you to do real good.
In my case, I've added building numbers in my neighborhood, marked one-way roads, added a few gravel footpaths where they were missing and I knew they were there because I walk on them on my commute, and so on. All this required neither hardware nor actual knowledge of map making (also: the changes are still in, unaltered, so I assume they weren't all bad).
Using a cheap GPS tracker and a bike, I furthermore added a few small lakes and a small creek close to where I live. The GPS tracker was very helpful, as the satellite imagery was (understandably) just showing forest, but biking around the lake a few times really helped me get the measurements right.
All this was both a lot of fun and absolutely trivial to do. I highly recommend that you give this a shot on your end. I'm a person with zero experience in map making and yet I could easily contribute my part and I had an absolute blast doing it.
Also, if you are good with directions, this isn't limited to the places you live now - I've also added a lot of detail to the map around the place where I went to elementary school (yes, the environment has changed a bit, but that was a great opportunity to visit the place again).
Contributing to OSM is a very pleasant and fun experience.
It was giddying to discover that Bermuda had great satellite images but zero mapping, and then a few hours later you'd literally put a well-known place like Bermuda "on the map" by drawing in most of the main roads, the airport, and so on. Great to see that others have built on this with detailed information.
Very fond memories.
Aesthetics are a huge selling point, especially for business use, and I can't help but think that it's the factor holding OSM back from prime time.
OSM even includes horse trails and bicycling routes that I can't find in any other map available online. Not even local, official maps.
The weakness in OSM still is that it's difficult to tell whether or not the maps are of good quality and up to date for a particular region.
While this demonstrates the huge progress of OSM, it is important to keep in mind that the hardest (and not so rewarding) part of the work is not to create the maps, but to keep them up to date!
That's why it is important to keep supporting OSM.
For a long time, North Korea was shown as a featureless area in google maps.
What is the best way to use OSM on mobile, specifically iPhone? I remember trying some apps before but I didn't like them. Is there anything as polished as the Apple/Google apps?
My company stuck with Borland until 4.5. We parted ways when the compiler generated code that GPF'ed when calling 'new' in a DLL, or declared undefined some global variables that had compiled successfully a few modules earlier. Also, Borland couldn't step through 32-bit code in debug mode; Visual Studio 6 could.
Nice job, Fabien!
N.B. you have a typo
Z:/> mount c ~/systen/c
Argh. It really bugs me when newspapers can't do simple arithmetic. Especially in the lead paragraph.
India is about 17-18% of the world's population. It has a slightly above average fertility rate, so it will have disproportionately more children. So about 20% of the world's children are Indian.
If 40% of Indian children are stunted, then at least 8% of children worldwide must be stunted (not even counting Africa, China, etc.). 8% of children cannot all be in the bottom 2-3% of the world's height distribution. That is not how statistics works.
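The back-of-envelope check can be sketched like this (the 20% share is the estimate from the previous paragraph, the 40% figure is from the article):

```python
# Rough sanity check of the claim, using the figures above.
india_share_of_world_children = 0.20   # estimated from population + fertility
stunted_fraction_in_india = 0.40       # figure quoted by the article

# Lower bound on the worldwide stunted fraction, counting India alone:
worldwide_lower_bound = india_share_of_world_children * stunted_fraction_in_india
print(round(worldwide_lower_bound, 2))  # 0.08 -> at least 8% of children worldwide

# 8% of children cannot all fit into the bottom 2-3% of a height distribution.
assert worldwide_lower_bound > 0.03
```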
If an error this obvious was put in the lead paragraph, how can we trust that the other facts are accurate?
This explanation has always been unsatisfying for me. At least it seems incomplete. The system is obviously unethical, but I don't understand how the economics work here.
If only the eldest son inherits, why are the following sons valued? Given that the practice creates a surplus of unmarried men, shouldn't unmarried women become an asset instead of a liability? Shouldn't unmarried men be a huge force against the dowry system?
I am the first-born. My mother has told me that she would count the amount of protein she got each day during her pregnancy with me. If she didn't get enough, she would eat more or drink a large glass of milk to supplement her diet. She didn't drink soda either.
With my younger brother, she tried to make sure she ate enough protein, though she gave up on not drinking soda.
With the youngest, my sister, she was working part time during her pregnancy and wasn't able to watch what she ate like with either me or my brother.
All this being said, we were nowhere near malnutrition. There is a considerable size difference between me and my siblings. I am 6'4" and have a considerably larger bone structure than my siblings. (I'm not fat/obese/heavy. I only weigh 193 lbs.)
My brother is just 6' and considerably smaller than I am. My sister is smaller than my brother and has a similar bone structure.
Might there be a biological imperative to ensure that the first-born is healthy? Then with later children, parents lose the drive to ensure their children are as healthy.
I, for one, think the article is commendable in bringing new insight to problems facing us in India.
I'm not saying everything is hunky-dory in India. I'm saying that if someone thinks the fix is "Oh just apply Americanism in this particular aspect" then that someone is naive.
I think their treatment of the subject is more 'modern'. Classical signal processing is the stuff that you will find in Orfanidis's book in the OP and other classics such as Lyons, Oppenheim/Schafer, etc. Modern signal processing involves more harmonic analysis. There has been a lot of work since the late 80s in the areas of wavelets, dictionary learning, etc., which you won't find in 'classical' texts on signal processing. In some universities these topics are taught in 'advanced' signal processing courses, at honors or graduate level. I hesitate to call this kind 'advanced' signal processing, because I feel you need the same kind of prerequisites for 'classical' and 'modern' signal processing: linear algebra, Fourier analysis, basic probability, 'random processes', etc. In fact, I think 'modern' signal processing taught at the undergrad level also has the added benefit of being a gentle, application-oriented introduction to real analysis for EE students.
As a practical example, our preferred HA solution for MySQL replication has effectively no network partition safety - if a network becomes partitioned, we'll end up with split brain. However, we have not once had to deal with this specific problem in our years of operation on hundreds of servers.
That said, do make the assumption that your AWS instances will be unable to reach each other for 10+ seconds on a frequent basis. Your life will be happier if you've already planned for that.
* Network partition tolerance can be designed around, assuming infinite time and money
* Network partition tolerance depends on the application
* Mitigating potential failure requires having a very long view on very fine details
* Most organizations will not be able to engineer solutions to address all network partition-related outages
Automated reverse engineering (of DRM) - https://news.ycombinator.com/item?id=7989490
Open-source debugger for Windows - https://news.ycombinator.com/item?id=8092273
You brought up a couple of neat concepts that I wasn't aware of, especially "UISystemAnimation.Delete".
I've had some feedback that I've made a bit of a mess with casting when trying to calculate a random number - would appreciate any best practices or thoughts on that...
* probably never, but worth mentioning
The instructions BOOLAND and BOOLOR don't interpret stack values the same way IF, VERIFY, etc. do. They decode the top stack values as integers and compare against zero, so they have to fail when the top stack item is larger than 4 bytes.
Edit: littleEndian.decode also doesn't seem to respect the size limits
Edit 2: ... or signed integers, for that matter. So while this is a very cool basic concept, it's not a complete implementation.
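To make the distinction concrete, here is my own sketch (not the reference client's code) of the two interpretations being contrasted: script numbers are little-endian sign-magnitude values capped at 4 bytes for numeric opcodes, while IF/VERIFY only test truthiness of the raw bytes.

```python
def decode_script_num(data: bytes, max_size: int = 4) -> int:
    """Decode a Bitcoin-script number: little-endian, sign-magnitude.

    Numeric opcodes like BOOLAND/BOOLOR must reject operands longer
    than max_size bytes; IF/VERIFY never decode a number at all.
    """
    if len(data) > max_size:
        raise ValueError("script number overflow")
    if not data:
        return 0
    result = int.from_bytes(data, "little")
    # The most significant bit of the last byte is the sign bit.
    if data[-1] & 0x80:
        result &= ~(0x80 << (8 * (len(data) - 1)))
        result = -result
    return result

def script_bool(data: bytes) -> bool:
    """Truthiness as IF/VERIFY see it: any nonzero byte counts,
    except 'negative zero' (only a sign bit set) is false."""
    for i, b in enumerate(data):
        if b != 0:
            return not (i == len(data) - 1 and b == 0x80)
    return False

print(decode_script_num(b"\x81"))      # -1 (sign-magnitude)
print(script_bool(b"\x80"))            # False (negative zero)
```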
The reference client provides test suites
I don't believe I have ever said "I feel depressed"
Would this be flagged? Does it only take one post to be flagged or is it looking for recurring behaviour?
If there is anything that would drive me to suicide, it would be more people thinking that they can 'solve the problem' in this manner.
> The goal of the CheckUp project is to detect any serious sign of depression, self-harm or suicide posted to a social network and provide peer support by notifying a concerned party.
> The app works by checking the tweets on your home timeline every few minutes and sending you an email notification if a tweet is flagged.
Shouldn't it instead make the person signing up the "concerned party" to be notified via e-mail, and instead have that concerned party specify which twitter feeds to watch? I'm probably missing something here.
> This application is temporarily over its serving quota. Please try again later.
I can understand that it's an advantage to have something like this renderer-agnostic, and embeddable anywhere in your render flow, but the way you have to use this completely defeats its purpose IMO. I would love a higher-level abstraction layer on top of this that deals with all the render state setup, so I only have to setup the UI itself and populate it, and can throw in a one-liner somewhere in my render function that draws it. In this case, I would happily trade the fact that this would break render-independence and would clobber render state, for ease of integration.
That said, the end result looks immensely useful and very-well done, it's just too involved to setup and use I think...
Just curious ...
The low number of commits and vague name might be bad signs, but check the repo description: developed at Media Molecule and used internally for a title they've shipped, with feedback from domain experts like Casey Muratori. Those two points alone make me pretty excited to try this out.
And a brief explanation of why this matters: IMGUI frameworks are increasingly popular for editors, debugging tools, etc., because they eliminate the need for state synchronization between your 'engine' and the visualization/editing UI. You also avoid the need to duplicate your layout in multiple places - you have one function that is the complete description of your UI. This reduces memory usage and can actually be more performant in many cases, because you can easily ensure that you're only ever running code to update visible UI. Things like virtualized infinite scrolling become quite trivial.
Among other places, IMGUI techniques are used aggressively in Unity's editor.
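For readers unfamiliar with the pattern, a toy sketch (my own illustration, not Media Molecule's or Unity's API): the UI is re-declared every frame as plain function calls over application state, so there is no retained widget tree to keep in sync.

```python
class ToyIMGUI:
    """Toy immediate-mode GUI core: no retained widgets, just per-frame calls."""
    def __init__(self):
        self.clicked_id = None   # input gathered before the frame starts

    def begin_frame(self, clicked_id=None):
        self.clicked_id = clicked_id

    def button(self, widget_id: str, label: str) -> bool:
        # A real implementation would also emit draw commands for `label`.
        return self.clicked_id == widget_id

# The whole UI is one function of application state, run every frame:
def app_ui(ui: ToyIMGUI, state: dict):
    if ui.button("inc", f"count = {state['count']}"):
        state["count"] += 1   # no callback, no synchronization step

state = {"count": 0}
ui = ToyIMGUI()
ui.begin_frame(clicked_id="inc")  # simulate a click on this frame
app_ui(ui, state)
print(state["count"])  # 1
```

The point of the sketch is that `app_ui` is simultaneously the layout, the event handling, and the complete description of the UI; nothing else holds widget state that could drift out of sync.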
ImgUI is a very nice engineering feat, but it's not terribly practical.
Now imagine this: you worked your ass off for a total of nine years to get your master's and PhD, you spent countless hours honing your skills, and finally you get a dream job that merges awesome research, top-level engineering, and working on a product that millions of people use every day. And then it all goes tits up because of a lottery.
A. Fucking. Lottery.
The problem for people like me is that, if I want to work in the US, I have to go through a lottery that will keep me in uncertainty for months. Then, _maybe_, if I'm lucky enough, I can relocate, only to have the same thing looming a few years down the line. I cannot build a life around such uncertainty. I cannot build a future like that.
So, I'm staying in Europe. Not having been born in the US means that a lot of jobs are just not open to me, not because it's impossible to get them, but simply because the hassle is just not worth it anymore.
It's unfortunate that our government is becoming increasingly dysfunctional at every level -- federal, state, and local governments are all failing to do their jobs. I'm glad that Canada is introducing some competitive pressure.
China has been allowing domestic yuan conversion to western mortgages, boosting real estate demand in Vancouver, NY, SF & elsewhere, http://www.vancouversun.com/business/Secret+path+revealed+Ch...
I totally agree that the messed up US immigration system has benefited us here. I have experience with both immigration systems and the process in Canada was much easier than the US.
Why? Well, ask some of the 10million+ undocumented immigrants. They can live a better life in the US compared to their home countries, even undocumented. Tech workers or even middle-skilled workers take more issue with being "illegal" or "undocumented".
I think all immigration restrictions have to be put on the stand. Are they practical? Do they have a measurable benefit (beyond some hand-waving about supply and demand)? Do they have measurable negative impacts?
The situation around H-1B visas is messed up. But the mess is multi-sided. First, there is a large amount of unemployment in this country's IT sector (especially among older workers) and no apparent desire to fix it. Second, there are "sweat shops"/consulting bodies milking the system to underpay for what is often basic/menial work like CRUD development. Third, you have the specialists at the whim of (2) and of the number of allotted slots.
I think both (1) and (3) need to be fixed at the expense of (2). Any company bringing over labor that is basic and can either be filled by outsourcing contracts or simply training local labor should pay the price. Tata, IBM, Wipro, etc are at the top of that list.
The current system has few filters.
Edits: typos from posting using phone.
Charge for the H1 but have no quotas
Let market demand and ability to pay tell how many H1s come in
But, with a twist: the home country has to have a similar program for US workers wanting to work in that country, so that the economies of both countries have a chance to rise due to the flexibility of worker immigration.
I am a foreigner with a PhD in applied NLP from a US university and I have been looking at such positions in other tech hubs like Vancouver, Montreal, Berlin, etc.
But these types of jobs only seem to be in the Valley. I work in the Valley and I like it here but I want to move to a place where I can have a stable immigration situation.
US immigration may be cumbersome but the most interesting jobs in Big Data seem to be in the US.
It's good to see that things are finally changing, even if for some companies it seems they're setting up purely for immigration purposes. Hopefully, even if these US immigration issues pass, these companies will realize the advantages Vancouver offers and continue to stick around.
* Highest rated North American city and 15th overall according to Monocle Magazine's 2014 Quality of Life city rankings.
Perhaps the warmest major Canadian city, with mild winters thanks to its coastal location ('twenty degrees below zero' - probably not).
To anyone of Indian nationality seeking a job in North America and interested in permanent residency: strongly consider the Canadian alternative, for education as well as jobs. Unless the weather means a lot to you, ask your company if they have a Canada office where you can work when you start out. Things will be way easier.
- B.C.'s tech industry has many exploitative employers.
- B.C.'s tech market is underpaid, below the national average.
- B.C.'s cost of living on average exceeds the average salary.
- B.C.'s high real estate costs leave much of the population house-poor.
- B.C.'s political parties put far more weight on labor rights for nurses and people cutting down trees.
- B.C. has a problem of bleeding talent to other provinces because of the above reasons.
While $2 billion is obviously a lot of money in real terms, think about what that is when you compare it to the collective revenues of Bay Area tech companies.
I'm trying to work as hard as I can to start my own startup and open a second location.
They are lost.
Most of the training programs are just like any well-made on-the-job training. The comment about employees signing up for job-specific courses is misleading; while they had some optional courses, it was just like any online course management system - a simple way to manage initial and ongoing job training.
In Apple's defense, their on-the-job training was still better than at any subsequent company I have worked for.
1. The program was established in 2008, which I think is relatively late and coincides with the rapid deterioration of Jobs's health. I guess Jobs was deeply concerned about the long-term prospects of Apple.
2. As novel as the idea may seem, it doesn't sound too different from the garden-variety internal training programs offered at larger Fortune companies.
3. Sounds like Apple HR is growing. I'd be interested to find out what percent of employees fall under HR, before and after 2008.
It should be noted that Jobs was never big on performance metrics.
http://www.pixar.com/about/Our-Story and page over to 1986:
"Steve Jobs purchases the Computer Graphics Division from George Lucas and establishes an independent company to be christened "Pixar." At this time about 44 people are employed."
To weigh in on the mention of using doubles for finance....
Using double for finance is perfectly fine. All trading systems I've seen use double, from HFT systems to deep-learning AI systems that open and close positions over months. Double is fine for most algo trading; heck, the exchanges and dark pools I've talked with use double.
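A quick way to see why double is usually adequate here (my own sketch, not from the article): a double carries roughly 15-16 significant decimal digits, far more than a quote with 4-5 decimal places needs, though exact decimal accounting is a different story.

```python
# An FX-style quote times a large position size:
price = 1.2345
qty = 1_000_000
notional = price * qty

# The relative rounding error from binary floating point is tiny
# compared to anything a trading system cares about:
rel_err = abs(notional - 1_234_500) / 1_234_500
print(rel_err < 1e-12)  # True

# But doubles are still binary fractions, so exact decimal identities
# can fail - which is why ledgers and accounting prefer decimal types:
print(0.1 + 0.2 == 0.3)  # False
```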
What you can possibly do with this:
1) Learn how to write the logic of a trading algorithm.
2) Learn the basics of technical trading, with MACD, Keltner channels, vortex, and Bollinger band indicators. They've definitely put the time into getting the indicators that FX traders like to use.
What you can't do with this system:
1) React to currency fluctuations on a tick-by-tick basis. FX is just so fast and precise; there is a reason professional FX traders mix FX spot quotes from multiple sources - we use 4 at the fund I work at, and some use up to 10 sources. There is also a reason why FX is quoted to 4 decimal places while equities are quoted to 2, sometimes 3 for penny stocks.
Plus they use FXCM, which had this news out about them:
> LONDON - Foreign-exchange trading firm FXCM Inc. agreed to pay fines and refunds totaling almost £10 million ($16.7 million) to settle allegations by a U.K. financial regulator that the company withheld profits from clients and failed to inform British authorities that it was under investigation in the U.S.
> The Financial Conduct Authority said that U.K. units of FXCM withheld £6 million from customers on foreign-exchange transactions between August 2006 and December 2010. The regulator said the broker pocketed profits when exchange rates moved in its customers' favor while a trade was in process, but it passed on losses that occurred on other trades.
I don't know of a single firm that's successful in the time horizon that a lot of pseudo-HFT systems operate in. (The 10ms-3s range) And that's assuming an ideal fee situation...
Most of the people ignoring the LL arms race target RV opportunities in the 30s+ range. This involves taking a pretty big step back from microstructure/toxicity models.
Trading might be simple but being profitable definitely is not.
Firstly, in 2013 California imported 32.7% of its electricity, and California has little control over how that is generated. Of the power used by California, 40.8% is from natural gas, 8.1% is hydroelectric, 6.0% is nuclear, 4.3% is wind, 4.2% is geothermal, 2.1% is biomass, and 1.4% is solar.
Natural gas and nuclear power are both excellent on-demand sources, and currently meet 46.8% of California's electricity requirements. If these sources are to be phased out, they must be replaced with energy sources that are also on-demand. Wind and solar do not fit this description. Hydro does, but quintupling California's hydroelectric capacity would have a huge impact on the environment. This paper greatly underestimates how much on-demand generation capacity a power grid needs in order to be stable.
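The arithmetic behind that 46.8% figure, using the shares listed above:

```python
# Shares of California's electricity by source (2013 figures from above).
shares = {
    "natural_gas": 40.8,
    "hydro": 8.1,
    "nuclear": 6.0,
    "wind": 4.3,
    "geothermal": 4.2,
    "biomass": 2.1,
    "solar": 1.4,
}

# Natural gas and nuclear are the dispatchable (on-demand) sources here:
on_demand = shares["natural_gas"] + shares["nuclear"]
print(round(on_demand, 1))  # 46.8
```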
Side note: California currently derives little of its electricity from wind or solar power. Electric vehicle batteries carry a high environmental cost to produce, so it is imperative that the energy they are charged with be of renewable origin for any net environmental benefit to be reaped. Given that 40.8% of California's electricity currently comes from natural gas, it's clear that anyone plugging their EVs into California's grid is doing the environment no favors.
Full disclosure: I am not an EPRI employee, but I've read a lot of their papers and presentations. Their research is original and unbiased. Their engineering is pragmatic and chock-full of raw, 100% reality. I wish some of the websites people are citing here were talking to places like EPRI first, instead of writing sensationalist headlines that hide details and misinform, making sane, coordinated discussion difficult.
I suggest Hacker News readers check them out and maybe send some emails to get better information about this proposal and learn more about how the grid truly functions politically, economically, and technically.
To me, this is "Grid 2.0" technology. You are moving energy from times where you have cheap excess and placing it on the grid in times of expensive need. If we are going to move to a grid with a lot of renewables, technologies like CAES and pumped hydro (https://en.wikipedia.org/wiki/Pumped-storage_hydroelectricit...) are two necessary ingredients.
The primary difference between CAES and pumped hydro is that CAES is cost-effective for both medium (50+ MW) and large (500+ MW) installations, while pumped hydro is cost-effective only at large-scale (500+ MW) installations.
I wonder whether it would be possible to extend this concept to get more powerful features, inspired by spreadsheets (and programming languages maybe). Sometimes you need more than a sum or an average, but writing out the formulas in full, repeatedly, seems like a lot of cognitive overhead.
I see in this thread that Emacs org-mode has something like it, but I'm not convinced that stuff like
Again, I'd like to plug http://www.ledger-cli.org. It's similar in spirit, with a reporting program reading a lightly formatted text file.
I wrote a Python script back in 2007 for calculating the expenses of two trips: https://github.com/bernardeli/trip_money_organizer
I'm not a Python developer myself, but I was pretty happy with the result. I know it works fine, and I have used it a few times with no issues.
(Not affiliated, just a happy user. And there's a nice network effect if more people use it, so more people should use it.)
If you had some custom domain mechanism then people would feel safer hosting stuff with you because migrating away would be easier...
> ReferenceError: ga is not defined
Front-end hosting + easiest way to create a back-end api (https://api.blockspring.com.) could be a sweet connection. Would love to chat. email@example.com.
By the way, it's worth noting that you probably won't be able to attach the interactive console to a running web server, as the output is usually handled by the supervisor process - at least I wasn't able to on my first attempt, and the memory dump was good enough for me.
Check this sample to dump the memory out of a running process: http://pyrasite.readthedocs.org/en/latest/Payloads.html#dump...
It's one of those tools that you'll be glad exists when the need arises, but you'll feel a little dirty using it.