This feels a bit link-baity, because it says nothing of how short, uncommonly used crunch modes help or hurt productivity - just how super-long work weeks are eventually more detrimental than helpful.
Edit: I would posit that short bursts of overtime - perhaps a single 60-80 hour week at the ramp-up to a major release - can actually be helpful, if not exciting, when used quite sparingly. Research on that theory would be more interesting to me.
As outlined in the post, that way of working doesn't (or shouldn't) work. Especially when the code produced, or in the case of traders the decisions made, might lose vast amounts of money due to a single mistake.
I'm curious, if anyone here works in that kind of an environment - How does this work? Are these numbers exaggerated? Are you all on stimulants? Do you see the kind of creeping errors and codebase decay one might expect?
There was almost no police/security at the gates or inside the festival, although selling drugs was not tolerated (e.g. people selling on the festival grounds were kindly asked to leave). There were 42,000 people from 152 countries and most of them used some kind of substance or plant there (marijuana being the most abundantly and openly used). As a consequence (or despite this?), this was one of the safest and warmest places I have ever seen.
Instead of police watching everyone, there were a number of facilities: there was a drug info stand, where one could go and test their drugs. The queue was quite long there; people stood 2+ hours in it to test their substances.
Then there was the Kosmic Care, a place where 20+ psychologists, doctors and shamans would bring people having 'bad' trips back to earth. They had 70 'bad' trippers in the first night alone and were expecting a lot more on the full moon night. I spoke to the psychologists there (out of curiosity, not because of a bad trip :) ) and they told me that the majority of bad trips were caused by people taking 'fake' LSD. In fact, she said, 50% of the LSD people tested was not actually LSD but some designer substance with unknown consequences and effects. Other causes of bad trips were people mixing substances or taking too much (usually young, inexperienced people), and people with prior mental illness.
I asked a guy there how one can prevent people from having a bad trip again, and the answer was 'well, after such an experience, most people grow up pretty quickly and it's unlikely they would take these substances lightly the next time'.
In most countries, these young people would end up in a hospital and then get arrested and possibly spend time in jail.
The war on drugs has caused a lot of suffering and has done very little to reduce drug use or addiction, yet it costs billions every year.
Portugal's approach to drugs is a great example of how the negative effects of drug use can be handled with minimal costs and lead to positive outcomes for drug users. All it takes is a bit of acceptance and common sense.
Governments should also support people who want to come off drugs and, while we're at it, release all prisoners who are in specifically for possession/dealing/trafficking.
We really need to give up on this idea of a drug free world.
I think we need to look to Portugal for an example of what can be done and also as a starting point for possibly developing a better model http://www.spiegel.de/international/europe/evaluating-drug-d...
Good stuff. I really wish those in power would more often try a scientific/engineering approach to see what works rather than politicians shouting about war on whatever.
What they need to do is manufacture and sell the drugs at cost to registered addicts. This way you destroy the business of the drug cartels and you ensure your citizens are at least using pure drugs.
Regardless of the legality of the drug's use, ensuring that the drugs your citizens consume are pure is a health issue. The safety of your people should come first, and a government that has taken this long to realize something that basic is simply incompetent.
Prioritizing law enforcement before public safety is a revealing and meaningful sign of incompetence or even corruption.
There is a sane Republican! Hurrah!
Named and outlined better, our "mining tax" could have been this. It could have protected against capital flight and essentially built national strength for all at the expense of those (especially foreign interests) looking to dig up serious swathes of our ground.
Instead, we had a predictable response from the mining magnates and Coalition, an easily duped and panicked public, and a flailing government at the time who named the concept terribly and defended it poorly. And when challenged on the whole "it's barely made any money" front, caved instead of noting that it'd been potentially hampered for political reasons.
What's an easier sell to the public? "Mining tax" or "Future Fund; funded by giant, mostly foreign mining companies."
Look at the Coalition's "$20b" medical research fund. So many people think that's a current $20b fund, rather than a far smaller fund to be built up to $20b over a number of years, and then to fund research only from the earnings of the fund and not from the fund's base value. Labor should have framed the MRRT much more like this and it would've made for a better sell. Avoid the word tax, outline the goal with a specific value range and target date, and name the beneficiaries of the earnings - technology research, medical research, etc.
I can't see how this wasn't a big missed opportunity for Australia.
Unfortunately, that report was classified as "secret" at the time, as it was felt the conclusions would boost support for Scottish independence. It wasn't released until 2005.
I think the magic ingredients (aside from being lucky with natural resources in the first place) are egalitarian society, high level of trust, very low level of corruption, a functioning democratic government and highly skilled fund managers.
Exploration usually happens when the company doing it can hope to make money off of a successful discovery. They might even pay for the privilege. Imagine a scenario where a company finds a motherlode of oil after a low-probability exploration. Their exploration contracts (signed before the deposits were known) guarantee them a huge profit. Unseemly, even. They are earning that because they took a risk. Now they get their 100X. Try explaining that in an election year.
New technologies are constantly being invented that improve exploration, surveying & mining. This means that there are new possibilities every year impacting which mine/well is profitable (i.e., where minerals can be extracted at a profit). These all change the underlying economic realities. A $1bn per year mine can only make a slim profit in years when commodity prices are high. It employs many people. The next year, commodity prices change, or some new mining or processing technique (fracking is a huge gamechanger) means that some complicated contract is now worth a whole lot of money. The $1bn goes from a 2% margin ($20m) to a 30% margin ($300m) and everyone wants a piece of it.
Meanwhile government departments, unions, armies, etc are salivating over the prospect of this wealth. The National University's long impoverished Oceanography faculty has their eye on a fleet of research vessels. Academics are starting companies offering to do (mandated) environmental impact studies. There is a lot of pressure for money now, from industry, politicians, constituents.
Politicians definitely don't want to be investing during their term for the benefit of politicians 10 years from now. The world can be cynical, but not always.
Norway has done well. I'm not sure if any one thing can be learned from them, beyond: have smart people running things. Meanwhile Norway has its own political traditions, values, and probably pathologies.
This list ranks SWFs by size - notice there are several US states on the list: www.swfinstitute.org/fund-rankings/
Otherwise, man I wish Canada would manage our oil money like Norway. But we burned that bridge a long time ago.
 look it up
As someone who lives in America Lite (the UK) your policies on just about everything strike me as insanely reasonable.
I pretty much feel shame whenever I think about our last two governments.
But now both have started offering TLS - just recently, it seems.
The company that made this is: SecureMix LLC (est. 04/15/2014); aka Free Firewall Antivirus LLC (est. 10/17/2013); aka Blue Quail Capital, LLC (est. 06/21/2010). Here is the corporate registration: https://mycpa.cpa.state.tx.us/coa/servlet/cpa.app.coa.CoaGet.... The person opted to use a CPA (EDWARD H. GOWETT) to register their LLC (looks like a nice guy: https://www.linkedin.com/profile/view?id=34375436). And finally, the man, the myth, the legend: ANTON BONDAR.
The graph visualization is prime, and I love that the peaks are "rounded" out instead of sharp declines (sharp declines would make it look more like a live stock ticker).
Extremely well done, and exactly something I have been looking for. I will keep an eye out for the Mac version.
This app is not a one man show! This app, with all its license stuff, backdoors etc., is all ready to learn a lot about your network traffic going in and out, and you agree to all this when you install it. Now YOU got hacked! Or do you think the app will also show in detail what data they store and share on their servers, with third parties and more?
767 points and counting on HN, amazing...
I would also say that calling home is a huge no-no for this software. I would seriously consider revisiting that choice if I were you.
Surprising, really, that it has taken so long to get an app like this on Windows. I've been using My Data Manager on Android for the previous 2-3 years.
The closest I've gotten on Windows to date is CFosSpeed in traffic shaping = off mode + Process Explorer. There have been other apps that attempted to present the data, however none have done it like GlassWire.
Looking forward to the paid version, this is awesome :)
Got a few rendering issues on Windows 8.1
Hopefully these issues get sorted out, quickly.
This will probably stop some drive-by hacking - great. But my understanding from some well-informed people is that, increasingly, rootkits can hide their network traffic.
So, whilst this will add peace of mind, you'll still need to maintain security - because all this will really do is let you know you've been "hacked" again. Sure, it may prevent the dropper from connecting out - but often that would look like Flash or Java just connecting out to a random host.
As someone who got hacked, and installed NoScript, I'm amazed at the number of hosts that even mainstream websites connect out to. I struggle to stay on top of my whitelists. I just don't think you're going to see the dropper in time and stop it.
By the way, I'm surprised this isn't a default feature of OSes. I always thought knowing exactly what apps are talking to the world, and how much, is something one would like to know about.
Feels like a trap.
At least it's so much better looking than other Windows apps.
Any chance you will support hi-res screens (see http://imgur.com/ztN8cL3)?
The only time I've been aware of getting hacked, my friend handed me his computer and said, "You're a nerd, find me a live pirate stream of the Big Game. Quick, people are coming over!" Friend may be too strong a word, but I gave it a shot even though I thought it hopeless. I went to some sketchy pirate sites, and I clicked on a link. A popup launched, and immediately there was an error; "Shockwave has crashed."
"Do you install updates?"
Another time, my brother was lamenting that he couldn't take pictures with his phone because his SD card broke. I never used mine, so I pulled it out and handed it to him. A few days later I had to get some information immediately and the only device available was my phone. I was on a website and an error popped up; it was to the effect of "Can't download someapp.apk because you don't have an SD card."
Edited to add:
https://incidents.org has good reads.
It would be nice to have more info about how you monitor the connection and prevent any Trojans from going around the monitor point.
What's the overhead of Glasswire? For me it's 2-6% CPU (of my many core systems).
What does the gwdrv.sys kernel driver do exactly? Hook into the TCPIP.sys kernel driver?
Is the "Glasswire control service" an app update service? Blocking it in the "Firewall" tab has no negative side effect so far.
i.e., "spynetus.microsoft.akadns.net" could have some clearly Glasswire edited note that said something like "Used by Windows Defender". You could even add a +1232 Safe/-12 Unsafe that linked to a crowdsourced/forum sourced "what's this" registry. Sort of like reviews on processes or hosts.
Beautiful app. Amazingly designed. Insanely useful with zero configuration. Would love to pay money for this, especially if you can bring this sort of zero click usability to a LAN environment.
This looks way easier and prettier than open-source NIDS and HIDS like Snort and OSSEC, and I think that's why I'm supremely skeptical they hired enough security people versus frontend people.
Some items:

1) It'd be nice to be able to scroll around directly on the graph using mouse gestures (middle-click drag?).

2) Graphing of bandwidth seems to be off somehow. If I do a speedtest.net, my ~104 Mbps transfer shows up on the graph as 38 Mbps and the graph scale shows a max of 20 Mbps. http://imgur.com/QkZMVvj
I am not able to connect to a remote server. I don't know why! This is what I am doing:

1- Allowing server access in the Server tab in Settings on one computer.

2- Trying to connect from another machine using the credentials.
I am not able to connect. Does anyone else face the same issue?
Can it import existing whitelists or blacklists?
If there are competing products (paid or free), a comparison would be helpful.
> GlassWire keeps an up to date list of known suspicious hosts and alerts you if you contact one. Suspicious hosts are often related to botnets, malware, and other malicious behavior
How is this implemented exactly? Does the app phone home? Does it do some sort of RBL check (if so, against which servers)?
On a related note, I recently tested a number of firewalls for Windows using Comodo's HIPS and Firewall Leak Test Suite; the only one I found that passed all tests with virtually no setup or changes was SpyShelter Firewall. Not an endorsement by any means, just an observation.
 http://personalfirewall.comodo.com/cltinfo.html http://www.spyshelter.com/spyshelter-firewall/
Maybe related... I remember when I switched to Linux some years ago, the software I really missed was ZoneAlarm, and I still haven't found a nice alternative (for fast and easy control of the outbound(!)/inbound net traffic). I liked that I could block and unblock the internet access of each application from the systray icon. Any suggestions?
I'm definitely curious to see what the paid features will be...
It was a piece of security software modeled after OpenBSD's pf firewall which let you define policies around network, file, and registry access for applications. You were able to set up really fine-grained policies as well, for example to only allow access to the C:\temp directory for list and read access, but to deny delete access, and to ask the user to accept/reject if it tries to open a file for writing.
So instead of monitoring access after the fact, CoreForce let you actively grant permissions and would either silently deny or interactively prompt you when an application went outside the resources you granted.
Downloaded it just to see if those screenshots were real. Keeping it because it's awesome!
Maybe only visible with a UAC auth.
Also, I hope it has a list of known malware hosts, for which it should give a huge red alert dialog if a connection is made to one.
If you cannot block new connections, it is likely the valuable information on your computer has been siphoned off, or glasswire bypassed before you noticed it on those fancy but useless graphs.
* A pay-once Pro version
* A plugin API so I can add my ISPs usage monitor
* Per-app bandwidth limiting (difficult on Windows I think)
That's what I currently do via NetUse, but this looks quite a bit better.
Could you make it so that when the graph rescales, it doesn't just snap into place, but gradually (say, animated over half a second) resizes?
EDIT: If I have GlassWire on my second monitor, and click "+ 2 more" to see what else is going on, the pop-up opens on my first monitor.
e: After trying it, yep, this is excellent. And far too good to be free. I almost feel guilty using it.
My point being: it's a closed-source project, and by using it you implicitly trust its developers.
Here's my minor feature request (I'm sure you'll get a hundred or so today) - how about a config setting to turn on an automatic virus scan of the executable on first network activity? I imagine this would not be enabled by default for performance reasons, but I'd like to run it this way for a few days before reverting to default settings.
One question, what does "powered by Symantec" mean?
Is this just a sexy UI on top of a Symantec engine?
What usually happens with freeware like this is that it becomes adware or dies. I think you have enough features to charge for it now.
Other than that, I'm gonna say what everyone ELSE is thinking: Security + Microsoft, give me (us) a break. Last time I checked, the word security does NOT exist in Windows.
I'm surprised how THIS made it to the top of HN; probably has something to do with those users who were defending IE's developer tools ;)
Most malware will rip through this like butter.
I would only trust something like this running outside the box believed to be compromised. In the router, for example.
I can't believe the article doesn't mention it.
I've been using NixOS as my OS for development and desktop, and we're in the middle of transitioning to using it for production deployments too.
Nix (the package manager not the distribution) solves so many of the discussed problems. And NixOS (the linux distribution) ties it all together so cleanly.
I keep my own fork of the Nixpkgs repository (which includes everything required to build the entire OS and every package); this is like having your own personal Linux distribution, but with the simplest possible way of merging changes from, or contributing to, upstream.
I use it like I'd use virtualenv. I use it like I'd use chef. I use it like I'd use apt. I use it like I'd use Docker.
There seems to be this fundamental disconnect between the people making languages and how people actually use their languages. I don't have time to follow your Twitter feed, because I'm working on a lot of different things. I know it's important to you, the Language Developer, and so you think it should be important to me, the Language User. But I have dozens of things to keep track of, and all of them imagine that they're the most important thing in my world.
It's like the old office culture mocked in "Office Space" where the guy has 7 different bosses, each imagining their own kingdom is the most important.
Maybe this time we can talk about how to meaningfully solve these problems instead of just fighting pointlessly about whether old tools are so great that they should be used for everything.
Decentralized package management huh?
How would that work?
A way of specifying an ABI for a package instead of a version number? A way to bundle all your dependencies into a local package to depend on, and push changes from that dependency tree automatically to builds off of it, but only manually update the dependency list?
I'm all for it. Someone go build one.
If you're very lucky, the packaging in question will not conflict horribly with apt or yum. So you probably won't be lucky.
"Stable" distributions have an additional downside he doesn't mention: when you upgrade every package all at once it's a LOT more effort than if you had upgraded them slowly over time. Dealing with multiple library changes at once is an order of magnitude more difficult than dealing with them one-at-a-time.
And also, to some extent, if all the libraries you are using have a long term stable API, then it doesn't actually matter which one you pick - anything is painless.
Secondly, it's quite hard if you failed. I learnt a huge amount about running a company, building software, even about who I am, while doing startups, but ultimately the main thing I've demonstrated is that I can put a lot of time and energy into projects that fail. I didn't have the insight to change what needed to change to ensure success, or to walk away earlier to limit my losses. Those aren't great things to show people.
All in all, being self-employed does make it harder to get a job afterwards. If you recognise why though, you can defend yourself against those issues that employers will have.
Sure, occasionally, you might get employers in the valley who (claim they) care a lot about your personal growth, but from my experience outside the tech world, plenty of employers would much rather just have a dull but trustworthy tool who gets the job done without fail than someone incredibly smart, unscrupulous, and motivated. To these people (which I will venture to say is the majority of small-business owners), your ambition is scary, so you're better off not coming off as ambitious.
TL;DR: best way to market self-employment? Don't. Instead market it as regular employment where you had a lot of responsibilities.
For example, here in Canada there's very little risk tolerance, regardless of what people tell you. You see it in the ways companies raise funds, are valued, and even the execution points. Being 'self-employed' can be a hindrance, especially in marketing. On the other side 'Founding X company - building the overall business to over $YY in revenue' is a positive spin on the same result.
On the other hand, discussing a project-based approach looks VERY good. At that point you're a consultant, rather than a contractor or freelancer. Here, that resonates better, in that people go 'ah, well paid expert'. This in turn means that you can pivot the discussion around to project successes, the values you've learned working on multiple projects, etc.
But most importantly, don't underestimate the value of the cover letter - which I used to believe no one reads. If you can explain your passion to join organization X (for some specific reason), then effectively you're priming your resume reader. That helps you positively change the conversation - a brilliant technique from behavioral economics.
On one hand it gave me a lot of first-hand experience with a lot of things, and on the other it shows that I'm not keen on sitting on my hands.
I think self employment only looks bad when it's interspersed with very small periods of employment, from a couple of months to six months. Having long periods of unemployment, followed by short bursts of a couple of months here, three months there could maybe be interpreted as a sign that the candidate has a problem with keeping jobs, and that there are probably good reasons for that.
Having only ever been self employed could also be seen as a bad sign. Having never worked within a company, maybe the candidate has no teamwork skills, cannot work within a hierarchy, cannot keep a fixed schedule, etc.
I can't think of any other situations when self employment would look bad.
When I said self employed, people either heard "unemployed" or "marginal freelance person, barely made ends meet". When I switched to entrepreneur, people heard "successful businessman". The change was uncanny.
Now, my activities didn't change. So the questions to ask are:
1. How did the people complaining about self-employed mean the term? 2. Does that match what you do? 3. How should you brand it so that the person hiring you understands it correctly?
In general, I think it's seen as a positive thing, as long as you're talking about the kind of self-employment we mean here on Hacker News. Fairly lucrative, manage your own schedule, no shortage of clients, but more overhead and uncertainty than a job and a need to focus on non-technical stuff. The latter two points explaining why someone might want a job instead of self-employment.
I thought it was amazing that the government spent so much time discussing the call records being logged.. when they are doing so much worse. Maybe that's how they keep people focusing on what the government wants to talk about? (aka look over here, nevermind that thing over there...)
This find is way worse than call detail records..
I'm surprised such a thing took so long to be revealed. If you've got as much data as the NSA has, wouldn't you want a Google-like search engine to be able to search through it? It makes so much sense, which is why I'm surprised some people are surprised about this.
Seems like a lot until you consider how many indexed pages Google has: http://i.imgur.com/EqIJAoL.jpg
why not throw in grains of sand or atoms in the universe?
Of course they built a search engine. Wouldn't you? Don't you have similar at your workplace? We use them all the time. Think about web interfaces built on top of ElasticSearch, for example. Is that not a 'search engine'?
When folks tell us crazy things, like the government is tracking every place you go and your opinions through your cell phone and social networks, we're supposed to say something like "That's extraordinary. With extraordinary claims, we require extraordinary proof" Then, if they persist, we're supposed to say something like "Such a program would require far too many people to keep a secret. We couldn't even keep the atom bomb a secret. The government is terrible at keeping secrets. Such a claim is just too far-fetched."
These are the traditional things taught to people who are supposed to be clear-headed and rational. It's the way we engage crackpots without taking them too seriously.
These responses seem to have failed us miserably in the current circumstances. As it turns out, yes, that's what they were doing, and yes, it was extraordinary and required lots of people to keep incredible secrets. But it still happened.
These things keep happening in the realm of automated surveillance, both by the government and corporations (and worse, when corps do it and the govt scoops it all up later) that would have been considered completely whacked just ten years ago. The stuff of paranoid fantasies.
Our tools of rational inquiry have failed us.
Imagine the other end of the scale - where in fact every detail about everyone's lives is wide open and available for everyone and anyone to access. Willingly. Freely. A new order of celebrity: total telepathy.
Do you think we'd be dealing with terrorism, then? Would there be the idealist, killing souls, for a little private time?
The Reality Distortion Field appears, and people believe, because they want to believe.
It depresses me, the lack of intelligent discourse.
Most tech people I meet actually believe that the NSA records and stores all telephone calls. It's depressingly stupid, but I have given up arguing, logic and sense are not welcome when the NSA is the topic.
St. Elmo's fire: http://www.pbase.com/flying_dutchman/image/156304671
Northern lights from inside the cockpit: http://www.pbase.com/flying_dutchman/image/155775399
"Earthquake light is an unusual luminous aerial phenomenon that reportedly appears in the sky at or near areas of tectonic stress, seismic activity, or volcanic eruptions."
Considering there were reports of seismic activity in the area around the approximate time of the event, it's possible that ionized air promoted formation of sprites and/or ball lightning.
What about a tide of bioluminescent bacteria or algae? Typically these emit blue light and are known, in the case of bacteria, as the 'milky seas effect'. But algal tides sometimes bioluminesce red or orange. With a high local concentration of nitrogen or another limiting nutrient (which might upwell from the seabed due to seismic activity below), you might get extremely high concentrations leading to the patterns shown in the photograph.
"Rocks That Crackle and Sparkle and Glow: Strange Pre-Earthquake Phenomena"
"A light or glow in the sky sometimes heralds a big earthquake. On 17 January 1995, for example, there were 23 reported sightings in Kobe, Japan, of a white, blue, or orange light extending some 200 meters in the air and spreading 1 to 8 kilometers across the ground. Hours later a 6.9-magnitude earthquake killed more than 5500 people..."
I wish the pilot had indicated exact UTC time the phenom happened. Hard to pinpoint but nevertheless, his position and the quake's position are quite close, even if the two events were hours apart.
I bet he saw this hypersonic vehicle being blown up and the lights from a massive observation fleet.
"An experimental hypersonic weapon developed to reach targets anywhere in the world within an hour has been destroyed by the US military four seconds after its launch for public safety.
The test in Alaska in the early hours of Monday morning was aborted after controllers detected a problem with the system, the Pentagon said, and the launcher is believed to have detonated before the missile was deployed."
"You, Sir, have caught some absolutely breathtaking photos of POSITIVE ET'S AND THEIR CRAFT CLEANING UP THE FUKUSHIMA RADIATION AND SAVING THE PLANET AND IT'S ECOSYSTEM FROM SURE ANNHILATION!...It is QUITE OBVIOUS WHAT THOSE LIGHTS ARE, MY "SILLY WABBITS"!!!"
"Our first suspicion was this has got to be a mistake. There must be something stupid we are doing," said Professor Troy Shinbrot, of Rutgers University, New Jersey.
"We took a tupperware container filled with flour, tipped it back and forth until cracks appeared, and it produced 200 volts of charge.
I could be wrong, but I think it would be almost impossible to capture an 8-second exposure while flying and somehow manage to keep the stars from becoming light trails - at least not without some very serious camera stabilization equipment.
Since the photographer didn't seem to mention anything special used for taking the photos, I'm inclined to say they've been 'shopped.
Some examples here: http://abcnews.go.com/US/northern-california-struck-60-magni...
If you're not familiar, it's a fiction podcast that presents itself as a community announcement hour on the town of Night Vale's public radio station. There was a particular story arc involving a sentient, glowing cloud that descended on town and demanded to be made a part of the city council.
It's free, and it's cute. If you like such things, check it out. http://commonplacebooks.com/
The pattern seems similar.
No, because that would kick all sorts of ass.
- Ignoring the 3D nature of antenna placement, you need to model the concrete walls properly to get an answer that is semi-reliable. All materials have frequency-dependent reflection and transmission (attenuation) coefficients. It's pretty easy to extend a toy FDTD sim to include these.
- For the reasons above, inferring 2.4 GHz behaviour from a ~1 GHz (30 cm) signal isn't really a good thing to do (even in a "hand waving" manner).
- When displaying E-fields, you usually want to plot the ||E||^2 averaged over one complete wave cycle -- the nodes shouldn't jump around. If they do, it means the simulation hasn't reached a steady state.
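For anyone curious what "a toy FDTD sim" with a lossy slab looks like, here's a minimal 1D sketch in Python. All parameters (grid size, loss coefficient, source period) are illustrative, not a calibrated model of concrete - the point is just that adding a frequency-agnostic attenuation term to the field update is a few lines:

```python
import numpy as np

# 1D FDTD on a normalized Yee grid (Courant factor 0.5), with a
# lossy slab at cells 200-220 standing in for a "wall".
size, steps = 400, 1200
ez = np.zeros(size)            # electric field
hy = np.zeros(size)            # magnetic field
loss = np.zeros(size)
loss[200:220] = 0.05           # illustrative per-step damping in the slab

for t in range(steps):
    # H update from the spatial derivative of E
    hy[:-1] += 0.5 * (ez[1:] - ez[:-1])
    # E update from H, multiplicatively damped inside the lossy slab
    ez[1:-1] = (1.0 - loss[1:-1]) * ez[1:-1] + 0.5 * (hy[1:-1] - hy[:-2])
    # Continuous sinusoidal source injected at cell 50
    ez[50] += np.sin(2 * np.pi * t / 40.0)

front = np.abs(ez[100:180]).max()    # field between source and slab
behind = np.abs(ez[260:340]).max()   # field past the slab
print(front, behind)
```

The amplitude behind the slab comes out much lower than in front of it, which is the qualitative effect the parent comment is after; making the loss coefficient frequency-dependent is the natural next step.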
The summary states,
> disclosed a glibc NUL byte off-by-one overwrite into the heap.
> a full exploit (with comments) for a local Linux privilege escalation.
Normally, I wouldn't see how such a bug could lead to privilege escalation. (glibc runs in userspace, after all.) But it is glibc, and glibc is everywhere.
I think the key is in the source code, where they state,
// It actually relies on a pkexec bug: a memory leak when multiple -u arguments are specified.
It seems very much like asking for trouble - I can't offhand think of a good reason why this would be required.
I'm sure there are plenty of programs that have similar memory leaks with commandline args, as many authors might, not unreasonably, think that abuse would be prevented by the shell ARG_MAX, which is 2621440 bytes on many systems. Perhaps some sort of adjustable lower limit might be appropriate here.
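As a back-of-envelope check on that intuition, ARG_MAX really does cap the damage from a hypothetical leak-per-argument bug. A quick Python sketch (the argument string is made up for illustration):

```python
# Hypothetical bound: if each repeated "-u USER" argument leaks one copy,
# total leaked allocations are capped by the kernel's argument-size limit.
ARG_MAX = 2621440          # bytes, the common Linux limit cited above
arg = "-u root"            # hypothetical repeated argument
max_args = ARG_MAX // (len(arg) + 1)   # +1 for the NUL terminator per arg
print(max_args)
```

So the leak is bounded, but a few hundred thousand small allocations may still be plenty for heap grooming, which is presumably why a lower adjustable limit could help.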
Just because they aren't based in the valley and make Apple look positively liberal when it comes to secrecy and working practices doesn't mean they should be ignored. Quite honestly I think they're the single most terrifying company in the US today, an idea Bezos would take as a compliment.
The big picture is they are gunning to become the universal middle men for when people actually spend money on the net. Google only have the ad side of things together, but never really cracked getting end users to open their wallets, yet Amazon are in the position of starting in front of users, and slowly moving themselves into being the background glue between everything else, facilitating transactions between everyone while taking their cut and enforcing their rules. Terrifying, and brilliant.
In general, we want the best article out there on a given topic, where "not being behind a paywall" adds points toward "best".
Amazon or Google will piss off or drive away the Twitch user base. The users will all move to Hitbox.tv or any number of new sites that will pop up. It's easy to do live streaming, it's just expensive. This acquisition will bring funding and Yahoo will buy the next popular live streaming site.
2) As Amazon enters the online advertising space to compete with Google Adwords and Adsense, they'll want to own web properties with high impressions for their display ads. User based video creation is great for that but comes with risk for copyright violations. Twitch solves both these issues as it'll give display ads high impressions without much concern about copyright violations as these will mostly be legit user-originated content.
For now, Amazon is selling physical copies, but seems reasonable to reuse their content delivery infrastructure for game downloads in the future.
What are you working on, Jeff?
Does anyone have statistics on whether FireTV is doing better with gaming than OUYA? Does anyone actually use the gaming features?
As mentioned, Twitch is a very central part of the esports community. But at the same time, it might not have enough credibility in the "mainstream" startup world. I wonder whether Google was interested more in the Twitch service than in the Twitch team.
Note what Amazon Prime Streaming works on. Then go here:
Note that aside from Apple's devices, Amazon Prime video works only on Amazon's own mobile hardware.
Is this why they're buying Twitch? To make it another "Amazon Exclusive" for Fire devices? Frankly that would explain a lot. If they can lock the content down it will force people into their ecosystem if they want mobile access.
None of the reporting outlets have any substantial details yet though.
Twitch has the potential to both compete in the same sentence as ESPN ($$$ bil) and provide a new model for content delivery in sports content consumption.
The deal could be announced as soon as Monday, the person said.
Google Inc. had earlier been in talks to acquire Twitch, but those talks cooled in recent weeks, according to people familiar with the matter.
Twitch, launched in June 2011, is the most popular Internet destination for watching and broadcasting videogame play. The startup raised $20 million from investors, including Thrive Capital and videogame-maker Take-Two Interactive Software Inc. in September.
News of the acquisition was earlier reported by tech website The Information.
Imagine what our reaction would be if someone posted a link to their own blog that doesn't allow visitors to read the content without paying $1, even if there's a work-around like inspecting the element and hiding the paywall popup.
The main issue with it is that it doesn't sit well with the requirement for original reporting. In this case, there are 4 sources of which I'm aware (The Information, WSJ, Bloomberg and Recode) who reported on this, and all of them were original reports. Even if The Information was first to press, they all conducted their reporting independently, and the fact that four sources have the same information is very relevant. By linking to only one original report, you're depriving the average HN reader of knowledge of this journalistic consensus.
Even if the link had to be to an original report, it would make more sense to link to The Information, which is also paywalled, but was first to press. But really, I'd prefer the top HN link to be to a site like Ars Technica, which diligently compiled all the different reports.
Maybe this means I'm old now?
I have since found other TV/movie streaming sites, but none are as mature or reliable as JTV was.
I can understand why some inmates should not have access to the internet, to prevent them from directing their activities outside of prison, but I'm sure those are a small minority.
Is that accurate?
The company provides services that make it easier to communicate with inmates, including "The easiest way to send printed photos to your inmate directly from your computer or mobile phone!"
Law enforcement, prison system, healthcare.
Everything else will be imported.
For better or worse, I've been pushing Dylan forward heavily over the last few years and am effectively the primary maintainer.
Over the last couple of years, we've made a lot of progress. We've completely revived the documentation from 1990s era FrameMaker files and have it published via a pretty modern system. We've converted from SVN to Git and moved to GitHub. We've done 4 actual releases. We've improved our platform portability. We've provided some basic debugging integration with LLDB. We've fixed some long standing issues in the compiler and tool chain. We've improved the GC integration on all platforms.
But there's a lot to do. We need to fix our Unicode support. We need to improve the type system in at least minor ways if not major ways. We need to improve how parse failures are handled as the errors are not always friendly. We need more libraries. Some of this is really easy, some isn't. But for pretty much everything, there are bite-sized pieces of work that could be done in a couple of hours/week that would lead to significant gains.
I've wanted to just flat out use Dylan for something and have built some small prototypes with it and while they've worked out well enough, the actual projects themselves didn't go anywhere (unrelated to the use of Dylan).
I think this blog post was triggered by a comment that I'd made publicly yesterday that I'm feeling rather discouraged at this point. There was also a private email that I sent to 19 people who have been involved with Dylan recently, but the author of this post didn't get that email.
I view Dylan, not as a language from the past, but as a stepping stone towards building a better language for the future. We don't have to get bogged down in a lot of the minutiae involved in creating a new language as a lot of the work has been done. We get to focus on things at a different level and those things are just as important. People bring up Goo often when Dylan comes up. Goo is interesting, but the implementation is nothing close to being industrial enough to survive an encounter with the real world.
I came to Dylan because I saw the mess that Scala and other languages were. I didn't like where they were going and following some people on Twitter like https://twitter.com/milessabin and others seems to show that I'm not alone.
And that's why I'll probably keep at it with Dylan. I want a better future and I'm going to keep trying to build it.
> college kids on comp.lang.lisp asking for the answers for problem-set 3 on last night's homework
Surely not during the Naggum days. CLL was a hostile wasteland.
> That is the lesson of perl and python and all these other languages. They're not good for anything. They suck. And they suck in libraries and syntax and semantics and weirdness-factor and everything.
What? How have you not heard of CPAN? There is not a single language in the world that can touch Perl's libraries. I'm not sure why you feel the need to toss either Perl or Python under the bus to make some petty point about Dylan's lack of popularity. Python replaced Scheme at MIT. It's time to move on. I know I have.
You have to have your head pretty far up your own ass to not see how much Common Lisp sucks. It's a language designed by committee, and it looks like it.
I've used Erlang too. For everything Erlang does well, there are countless areas that make you want to bang your head against the desk.
Languages don't matter. Platforms matter. APIs matter. Playing nicely with the rest of the world fucking matters. Common Lisp wouldn't.
This is why ecosystems are, for most people, more valuable than intrinsic language features: Tribalism along the lines of "We have X thousand people backing us up"/"X thousand devs can't go wrong". People don't care about monads or macros, they care about feeling like they're part of a large community.
Erlang: see 'career wrecker.'
Please. Someone, wreck my career some more.
Unlike Dylan, Erlang was created by a company for a purpose, with very clear goals, and it did and still excels at meeting those goals; nothing out there gets close to the qualities it has. Not everyone needs those qualities, but sometimes nothing else will do. Erlang is at the core of many solid industrial applications: mobile-to-internet gateways, message queues, trading systems, large databases, handling millions of concurrent connections and billions of messages per day for WhatsApp.
What does Dylan do? This is the second time I heard about Dylan. I've played with Mercury, Prolog, Nimrod, Curry (Haskell + Logic programming) and other rather obscure languages but haven't heard about Dylan much.
Some languages just don't make it; sometimes it is just luck. However, I don't like the disparaging and angry remarks thrown around at other languages and ecosystems. That does nothing to promote Dylan; it only pushes people away.
Unfortunately, there are large network effects to programming languages, and the stuff that really makes you productive - libraries and tooling - Dylan just lacked. It wasn't practical to write anything larger than an ICFP contest entry in it. So I went from Dylan to Python, which lacks many of the really cool language features and is a lot slower, but at least comes with so many batteries included that you can whip up a prototype for anything really quickly.
A Conversation with Alan Kay, ACM Queue, 2004, https://queue.acm.org/detail.cfm?id=1039523
One may think that because closures are finally entering the mainstream (after what, 5 decades?), we have hope for those things to come as well.
But then I saw Swift. Built-in support for an Option type, so one can avoid null pointer exceptions. At the same time, this language manages to recognize the extreme usefulness of algebraic data types, without using them in their full generality. Like, why bother with a generic feature when we can settle for an ad-hoc one?
I'd give much to know what went so deeply wrong in our industry that we keep making such basic mistakes.
And what filled the dynamic language niche? Interpreted languages like Ruby and Python that have yet to achieve 1970's levels of implementation sophistication. But simplicity made them agile and portable and allowed resources to be spent on libraries.
A few other things:
* Like rdtsc says, Erlang does not fit the mold in a lot of ways. It had a large corporate sponsor from the get-go, which was good for it in some ways (money for developers), and perhaps bad in others: lots of production code early on means it's not possible to change stuff that is less than optimal.
* As per my article, new/small/unpopular languages need a niche, a beachhead if they are to gain traction. You can't create a new language and platform from scratch with an ecosystem as big as Java's (indeed, piggybacking on the JVM is a popular strategy because of this), so you'd better have one thing where you absolutely kick ass. Erlang has this in spades, for instance. Ruby had Rails. Tcl had Tk and a few other "killer apps". PHP was way easier to get started with than mod_perl, back in the day. I don't see this for Dylan, particularly, but then I don't know much about it, so maybe it's there somewhere, and BruceM will figure it out and the language will gain a following.
When I'm not programming I like to get some distance from my work and hang out with people who have diverse interests. When I'm serious about programming I use Common Lisp. When I'm serious about connecting with other people I use English. Many people seem to confound these pursuits and end up with languages that compromise weakly between talking to people and talking to computers.
For me, programming is about solving business problems ASAP in a manner that is amenable to a long series of minor improvements over many years. Having a stable language standard with language improvements happening as add-on libraries is a huge win. My old code keeps working, so I can stay focused on improvements instead of bailing water.
Also Lisp has the seemingly magical property of being one of the easiest languages to read, understand, and reason about by programmers who have the aptitude to learn it--and it scares the pants off of people who don't. With all of the "expert programmer" pretenders out there it's helpful as an employer to have something that separates the serious programmers from the pretenders.
I have a soft spot for Dylan from magazine articles back in the Newton days.
Here we have a Lisp like language, with a more approachable syntax for the average Joe/Jane developers, AOT compilation and Apple kills it.
I think it is also important to bring out the paper from Erik Meijer about using Visual Basic to hook the typical enterprise developer into FP (via LINQ).
And then maybe try to get more like him/her.
Algebraic types? Dependent types? You'll never see them. They're too ... research-y
Why can't those features be baked into C++ or Java?
Admittedly, HTML2Canvas (or anything that creates <canvas> from the page so it can be screenshotted) is pretty cool and allows this great sort of annotation/selection capability where a user can choose to highlight what went wrong and provide some feedback. That's great. I found html2canvas to be really resource heavy when taking the snapshot, and there was no way to do it 'automatically'. Why does that matter? When debugging, I want history.
My ideal bug-reporting tool would not only give user specs, stack traces and a screenshot, but a history of screenshots going back to an arbitrary length so I can figure out exactly what the user was doing that triggered that error. Trial and error could lead one to the correct sequence of screenshots but I've found one per 10s / 6 max (so 60s of history) to be a pretty good way to do it.
So if you know you want history, how do you get it? You can't use html2canvas to just take snapshots the entire time the user is using the page (I've tried (work account on GitHub): https://github.com/niklasvh/html2canvas/pull/270). It's far too slow, memory heavy, and if done asynchronously can create really strange render bugs if the user scrolls or mutates the DOM. Some of these issues can be fixed but it will forever be slow and take a ton of CPU. So I came up with something else.
At the same time I was working on a web scraper + screenshotter using PhantomJS that talked to RabbitMQ. I thought: why render the screenshot at all on the client? I have all the same resources; I can recreate the environment from source, on my own time, asynchronously, without making the client do the work for me and slowing down their browser. So the implementation I came up with does the following:
1. Maintains a rolling 5-element array of HTML snapshots (usually via document.firstChild.outerHTML), tied to setInterval. That part is easily configurable to your preference. The snapshots even contain a virtual cursor element that I just track using mousemove events.
2. On error, collects the usual data using Raven (the client-side library that reports to Sentry, the open-source error aggregator).
3. Rather than hit Sentry directly, the browser POSTs to a local route. That route separates the HTML snapshots from the Raven/Sentry data, and creates UUIDs for each snapshot to create S3 filenames.
4. The Sentry data is sent to Sentry along with the S3 filenames, and the HTML + filenames are put on a RabbitMQ queue.
5. A separate process is listening to the queue, grabs HTML off the queue and renders the page using the CSS (no scripts) from the running webserver. The screenshot is then uploaded to S3.
6. A Sentry plugin displays those images in the bug report, assuming the S3 urls are valid. They may take a while to resolve but they usually show up in < 2 minutes.
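The rolling buffer in step 1 is the one part with a subtle detail: old snapshots must fall off automatically as new ones arrive. A minimal Python stand-in for the browser-side logic (names and sizes are illustrative, not the actual implementation):

```python
# Sketch of a rolling snapshot buffer: keep only the N most recent
# HTML snapshots. In the browser this would run on a setInterval and
# capture document.firstChild.outerHTML plus a virtual-cursor element.
from collections import deque

SNAPSHOT_LIMIT = 5                         # last 5 snapshots, per step 1
snapshots = deque(maxlen=SNAPSHOT_LIMIT)   # old entries drop off automatically

def take_snapshot(html: str) -> None:
    snapshots.append(html)

# Simulate 8 ticks of the interval; only the 5 most recent states survive.
for i in range(8):
    take_snapshot(f"<html>state {i}</html>")

print(list(snapshots))
```

On error, the contents of this buffer are what would get POSTed to the local route in step 3.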
Since there are a lot of moving parts, I've always wanted to create a simple devops service out of this, but I haven't had the time, being busy with other projects. But if there is interest, I could open-source some of the components.
I think this is too optimistic. For example, you can divide by a 0 value but not by a 0 constant:
    1.0 / 0.0               // compile error

    zero := 0.0
    1.0 / zero              // infinity

    foo := 0.0
    fmt.Printf("%f", -foo)  // outputs -0.0
    fmt.Printf("%f", -0.0)  // outputs 0.0
edit: it has been updated now.
This is what "growth hacking" really looks like today...
1. Manipulate a non-perfect signal-to-noise ratio ranking scheme with the most traffic (PageRank/EdgeRank)
2. Gain massive popularity
3. Sell your business to a greater fool
4. Ranking scheme changes rendering your model worthless
Demand Media, Zynga, Socialcam, etc. etc. ....and now BuzzFeed. The list goes on and on. The winners are the investors and the ones creating the ranking themselves, no one else.
What in the world do the other 20% want?
The race to the bottom among aggregators, which started quite some time back with HuffPo (nearly a decade old now) has become quite maddening. I've long since resorted to flagging such content as spam, where possible (curious that comments here suggest FB has an "I don't want to see this" option, G+ most certainly doesn't), and increasingly have resorted to unfollowing or blocking those who post such crud.
Much as xkcd suggested a format for getting bots to contribute usefully to online forums, it would be quite slick if search and social engines would reward actually good and quality content.
My takeaway is this: doing something quick and dirty for a first draft and improving it later often leads to better results in the long run than planning and over-engineering a solution beforehand, because you can start refining details much earlier or throw away bad approaches without investing too much time.
Even if you don't like his style, you have to admire his pragmatism, productivity and humility.
It's a series of shell scripts I use that screencap videos of coding and set them to music (so, for example, rendering a 48-hour coding competition to a 5-minute song, or as I more typically do for clients, render the development process down to a few minutes for them to watch in fast-time how their development was done).
It's called watchmecode (https://github.com/choptastic/watchmecode) and I just have to do
./make-av-video.sh /path/to/video.mpg /path/to/song.mp3
The result is something that looks like this: https://www.youtube.com/watch?v=Hwn7mfmo0SQ
(disclaimer, I've plugged this before on HN: https://news.ycombinator.com/item?id=5685859)
Either way, I've created a quick landing page to see if anyone would actually be interested in a live streaming site specifically for coding - http://devv.tv
It's nothing pretty but some validation or feedback before I jump head first into this would be amazing.
...multiplayer with several people, of course.
I've been building a system/website to access, search and develop intelligent analytics from PACER court information. We're tracking cases, attorneys, parties, judges, as well as the actual case dockets. The data is a treasure trove of information, and if anyone's interested, I'd be very happy to chat more about it.
The site (a signup for now as I'm working out the kinks in the system) is www.docketleads.com. Email me there or ping me here for more info.
And EDGAR after that.
I assume, based on the weird specificity of what they're removing, that the PACER office is doing this at the request of the individual courts. Which just sort of underscores how awful this is: courts get to decide how public their own opinions are.
I wasn't aware that an opt-in version of this was already on the books. I'm curious to see exactly how much the user is in control of this "technology" in practice. If the user can (a) disable the feature, and (b) is the only person who can initiate a remote shutdown, then it's probably to the consumer's advantage. But I suspect it's only a matter of time before the FBI/CIA/NSA (or local PD) will be able to unilaterally decide it's in the "public interest" to suddenly shut off every phone in a particular geofence.
Cars are also stolen every day, and society manages to get by, through insurance and opt-in theft deterrence tools (both manufacturers and consumers already have plenty of incentive to deter theft). I have a hard time believing that stolen phones are a big enough social problem to warrant a mandate of this scope. Regardless of intent, this power will be abused.
From Wikipedia: "For example, if a mobile phone is stolen, the owner can call his or her network provider and instruct them to "blacklist" the phone using its IMEI number."
Is it because it's actually mutable/not properly authenticated? Or because global blacklist synchronization isn't good enough and not all operators respect them?
With my own phone, I'd love to be able to switch that off/on. Why is that option not available to me?
This feature will be abused.
People used to burgle houses for VCR's once, and for DVD players. Nowadays a DVD player is a giveaway item, nobody gets them stolen anymore.
Dear friend, I am the widow of the former Prime Minister of Nigeria and I need your help to get out of Nigeria where my life is threatened, along with the $50M currently in my bank account. If you help me I am willing to give you 30% of that money, please reply me to see how we can proceed. Regards, Mrs Mary Noscam
Dear Mary, I am very interested to help you, how can I help you to get out of Nigeria? Regards, Mr John Victim.
1. There are examples of bots having actual conversations, such as SHRDLU (http://en.wikipedia.org/wiki/SHRDLU), which was recently submitted here.
2. The answer doesn't have to be very elaborate.
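Point 2 can be made concrete: a scam-baiting responder's replies can be entirely canned. A minimal Python sketch (the trigger words and reply texts are my own illustration, not any real tool):

```python
# Hypothetical minimal scam-baiting responder: replies don't need to be
# smart, just plausible enough to keep a scammer engaged and waste time.
import re
import random

REPLIES = [
    "Dear friend, I am very interested. How do we proceed?",
    "Please send more details about the transfer.",
    "I must consult my bank first, but I want to help you.",
]

def respond(message: str) -> str:
    # Crude trigger: reply only if the mail smells like an advance-fee scam.
    if re.search(r"(?i)million|\$\d+M|bank account|transfer|inheritance", message):
        return random.choice(REPLIES)
    return ""
```

A handful of templates like this, rotated at random, is arguably already "elaborate" enough for the opening rounds of such an exchange.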
They reached the same conclusion and in that case, I can believe it's correct because those ads cost money, so must bring in more revenue, or they'd have stopped a long time ago.
In the case of 419 scams, a large proportion of the scammers may not be that sophisticated. It's entirely possible they really are just as dumb and incapable of spelling as a naive layperson would assume. The fact that "scam baiting" is a thing provides some evidence of that, although it's likely that many of those reports are fake as well.
What % of that was recovered is unknown.
Any team recovering even a small percentage of this becomes a fine acquisition target for the biggest banks in the world, provided the solution works.
Time saved in the prevention of fraud is time saved for banks not handling fraud, angry customers, hiring lawyers or training staff.
Expect one team applying to YC, trying to tackle this problem.
[*] See page 33 of http://www.ultrascan-agi.com/public_html/html/pdf_files/Pre-...
Given that non-productive responses (false positives) are harmful to the scammers, one can think of spamming them with false positive responses as well, right?
And now I realize it's been a few months since I last got one of those mails.
Ultimately, the figure isn't a big deal... I don't know, maybe I'm wrong.
Of course, I am aware of the open-source trackers (like MilkyTracker, etc), but I believe that FT2 played a huge part in the demoscene and the original source code should be preserved.
By the way, recently I started to use BTSync to get back control (I would prefer an open-source implementation, but hey) and it feels so much better. Sample workflow: take pictures with a DSLR, import them on the desktop at work, near-instantly get them synced to my phone, check and remove bad shots on my phone, also rotate and adjust them, have the edited and filtered pics ready on my laptop back at home, and have everything on a Linode where I have some scripts doing renaming and analysing. And, guess what, all of this in China, where Dropbox and Google Drive are often unreachable.
The most important thing in this workflow, which I also use for music, is that when I remove a picture or a track, I want it removed from all the devices, never to appear in front of my face again. It is surprisingly difficult... (In both senses: it is artistically difficult, and necessary, to decide to delete a file for good, and it seems technically impossible with services like Google Photos, Dropbox's camera upload, etc.)
One thing I noted as needing polish was how conflicts and file renames were handled.
When a file was renamed the other browsers acted as if the file had been deleted then recreated.
What am I missing?
Disclaimer: I contributed some parts to the project.
But I never got a good answer to one doubt: I submitted my application just now, just because. But I am in a particularly good period for user growth (first customer two weeks ago, and now things are happening). I will have considerably more traction in October (including a reliable growth rate, as at least two months will have passed).
The question: if I update my application on October 14 with the updated traction numbers, will it make a difference? Will someone actually see it, or will my application already be judged and decided and that's it?
Since last batch, wasn't there a "New Deal" that fixed the amount to $120k for 7%? 
I'm happy to review applications, especially from international founders. Roberto at Glio dot com.
It's like founder-dating with a deadline.
Really glad for the application overhaul, looking good.
If I'm the only founder, is that considered a negative?
One final nitpick would be to move the timeline at the bottom of https://apply.ycombinator.com to the top.
Otherwise, really cool.
My cofounder and I would like to ask for some critique on our answers including the personal questions. However it seems that the statement, "We will send an email to each founder to fill out additional information about themselves." implies that the application must be submitted before we see the personal questions. I just want to confirm if the first submission should be the final draft before we proceed.
Hopefully the questions about hacking a non-computer system and most impressive thing we've built/achieved are included in the personal questions. Those were some of the most fun to answer.
The "How to Apply to Y Combinator" essay hasn't been updated to reflect this change.
Is this a small side web app you guys rolled yourself or are you using a decision-making product (like Submittable, etc) for the back-end?
Wow. I'm not a Marxist, but perhaps this person may want to start by reading Marx's views on dehumanization in industrial production.
Although most markets have historically had a high concentration of output from the top percentile, so it's not exactly a far-fetched proposition to say that 5% of the workforce led development during the Industrial Revolution.
As a close-to-the-heart analogy: everyone here knows that if you graduated with a BSCS and didn't do the IT/accounting or the graphic arts/web design track, then you probably did the stereotypical academic track, with all manner of highly skilled senior-year classes like automata theory, compiler design, maybe some control theory (although that's more EE). I did well in those classes and, like many (most?) people, I'm highly underemployed. I would guess that well over half, maybe 90 percent, of my fellow students in automata class and compiler class are just doing CRUD web apps or mobile apps, which hardly require those skill levels / skill sets.
I'd be slightly interested in sociological commentary on societies where underemployment increases. Does it always increase infinitely, or crash after awhile, or just not matter much?
A better proxy for carpentry skill level of a society might be the total sales of tools and supplies. I think the total economic size of the "at least somewhat skilled woodworker" is larger today than in the olden days.
Another interesting aspect is the expansion of titles. Everyone in a skilled craft, whether it's programming or carpentry, knows some are more equal than others. In carpentry, no matter if you all have the same job title or hobby name, some guys can barely be trusted with material handling and rough carcasses while other guys can be trusted to trim the finest kitchen cabinets, despite all having the same title. And there are obvious IT/CS analogies.
(Companies often reject profit-improving innovations which empower skilled workers. On the flipside, unions to the extent they exist also have the incentive to reject improvements which damage bargaining power. That's one problem with capitalism's built in boss/worker antagonism.)
Another is mind-numbing work. Adam Smith rants about how division of labor makes people "stupid and ignorant as it is possible for a human creature to become... But in every improved and civilised society this is the state into which the labouring poor, that is, the great body of the people, must necessarily fall, unless government takes some pains to prevent it." (http://www.econlib.org/library/Smith/smWN20.html#V.1.178)
Another (since the last century) is the rise of managerialism, with its bureaucracies. David Noble points out that tech can deskill workers and strengthen management, or empower workers and peel away management layers.
I hate this kind of marketing trick.
So let me add: And here's the bad news: your Erlang program can be much slower than a C program on one core. For example, in the Alioth 'benchmarks' Erlang runs between 3X and 30X slower than a C program on one core ( http://benchmarksgame.alioth.debian.org/u64/benchmark.php?te... ), so you may have to throw MANY cores at it before the Erlang program is faster than the C program.
So beware! YMMV.
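To put rough numbers on that warning: assuming (optimistically) near-linear scaling with some parallel efficiency e, a program that is k times slower per core breaks even with single-core C at about k/e cores. A hypothetical back-of-envelope in Python:

```python
# Break-even core count, under the (strong) assumption of near-linear
# scaling: C on 1 core takes time 1; Erlang on n cores takes k / (n * e).
# Setting them equal gives n = k / e. Efficiency 0.8 is a made-up guess.
def cores_to_break_even(k: float, efficiency: float = 0.8) -> float:
    return k / efficiency

print(cores_to_break_even(3))   # best case from the benchmarks above
print(cores_to_break_even(30))  # worst case
```

Real workloads rarely scale this cleanly, so treat these as lower bounds on the core count you'd need.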
That said, I hope that Elixir will improve Erlang's adoption, as the high-availability features are the big deal IMHO.
I'm interested in the language design process.
* I think the negative connotation comes from your logo's dollar sign, $. It looks like a scammy pay-day loan or something. I'm not saying to change your logo, or your website sucks -- it looks awesome! -- I'm just letting you know my take on it.
You make up your own reward value per item. If someone find's your item you can choose to release the reward or not.
Should be "finds"
Yeah it's a nitpick but for some reason these kinds of things really catch my eye.
Edit: Heh... and right above that: Nobody can compete with us! We'll give you free tags and therefore free protection of your item's. Guess my eye isn't as sharp as I thought.
"People are returning lost items to owners through GoReturnMe everyday."
Is this true?
"THIS WAS LOST BY A DAMNED FOOL NAMED HIRAM STEVENS MAXIM WHO LIVES AT 325 UNION STREET, BROOKLYN. A SUITABLE REWARD WILL BE PAID FOR ITS RETURN."
Also, I really think you should consider changing your logo (and matching color scheme). As another commenter mentions, it looks like a pay-day loan logo which has some pretty negative connotations. In addition, you've geared the logo towards the person finding the lost item, when ultimately, your customer is the person with items they're afraid they might lose. For them, you're selling peace of mind, but your logo is not at all reflective of that. You don't really need to worry yourself too much about the people returning lost items as the mention of a reward should be enough to motivate them to visit your website and return the item, if they weren't already inclined to do so.
Did you do much research into rewards vs. no rewards?
Also, something that holds me back from using your service is that I don't like the design for something like my wallet. It looks a bit sporty so I don't want to stick it on my nice leather wallet or slick laptop. What about creating a few different styles?
I think it's a fairly narrow use-case. I'd always just write my name/phone # on a large % of my stuff; but I suppose I would rather put a sticker on some things (phone, wallet, electronics, etc.)
Pretty cool - how long have you been live? And how are you coming up with this 80% "return rate?" Do you have a sense of when things are actually marked "actively lost?" as opposed to "if it ever is... this sticker will come in handy" ?
You wouldn't have to spend a lot if you did it with items with fictional sentimental value or old generation throw away technology.
The only difference in your case seems to be that you are targeting an online audience, which might be more receptive to the idea, but otherwise I'd say it's a pretty crowded space already.
Phone: $100-200
Keys: $40-100
Wallet: $40-100
Tablet: $200-500
Laptop: $40-100
>Nobody can compete with us! We'll give you free tags and therefore free protection of your item's.
Should be "items", not "item's"
Why can't we all be like Japan...
PS - I would pay for a small card and/or keychain.
If you do know that -- like you're writing a GC for a higher-level programming language -- you can write a precise GC instead and avoid all of the (very interesting) muckiness this article goes through.
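For illustration, a precise collector typically tracks its roots explicitly instead of scanning memory for things that look like pointers. A hypothetical sketch (the `gc_push_root`/`gc_pop_root` names and the fixed-size root table are my own invention, not from the article):

```c
/* Hypothetical sketch of the "precise" alternative: the language
 * runtime registers every root explicitly, so the collector only
 * ever follows pointers it knows are pointers. */
#define MAX_ROOTS 64

static void *roots[MAX_ROOTS];
static int nroots;

static void gc_push_root(void *p) { roots[nroots++] = p; }
static void gc_pop_root(void)     { nroots--; }
```

Generated code (or hand-written bindings) would push each live reference on entry to a scope and pop it on exit, so the mark phase just walks `roots[0..nroots)`.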
I wrote an article on that a while back:
Heh. No wonder it's 32 bit! You can only hope to get away with this on a register-poor ISA like i386.
I wonder if you could use setjmp to portably get access to register values?
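A rough sketch of the setjmp idea, assuming the platform's jmp_buf actually captures the callee-saved registers (the C standard doesn't guarantee its contents are inspectable, so this is the same kind of non-portable hack the article discusses). `scan_region` here is a stand-in conservative scanner that just counts heap-range hits:

```c
#include <setjmp.h>
#include <stdint.h>

static uintptr_t heap_lo, heap_hi;   /* assumed bounds of the GC heap */
static int hits;                     /* words that looked like heap pointers */

/* Stand-in conservative scan: treat any in-range word as a pointer. */
static void scan_region(void *start, void *end)
{
    for (uintptr_t *p = start; (char *)p < (char *)end; p++)
        if (*p >= heap_lo && *p < heap_hi)
            hits++;
}

static void scan_registers(void)
{
    jmp_buf regs;
    setjmp(regs);   /* spills (at least) callee-saved registers to memory */
    scan_region(&regs, (char *)&regs + sizeof regs);
}
```

Whether this catches *all* registers is ABI-dependent, which is why real conservative collectors like Boehm's use platform-specific register-flushing code instead.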
Also, while this is a nice explanation, does anyone actually use GCs while writing code in pure C? I never found the idea of calling free() that troublesome. On the other hand, writing a GC in C for another language is obviously a good use case.
"Thirdly. Please don't use this code. I did not intend for it to be wholly correct and there may be subtle bugs I did not catch."
So too with GC: you start with a hook into your allocator, and you end up coloring references and aging them and disabling compiler optimizations and all sorts of other things that lead to really unexpected behavior in your environment.
Maybe you should also mention weak refs and volatile: volatile to keep pointers on the stack, so that you don't have to inspect the registers. And some easy ways to make it precise, e.g. int tagging, float boxing, ...
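A minimal sketch of the int-tagging idea, assuming pointers are at least 2-byte aligned so the low bit is free to mark immediate integers (the `value`/`box_int` names are just for illustration):

```c
#include <stdint.h>

/* Low bit 1 = immediate integer, low bit 0 = (aligned) pointer.
 * With this scheme the collector can tell the two apart exactly,
 * so scanning becomes precise for tagged words. */
typedef uintptr_t value;

static value    box_int(intptr_t i) { return ((uintptr_t)i << 1) | 1; }
static intptr_t unbox_int(value v)  { return (intptr_t)v >> 1; }
static int      is_int(value v)     { return (int)(v & 1); }
```

The cost is one bit of integer range and a shift on every arithmetic op, which is the usual trade-off dynamic-language runtimes make.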
It scans heap memory, and if a word looks like it points into allocated memory, it's assumed to point into allocated memory, and that memory is marked as used.
When dealing with numbers, such as times, file sizes, etc., you will inevitably store an integer value that looks like a pointer into a valid block.
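To make the false positive concrete, here is a toy version of the "looks like a pointer" test with made-up heap bounds; an ordinary integer that happens to land in the range passes it and its block is retained:

```c
#include <stdint.h>

/* Assumed heap range, purely for the demo. */
static uintptr_t heap_lo = 0x100000;
static uintptr_t heap_hi = 0x200000;

/* Conservative test: any pointer-aligned word inside the heap range
 * is treated as a pointer, including plain integers such as file
 * sizes that merely happen to fall there. */
static int looks_like_pointer(uintptr_t word)
{
    return word >= heap_lo && word < heap_hi
        && (word % sizeof(void *)) == 0;
}
```

This is exactly why conservative collectors can only ever leak (retain too much), never free a live object: ambiguity is always resolved in favor of "it's a pointer."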
The reason you can do mark and sweep in Java, is that there is a distinct wrapper for all pointers, so you can disambiguate pointers from data. You can't do that in this algorithm, it's impossible. This is also the same reason that disassembly of executables is so hard; it's not always clear what's data and what's instructions.
2. Especially if simple means you don't have any interrupts or threads.
3. Writing anything is a lot easier if you carefully exclude "debugging" from "writing".
When you have GC bugs, they don't always occur in the collector itself, but rather when the code which relies on GC breaks the conventions that allow GC to work correctly (because that code is generated by a C compiler which doesn't understand your conventions, and so they are ensured by hand).