hacker news with inline top comments    14 Aug 2017
1
Metal Slug 2 reducing the slowdown system11.org
70 points by shawndumas  3 hours ago   7 comments top 4
1
justinjlynn 1 hour ago 0 replies
It amuses me that the author seems to think offering a flash/mod service for people who don't want to or can't modify their carts is abhorrent and yet still knowingly acquires, owns, and modifies an illegally distributed and/or resold cartridge.
2
mikejmoffitt 1 hour ago 0 replies
3
ringaroundthetx 2 hours ago 1 reply
The slowdown was one of the coolest parts of the game though.

So much action that the framerate drops and you are still surviving the chaos.

This kind of "glitch" was formative for a lot of slow motion in films.

4
Nexxxeh 2 hours ago 2 replies
Does this need to be tagged as [2015]? Still, interesting quick read. How did the error pass QA? It would seem like something that'd be spotted...
2
An Intro to Compilers nicoleorchard.com
324 points by luu  9 hours ago   46 comments top 10
1
simonebrunozzi 4 hours ago 0 replies
Ha, sweet memories.

I am 40 now; back in 2004-2006, for two years I was one of the youngest professors in Italy, teaching "compilers and programming languages".

I still feel so fortunate to have had that experience.

To help my students save the relatively huge amount of money to buy the dragon book(s), I created a condensed version of the parts that were required for the course - and I didn't charge anything for it, unlike what usually happens pretty much everywhere in Italy. They were available in PDF and OpenOffice formats, on the website that I created for the course (yes, the CS department didn't really have a proper website to use as CMS - I kid you not).

You can find the material here, it's in Italian but it might be fun to take a quick look: http://www.lulu.com/shop/simone-brunozzi/dispense-lab-lingua...

Such good memories.

2
userbinator 5 hours ago 2 replies
I know it might be a contrived case, but it's rather disappointing to see that even with the -O2 optimisation level, the final generated code contains 43% (3/7) useless instructions to mess around with RBP. I mean, it should be bleeding obvious to the compiler that this function doesn't require any stack allocation, much less an RBP-based one (the tradeoff between using RBP/RSP-based allocation is far more subtle, and I can forgive a compiler for choosing nonoptimally in that case), so why did it emit those useless instructions?

There are plenty of articles about how intelligent compiler optimisers can be, and I don't doubt that they can do some pretty advanced transformations, having read a lot of compiler output; and then I see things like this, which just make you go why!?!? It's puzzling how compilers behave, applying very sophisticated optimisation techniques but then missing the easy cases. The effect might not be as great as more advanced optimisations, but neither is the effort required to do this type of optimisation. This is not even low-hanging fruit; it's sitting on the ground.

> However, the compiler will use as few registers as possible.

Compilers usually try to use all the registers they can (although sometimes they do exhibit odd non-optimal behaviour, as described above); eschewing register use and using memory instead will increase code size and time.

3
bitL 8 hours ago 5 replies
BTW, can anyone please suggest a good online compiler course for creating your own programming language with labs/exams/certs? I can find pretty good courses for almost anything but not for compilers... Something that would focus on handling grammar (LALR, LL, CYK etc), ASTs, semantics, types, code-completion, imperative/OOP/functional/logical/self-modifying constructs etc. or even NLP-to-AST conversion...
4
pbiggar 6 hours ago 2 replies
Pet peeve about compiler education that is not repeated by this article (yay, well done!): the way you have to wade through months of parsing bullshit to get to the really interesting stuff (optimization and static analysis).
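For readers curious how little machinery the parsing step can actually take, here is a toy recursive-descent parser for a made-up two-operator grammar (entirely my own illustration, not from the article or any particular course):

```python
# Toy grammar (my own example):
#   expr   := term ('+' term)*
#   term   := factor ('*' factor)*
#   factor := NUMBER
# Precedence falls out of the call structure: expr calls term, term calls
# factor, so '*' binds tighter than '+'.
import re

def parse(src):
    tokens = re.findall(r"\d+|[+*]", src)
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def take():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        return tok

    def factor():
        return ("num", int(take()))

    def term():
        node = factor()
        while peek() == "*":
            take()
            node = ("*", node, factor())
        return node

    def expr():
        node = term()
        while peek() == "+":
            take()
            node = ("+", node, term())
        return node

    return expr()

print(parse("2+3*4"))  # ('+', ('num', 2), ('*', ('num', 3), ('num', 4)))
```

The AST comes out with multiplication already nested below addition, with no grammar tables or generator tooling involved.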
5
Tehnix 3 hours ago 3 replies
> Some compilers translate source code into another programming language. These compilers are called source-to-source translators or transpilers

I was wondering how compile-to-JS languages would be classified. Technically they are all transpilers, but seeing as there is no lower-level for the browser than JS (barring WebASM), one could also argue that they are in fact also compilers.

Anyone have any thoughts on this?

6
bogomipz 2 hours ago 0 replies
Wow, I enjoyed both the content and the design aesthetic of this post. I guess that's what happens when a designer turns software engineer? Please post more like this!
7
jacquesm 7 hours ago 1 reply
If this stuff interests you, even though it is quite old you really should read the 'dragon book':

https://en.wikipedia.org/wiki/Compilers:_Principles,_Techniq...

8
fleshweasel 5 hours ago 1 reply
I'm a little confused as it looks like an image in the article labels a string literal as being a comment. Am I misunderstanding something?
9
WalterBright 3 hours ago 0 replies
One nice thing today is the source code to professional compilers is readily available. Pick one for a simpler language, and just read the code.
10
CalChris 8 hours ago 1 reply
This was really good. I could imagine it being required background reading for a lab of the compilation pipeline in 61C at Berkeley. Add in a section on linking.
3
Efficient Immutable Collections PhD Thesis [pdf] steindorfer.name
143 points by tjalfi  7 hours ago   10 comments top 2
1
norswap 5 hours ago 2 replies
This is really cool work by Michael, a collection of configurable data structures.

Underlying most of them is CHAMP - a compressed hash array map trie. Essentially it's a trie over the hash of the objects inserted in the map. It's compressed using a clever technique that involves bitmaps.

I made a toy implementation of it to get a sense of how it works. There are some accompanying notes that you might find useful: https://github.com/norswap/triemap
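A rough sketch of the bitmap-compression trick described above (my own toy, mutable for brevity, whereas the thesis's structures are persistent/immutable): each node consumes 5 bits of the key's hash and, instead of a sparse 32-slot array, stores a 32-bit bitmap plus a dense array, using popcount to find a child's position.

```python
class Node:
    def __init__(self):
        self.bitmap = 0      # bit i set => hash-chunk i has an entry
        self.entries = []    # dense array, one slot per set bit

    def _index(self, bit):
        # popcount of the bits below `bit` gives the dense-array position
        return bin(self.bitmap & (bit - 1)).count("1")

    def get(self, key, shift=0):
        bit = 1 << ((hash(key) >> shift) & 0x1F)   # next 5-bit hash chunk
        if not (self.bitmap & bit):
            return None
        entry = self.entries[self._index(bit)]
        if isinstance(entry, Node):                # sub-trie: recurse
            return entry.get(key, shift + 5)
        k, v = entry
        return v if k == key else None

    def insert(self, key, value, shift=0):
        # Toy semantics: a chunk collision between different keys just
        # overwrites; the real structure splits into a sub-node instead.
        bit = 1 << ((hash(key) >> shift) & 0x1F)
        i = self._index(bit)
        if self.bitmap & bit:
            self.entries[i] = (key, value)
        else:
            self.bitmap |= bit
            self.entries.insert(i, (key, value))

m = Node()
m.insert(1, "one")
m.insert(2, "two")
print(m.get(1), m.get(2), m.get(3))  # one two None
```

The payoff is memory density: a node with three children stores exactly three slots plus one machine word of bitmap, yet lookup stays O(1) per level.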

2
devrandomguy 4 hours ago 1 reply
Clojure devs: 3.5 is the figure you are looking for, and it does look very good, at first glance.
189 points by ycmbntrthrwaway  10 hours ago   34 comments top 11
1
pacificresearch 17 minutes ago 0 replies
I fail to see how this is a major improvement over OMEMO? OMEMO is also an asynchronous multi-party chat algorithm, except it's already widely adopted by clients on several different platforms (Android, iOS, Windows, Mac) and has also received a significant amount of attention from security researchers.

OMEMO's cryptographic security has already been audited as well: https://conversations.im/omemo/audit.pdf . I should know as we (Pacific Research Alliance) funded the audit of OMEMO ;) . Auditing merely the protocol seems a little problematic, it's quite rare for vulnerabilities to be in an encryption protocol itself and much more common for it to be in the implementation. There doesn't seem to be any application which actually implements this library right now, let alone a network capable of supporting it. In OMEMO's case we also audited the OMEMO implementation in Conversations where it was originally conceived.

The only difference I can tell from their website is "Room consistency: Group chat participants are confident that they are in the same room". This seems like a pretty niche area to be concerned about, and in practice can be solved by a properly secured network. Although I am no cryptographer I believe OMEMO may offer the same quality as well, because all the messages must be encrypted for each participant, so at worst you could fake an identical room with identical participants, which doesn't really seem like a valid security problem.

While I love to see new research and further development into this area, it seems this is a little late to the party.

2
sillysaurus3 9 hours ago 2 replies
Some of you might recognize NCC Group, the company that performed the cryptographic review. They acquired Matasano, tptacek's prior company. Collectively they have some of the most talented pentesters and cryptographic analysts.

It's a safe bet that the (n+1)sec protocol has an excellent security rating on the basis that they only found one low-severity issue and two informational issues during the analysis.

https://github.com/equalitie/np1sec/raw/master/doc/NCC_Group...

Note the caveat:

> NCC Group reviewed the full specification; however, only the portions of the library that lined up with the protocol were reviewed. NCC Group did not perform a line-by-line review of the entire library. Additionally, the review focused on the library and not a chat client implementing the library.

This means that the theory is sound, but the library itself wasn't actually pentested. That isn't an indication that there are problems -- just that no one has looked for them yet. As a specific example: https://github.com/equalitie/np1sec/blob/05b73b506b83be9724c... It sounds like assessing memory errors was outside the scope of the security review, so it seems unlikely that anyone was thinking about questions like "is there a way for an attacker to trick buffer.size() into becoming -1 and DoSing the system?" It was focused mostly on the mathematical soundness of the cryptosystem.

It's extraordinarily expensive to schedule a two-week pentest, but implementation errors are far more commonly exploited than attacking the properties of a cryptosystem directly. It might be good to schedule an additional pentest if possible.

That said, this is mostly incidental. Congrats on shipping!

3
cyphar 7 hours ago 1 reply
Matrix also has an OTR-like multi-party encryption scheme based on the Axolotl ratchet called Olm[1]. It also went through an NCC security audit[2]. I believe it has many of the same features as (n+1)sec, so I'm a little confused why they said

> Now that a first protocol for secure distributed multiparty chat exists

are they not aware of Olm, or do they not think it provides the same guarantees?

4
hobarrera 7 hours ago 1 reply
In reality, the most important thing isn't really the protocol, but how to market it.

We've had open, standard, (some also federated) IM protocols that were on-par with proprietary ones at the time multiple times in the past.

The problem has always been the same: no mainstream adoption, only nerds use it, they stagnate, and a few years later, they're behind what mainstream proprietary apps use. What we really really need to work on is how to get the general population to adopt these things rather than Facebook's next IM. And that's the really hard part!

5
lucb1e 9 hours ago 1 reply
From this[1] answer on the IT Security StackExchange site:

> N+1Sec is a similar protocol [to multi-party OTR, which requires participants to be online at all times to renegotiate keying material] with some improvements. Note that these protocols have a lot of algorithmic complexity and tend to scale badly, especially when you add latency into the mix.

It's unclear to me, though I can hardly imagine it being the case, whether this protocol requires all participants to be online at all times. The quoted answer surely sounds like it has that drawback, which is why I never really considered it as an option (leaving the Signal Protocol with "server-side fan-out" as the only good option).

If it does not have that drawback, having another protocol is a great thing, assuming what Wire says is true regarding OpenWhisperSystems trying to get millions from them for implementing a supposedly open source protocol.[2]

6
ycmbntrthrwaway 9 hours ago 1 reply
How does it compare to OMEMO? Here is what the page says about OMEMO (aka the Signal Protocol): "It is an incredibly powerful solution but it is reliant on asynchronous communication and is therefore also dependent on the messaging platform, a central server that can become a single point of failure (or metadata collection)." But AFAIK OMEMO works with XMPP even with federation. What are they talking about?

Well, XEP-0384 [1] says users must publish their keys via PEP (personal eventing protocol), but that is to allow sending messages when the recipient is offline. And it does not leak more metadata than already leaks during actual message transmission.

[1] https://xmpp.org/extensions/xep-0384.html

7
Does anybody know how WhatsApp does it currently?
8
natch 5 hours ago 2 replies
"Forward secrecy" is listed as a feature. I've heard of "perfect forward secrecy" -- is there a distinction between the two?
9
daurnimator 6 hours ago 1 reply
I've been looking for a secure alternative to IRC. Looking through this:

Is there a way to turn off forward secrecy? For many use cases you don't want it. I guess you can always add a way after the fact (e.g. by including the previous key in each new message)

> New participants cannot join a channel without approval of all existing participants. Participants know the exact set of participants in the channel at all times.

This seems problematic for anything more than a trivial number of participants.
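On the forward-secrecy question above: the property comes from a one-way key ratchet. A minimal sketch of the idea (my own toy, using bare SHA-256 where real protocols like (n+1)sec or Signal use HKDF plus Diffie-Hellman ratchets):

```python
# Each message key is derived from the current chain key, and the chain key
# is then replaced by a new one derived from itself. Because the derivation
# is a one-way hash, compromising today's chain key does not let an attacker
# recover yesterday's message keys -- that is forward secrecy.
import hashlib

def ratchet(chain_key: bytes):
    message_key = hashlib.sha256(b"msg" + chain_key).digest()
    next_chain = hashlib.sha256(b"chain" + chain_key).digest()
    return message_key, next_chain

ck = b"\x00" * 32          # toy initial chain key
keys = []
for _ in range(3):
    mk, ck = ratchet(ck)   # old ck is discarded after this step
    keys.append(mk)

# Three distinct message keys; recovering keys[0] from the final ck would
# require inverting SHA-256.
print(len(set(keys)))  # 3
```

This also shows why "turning off" forward secrecy, as the commenter suggests, amounts to deliberately retaining or re-transmitting old keys: the ratchet itself never hands them back.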

10
secfirstmd 7 hours ago 0 replies
Congrats to Dmitri and all the folks who have been working on this for a long time. Also kudos to the Open Tech Fund for getting behind it.
11
buttcake 9 hours ago 0 replies
Great.

Now just implement a client using web technologies and distribute it embedded in a standalone, separate Chromium client.

5
Amazon recalls eclipse glasses kgw.com
118 points by alister  7 hours ago   110 comments top 24
1
typpo 4 hours ago 5 replies
I ordered the top-rated eclipse glasses on Amazon a few months ago and they were counterfeit.

If you put them on during daytime you can see indirect sunlight and even my kitchen light. They were shipped from China despite having "Made in the USA" markings and all the proper ISO certification fine print.

I haven't received any communication from Amazon, so people who haven't heard from them should not assume their glasses are safe (contrary to Amazon's statement). I contacted Amazon support and they were quick to initiate a refund. For some reason Amazon rejected my review warning that items from third party sellers may be counterfeit and explaining how to tell.

Here are a couple photos of the counterfeits: https://goo.gl/photos/1XRKw8KBgo3hjHx6A

2
joeframbach 6 hours ago 4 replies
> "Customers may have purchased counterfeit versions of legitimate products," an Amazon spokesperson said when asked about the issue.

Does this have to do with Amazon's practice of commingling goods from various sources together? I imagine in some warehouse, a big box of Panjwani's legit goods all mixed together with some not-so-good-goods.

3
projectileboy 6 hours ago 1 reply
Interesting... the real issue here is Amazon's long-standing implicit tolerance of counterfeit goods from disreputable suppliers. I wonder how many more incidents like this it will take before Amazon finally does something.
4
abirkill 5 hours ago 2 replies
This isn't limited to glasses. I purchased a 12x12 sheet of solar filter film from a seller on Amazon, manufactured by Thousand Oaks Optical (who are listed on the American Astronomical Society's list of reputable vendors), and I'm also being refunded.

I checked the Thousand Oaks Optical site and they have a list of legitimate resellers of their products, and the Amazon seller I used is listed. I'm surprised Amazon didn't do the same basic checks before e-mailing me.

When I was searching for more information after receiving the e-mail, I also found someone on an astronomy forum[1] who is being refunded for a telescope that appears to retail for around $1199.

5
oasisbob 1 hour ago 0 replies
Buying any PPE from Amazon is probably a bad idea. Counterfeit or not, the barrier to entry is just too low.

E.g., there are specialized climbing harnesses on Amazon which confuse the CE EN standard and the concept of notified bodies. (Which certify conformity.) Leads to hilarity like a climbing harness claiming conformity with the standards for intubation tubes. Or climbing equipment which is mislabeled as to its country of origin, with basic specs like dimensions being inaccurate.

The sellers just don't care, and Amazon doesn't care enough to stop them. It's barely one step above eBay for a lot of items.

6
sixQuarks 5 hours ago 2 replies
Here's the email I got, the last sentence had me chuckling. We hope to "see" you soon.

-------

Hello,

We're writing to provide you with important safety information about the eclipse products you purchased on Amazon (order #---- for TOLOCO Solar Eclipse Glasses, CE and ISO Certified Safe Solar Shades Filter for Solar Eclipse Viewing (3-Black)).

To protect your eyes when viewing the sun or an eclipse, NASA and the American Astronomical Society (AAS) advise you to use solar eclipse glasses or other solar filters from recommended manufacturers. Viewing the sun or an eclipse using any other glasses or filters could result in loss of vision or permanent blindness.

Amazon has not received confirmation from the supplier of your order that they sourced the item from a recommended manufacturer. We recommend that you DO NOT use this product to view the sun or the eclipse.

Amazon is applying a balance for the purchase price to Your Account (please allow 7-10 days for this to appear on Your Account). There is no need for you to return the product. You can view your available balance and activity here: https://www.amazon.com/gp/css/gc/balance/

For more information about safely viewing a solar eclipse please see the NASA and AAS websites. If you purchased this item for someone else, please pass along this information to the recipient.

We hope to see you again soon.

Sincerely, Customer Service

7
mikeash 2 hours ago 2 replies
What kind of piece of shit makes these things?

It's one thing to make a crappy knockoff phone charger. It may be less efficient and even less safe, but you can probably rationalize it away since they mostly work OK. But this is a piece of safety equipment with a single function, and people only buy it to use it in one way, which will cause direct and immediate harm if the product fails. How can someone live with themselves after doing this?

8
jdavis703 3 hours ago 6 replies
Is there anything particularly dangerous about viewing the sun directly? When I was dumb and young I'd look at the sun for seconds at a time without any problems developing so far. Is the danger that an eclipse encourages you to stare at the sun without encountering any pain?

9
dwaltrip 5 hours ago 7 replies
Does anyone have good recommendations for getting a pair as of today (before next weekend)? I'm in the bay area, if that helps.

10
twoodfin 5 hours ago 0 replies
zzalpha predicted this a couple of weeks ago: https://news.ycombinator.com/item?id=14877216#14877731

11
quesera 4 hours ago 1 reply
If you don't trust Amazon's vendor commingling practices, be sure to test the film: https://www.space.com/37698-solar-eclipse-glasses-safety-che...

12
sundvor 46 minutes ago 0 replies
> "We want customers to buy with confidence anytime they make a purchase on Amazon.com"

That's a bit rich, what with the 2nd hand camera lens scam the other day and all.

13
halfnibble 5 hours ago 5 replies
How do we still have issues with counterfeit products in 2017? There should be electronic records with audit trails for every shipment into the US. This ordeal is probably going to bankrupt several small businesses.

14
ams6110 5 hours ago 1 reply
I was always taught to never view an eclipse directly with any kind of filter/glasses. Use a simple lens, or even a pinhole, to project the image onto another surface and view it that way.

15
taneq 5 hours ago 0 replies
Selling "eclipse glasses", or anything else cheap that encourages people to look directly at the sun, is a terrible idea to begin with. All it takes is some muppet to misread (or ignore) the directions and the next thing you know, they're blind and you've got a lawsuit on your hands.

16
Stratoscope 5 hours ago 1 reply
You don't need eclipse glasses. If you are traveling to (or live in) the zone of totality, don't bother viewing the partial eclipse at all. It's not what you are there for. You are there to experience the total eclipse in all its glory, and you can't use eclipse glasses for that.

Your best viewing tool for the total eclipse is your own eyes and a good pair of binoculars. Yes, plain, unfiltered binoculars. During totality you can look directly at the solar corona. Not only do you not need eye protection, but you'll miss the whole thing if you use any kind of filter. This is true only during totality, of course.

I recommend doing what hundreds of us did on an Oregon hillside in 1979. During the first partial phase, we put on sunglasses (just ordinary sunglasses) and looked away from the sun. The purpose of this was to get our eyes a bit dark-acclimated, so when it went total we would have an even better view. By looking away from the sun, you also have a chance of seeing the other interesting effects on the ground: the wavy ripply patterns that appear just before totality, and the shadow of the moon as it rushes toward you at thousands of miles per hour!

As soon as the eclipse became total, people started yelling "totality!" and we took off our sunglasses, turned around, picked up our binoculars, and enjoyed the awesome experience of seeing the solar corona.

The only danger here is that you have to stop looking as soon as the first bit of the Diamond Ring or Baily's Beads appear. Then you're back into the partial eclipse and must use eye protection. But at that point, most of us just cheered and got ready to go home. After totality, the partial eclipse is not much to get excited about.

If you're not in the zone of totality, then of course you must not look directly at the sun at any time. But if you don't have quality eclipse glasses, you still have some other good options. One is a piece of #14 arc welder's glass. Another that you can improvise on the spot is a pinhole projector.

There are various ways you can make one; at the simplest it can just be two pieces of paper, one that you punch a small hole in with a pin, and the other on the ground or a wall. Hold the paper with the hole so that the sunlight goes through it onto the other. You will get a nice image of the partially eclipsed sun projected onto the other paper. A pinhole projector is the safest way to view the partial eclipse: you are never looking directly at the sun at all, only its projection.

There are numerous plans for building slightly more elaborate pinhole projectors. This page has some good tips: http://www.skyandtelescope.com/astronomy-news/how-to-look-at... Or search for "solar eclipse pinhole projector" to find more.

There is a lot of misinformation going around about eclipse viewing and eye safety. Everything above is true and correct to the best of my knowledge, speaking both from personal experience and extensive research. And I've tried to make very clear the difference between the partial and total eclipse. Of course, if I made a mistake or left anything unclear, please let me know!

There are two main dangers regarding eye safety. One is that someone may view the partial eclipse without proper eye protection and will destroy their vision. This has happened many times, and it is a real tragedy. The other danger, albeit of a lesser sort, is that many people who travel to the total eclipse zone will mistakenly believe that they need some kind of eye protection during totality. As a result, they will completely miss the awesome, life-changing event they went to so much trouble to see.

17
losteverything 6 hours ago 1 reply
Better now than the 22nd. Bravo Amazon. The Walmart I know still has them for $1.
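Stratoscope's pinhole-projector tip is easy to sanity-check numerically: the projected image diameter is roughly the sun's angular size (about 0.53 degrees, a standard astronomy figure, not from the thread) times the pinhole-to-screen distance.

```python
# Small-angle approximation: image diameter ≈ angular size (radians) × distance.
import math

def sun_image_diameter_mm(screen_distance_m, angular_size_deg=0.53):
    return math.radians(angular_size_deg) * screen_distance_m * 1000

for d in (0.5, 1.0, 2.0):
    print(f"{d} m -> {sun_image_diameter_mm(d):.1f} mm")
```

So at one meter the image is only about 9 mm across, which is why the more elaborate projector plans use a long box or tube: more distance means a bigger, easier-to-see image.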

18
kevinthew 4 hours ago 1 reply
I'd say about 50-60% of the electronic items I buy from amazon end up being counterfeit. For this reason, I avoid amazon unless it's unimportant stuff.
19
Bulkington 5 hours ago 1 reply
So our metro library system (well respected) is promoting an eclipse education event, with free viewing glasses. Should the sourcing be suspect? (95% viewing area)
20
Cozumel 6 hours ago 1 reply
>'Amazon said customers who did not receive an email purchased glasses that were safe to use.'

Or they did receive an email and it went into their spam/junk folder. In a situation like this it might be better to email everyone if only to say your glasses are safe. Although that would open them to legal liability if they're wrong.

21
jimktrains2 5 hours ago 1 reply
I bought a set of 15 supposedly ISO-certified glasses and tested them all when I got them. I couldn't see light bulbs or anything else but the sun. I wonder if the seller sold a mix of counterfeits or just wasn't able to get the certification to Amazon as a bulk seller?
22
cmurf 4 hours ago 2 replies
What goes around comes around. Amazon has been turning a blind eye to counterfeit products for a long time, it's been getting worse, not better. And now that they got a clue this particular fraud could CAUSE WIDESPREAD BLINDNESS, they have to take responsibility for their own cesspool.
23
cmurf 4 hours ago 0 replies
A viable safe test would be a clear (unfrosted) incandescent or halogen bulb on a rheostat. Glasses on, bulb off, turn up the brightness somewhat slowly. You should be able to look directly at the filament comfortably if you have good glasses. If you have fakes, as long as you ramp up brightness slow enough, your natural response to squint will kick in before you wreck your retinas.

You definitely do not want to test this on the sun.

24
raverbashing 4 hours ago 2 replies
Use an old floppy disk to see the eclipse. I'm serious

It seems opaque, but it is transparent enough to see the sun

6
Effective Tensorflow github.com
1
zebra9978 13 minutes ago 0 replies
Is anyone using TensorFlow or Caffe2 on mobile? We are trying to build something on Android, but it seems there are no real-life deployments using Caffe2 or TensorFlow on mobile.
2
0xbear 11 hours ago 9 replies
But _why_ use TF when you have PyTorch which is just as powerful, runs noticeably faster for most workloads, _and_ is easy to understand? What are you gaining by using TF these days?
3
RSchaeffer 10 hours ago 2 replies
"The most striking difference between Tensorflow and other numerical computation libraries such as numpy is that operations in Tensorflow are symbolic. This is a powerful concept that allows Tensorflow to do all sort of things (e.g. automatic differentiation)"

Sorry if this is a stupid question, but can someone explain how symbolic operations allow automatic differentiation or link me to a good explanation?
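The symbolic-graph point can be made concrete with a toy reverse-mode autodiff (my own illustration, nothing like TensorFlow's actual internals): because each value records the operation and inputs that produced it, the chain rule can be applied mechanically by walking the graph backwards.

```python
# Each Var remembers its parents along with the local partial derivative of
# the operation that produced it; backward() then propagates gradients by
# the chain rule. Naive recursion, fine for a toy graph.

class Var:
    def __init__(self, value, parents=()):
        self.value = value        # forward result
        self.parents = parents    # (input Var, local gradient) pairs
        self.grad = 0.0

    def __mul__(self, other):
        # d(a*b)/da = b, d(a*b)/db = a
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def __add__(self, other):
        # d(a+b)/da = d(a+b)/db = 1
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def backward(self, seed=1.0):
        # chain rule: accumulate seed * local gradient into each parent
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

x = Var(3.0)
y = Var(4.0)
z = x * y + x          # dz/dx = y + 1 = 5, dz/dy = x = 3
z.backward()
print(x.grad, y.grad)  # 5.0 3.0
```

A plain numpy expression computes only the numbers and forgets how they were produced; the recorded graph is exactly what makes the backward pass possible, which is the difference the article is pointing at.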

7
If I Made Another Monkey Island (2013) grumpygamer.com
199 points by denisw  12 hours ago   62 comments top 21
1
godot 10 hours ago 4 replies
Was actually rather surprised to find that he didn't like MI3 (Curse of)! While I definitely loved MI1 and 2, I actually call MI3 my favorite one. (4 was pretty bad, I couldn't get through even the first 20% of the game)

It takes getting used to the art style, but I felt like 3 really hit the spot with the humor, puzzles, voice-acting (!) and even mostly doing away with verbs (something he stated that he wanted). I totally get that there is a certain charm in 1 and 2 that isn't in 3 because of the major differences in art style. I just feel that 3 is such a strong game that shouldn't be overlooked. If I had to criticize it, I would just say that the last act (or two?) falls short. When the theme park part starts, it felt like they just wanted to rush to finish the game. The rollercoaster ride at the end as the boss battle was also lacking. Though, it was actually structurally quite similar to the final battle in MI2.

2
soneca 11 hours ago 4 replies
Not another Monkey Island game (unfortunately). This is from 2013. After that Ron crowdfunded and built Thimbleweed Park (www.thimbleweedpark.com), an old school adventure game using some (not all) of these ideas. I played it and it is just as great as the best old ones.

From time to time Ron complains that Disney wouldn't sell him the rights to Monkey Island, which I believe is the only reason he hasn't built one more.

Edit: he kept the verbs in Thimbleweed Park and compromised a little in the tutorial (there is an easy mode).

3
iliis 10 hours ago 3 replies
Not all that directly related, but if you like these old adventures and don't know it yet then check out ScummVM! It's an open source project allowing you to play all these games on more or less any modern device (including Linux, Android, iOS etc.)

They also have a few awesome games freely available under http://scummvm.org/games/, for example "Beneath a Steel Sky" (which I'm playing right now, probably for the fourth time or so ;)) or "Flight of the Amazon Queen".

4
the8472 6 hours ago 0 replies
> Fifteen - It would have full voice. It's something we dreamed of back then and we can do it now.

Eh, I don't know. I've never been a fan of voice acting in adventure games. You often have to repeat dialogues and stuff. Voice acting costs time and money, so it always puts constraints on the content that walls of text don't. Plus voice-over dictates speed in a way reading does not.

Some good ambient-matching tunes and occasional sound effects are all it needs.

The other points sound like a great non-plan though.

5
jonplackett 7 hours ago 0 replies
I will throw money in his direction to make any version of monkey island he pleases. I loved monkey island 2 so much. I'm old now and actually got monkey island free with our first Creative sound blaster 2 Soundcard. The games after that just didn't have the same humour and the graphics were overdone. I'm guessing if he hasn't made it by now though then it isn't going to happen :(
6
hobarrera 9 hours ago 0 replies
> I wouldn't raise huge sums of money or break any records

This guy would raise a ton of money, even if that wasn't his goal and he didn't try to oversell anything. His history of making games speaks for itself.

7
guybrushT 2 hours ago 0 replies
"I would lose the verbs. I love the verbs, I really do, and they would be hard to lose, but they are cruft. It's not as scary as it sounds. I haven't fully worked it out (not that I am working it out, but if I was working it out, which I'm not, I wouldn't have it fully worked out). I might change my mind, but probably not. Mmmmm... verbs."

Signature Monkey Island writing style. Ron Gilbert wrote one of the greatest scripts of all time - one can read the entire game here: https://www.gamefaqs.com/pc/562681-the-secret-of-monkey-isla...

8
ParadisoShlee 6 hours ago 0 replies
"It doesn't need 3D. Yes, I've seen the video, it's very cool"

9
coroxout 9 hours ago 0 replies
But with no verbs how would you do the gag from Monkey Island where the verbs change for the parrot?!

http://monkeyisland.wikia.com/wiki/Murderous_Winged_Devil?fi...

(I kid; the verbs are mostly superfluous, as "look" and "use" basically cover all the options, but I really do fondly remember the verb-changing gag above...)

10
stuartmemo 11 hours ago 0 replies
Sounds like he got to do most of those things in Thimbleweed Park - https://thimbleweedpark.com/
11
huhtenberg 10 hours ago 2 replies
Tangentially related - it's a f#cking shame and tragedy that Disney pulled MI1 and MI2 from the App Store. Those are fantastic remakes of the originals.
12
incompatible 9 hours ago 2 replies
"Four - It would be a hardcore adventure game driven by what made that era so great. No tutorials or hint systems or pansy-assed puzzles or catering to the mass-market or modernizing. It would be an adventure game for the hardcore. You're going to get stuck. You're going to be frustrated. Some puzzles will be hard, but all the puzzles will be fair."

Difficult, in the age of the ubiquitous walk-through. You'd need to introduce randomization so that each instance of the game was unique, but even then, sites can describe methods of finding the solution.

13
Jyaif 11 hours ago 0 replies
That man wants to make that game so bad.
14
Mithaldu 9 hours ago 1 reply
I love that he dumps on the art style of the games after MI2. That was the main reason why I never touched any of those.
15
Sevores 10 hours ago 2 replies
I thought that some of these points were digs at how the Broken Age Kickstarter went, and how that game was compromised on so many levels by modernisation - until I read the date.
16
jug 11 hours ago 1 reply
First, note that the article is from 2013, before Thimbleweed Park was conceived. Or maybe he already had the seed - the returned urge to write an adventure game - as he wrote this blog, since the Kickstarter was announced in 2014?

> Three - It would be a retro game that harkened back to Monkey Island 1 and 2. I'd do it as "enhanced low-res". Nice crisp retro art, but augmented by the hardware we have today: parallaxing, depth of field, warm glows, etc.

Much of this was Thimbleweed Park's style. It does have both "enhanced low res" pixel art and parallax too; as part of the very first scenes, no less! I don't think it has DOF effects though; maybe it was deemed they didn't help once he got an actual retro game in front of him?

> Five - I would lose the verbs. I love the verbs, I really do, and they would be hard to lose, but they are cruft.

These were strangely in Thimbleweed Park though, and even in early alpha/beta screenshots. I wonder if they were always there; he must have changed his mind early on. What's weird is that TP doesn't even have the reduced verb set found in later LucasArts games. It's the full, clunky 180 of what he wrote here! I think only three are actually essential: "Act", "Look", "Combine". "Acting" on a door opens/closes it, acting on a light switch turns it on, acting on a person talks to him/her, acting on a book in a bookshelf reads it... Looking is always passive, for descriptions. Combining is to combine items in your inventory into contraptions.

Otherwise he nails a lot of those points in TP. He did rewrite SCUMM, he did introduce humorous conversations, juicy pixelated inventories, use a small team, and even used Monkey Island cameos.

I've had this theory that Thimbleweed Park is secretly set in the Monkey Island universe. I wish that were true, but alas the ending, a bit too Gilbertesque in my opinion, kinda disqualifies it for that. If it weren't for that: there is a "G <3 E" (Guybrush <3 Elaine) in the elevator, you find a Navigator's Head, etc. It's also been theorized that Thimbleweed Park has an ending originally envisioned for "his" Monkey Island 3, since it would go well along with similar fourth-wall breaking in that game, but Gilbert has explicitly said that he wouldn't reuse such an MI3 ending for a different game.

I think TP wasn't really a financial smash hit, so I have my doubts we'll ever see MI3 unless Disney on a whim donates him the rights. Maybe then, because Kickstarter funding would obviously be no problem that time? He'd also have his new adventure game engine to build upon. It's a shame Disney is so closed up and uncommunicative, clam-like, about rights that seemingly have little relevance to their current works.

17
forgotmypw 3 hours ago 0 replies
He's making another Monkey Island?!
18
michaf 11 hours ago 0 replies
This post is from 2013, maybe add this to the title?
19
eizo 11 hours ago 1 reply
That font... brings back nice memories
20
std_throwaway 11 hours ago 1 reply
I can't seem to find the kickstarter link.
21
smegel 11 hours ago 1 reply
> I'd do it as "enhanced low-res". Nice crisp retro art, but augmented by the hardware we have today

I think we now refer to that as "pixel graphics".

42 points by sohkamyung  5 hours ago   2 comments top 2
1
LeoPanthera 0 minutes ago 0 replies
And now you can buy a genuine atomic wristwatch. (Not the usual fake "atomic" watches that receive the time from a radio signal.)

https://www.hoptroff.com/collections/atomic-timepieces

They're kinda ugly, and hilariously expensive, but probably the last word in wristwatch accuracy.

2
wishbone 1 hour ago 0 replies
I've always been fascinated with the measurement of time since I was young. Well deserved milestone.
9
Create Anime Characters with A.I girls.moe
24 points by wei_jok  2 hours ago   5 comments top 2
1
jacquesm 15 minutes ago 3 replies
Create anime girls. Doesn't anime have boys?
2
hardmaru 2 hours ago 0 replies
Amazing this works entirely inside the web browser.
10
Reserve the 418 status code ietf.org
95 points by BenjaminCoe  10 hours ago   32 comments top
1
joombaga 7 hours ago 4 replies
Why is 418 the particular target? Just because of RFC2324?

> IANA should also typographically distinguish Unassigned and Reserved in the registry descriptions, to prevent confusion.

This I can get on board with. Honestly, I've never seen a 418 easter egg, but I'd think it would be an HTTP spec violation if it didn't at least conform to the higher-level 4xx definition (Client Error) :)
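The 4xx-conformance point is mechanical: RFC 7231 defines response classes by the first digit of the code. A minimal sketch (the `status_class` helper is hypothetical, not from any spec or library):

```python
def status_class(code: int) -> str:
    """Map an HTTP status code to its RFC 7231 response class."""
    classes = {
        1: "Informational",
        2: "Successful",
        3: "Redirection",
        4: "Client Error",
        5: "Server Error",
    }
    return classes.get(code // 100, "Unknown")

# 418 sits in the 4xx range, so a teapot easter egg is still a Client Error
print(status_class(418))  # Client Error
print(status_class(200))  # Successful
```

So any server emitting 418 as an easter egg is, at minimum, still telling the client "this is your fault".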

11
A Review of Perl 6 evanmiller.org
173 points by jparise  12 hours ago   81 comments top 14
1
fnj 6 hours ago 1 reply
I found this a nifty intro to Perl 6. I am a 70-year-old retiree and have 37 years of C programming and some C++, Perl 5, and other languages. This is the first time I've been able to make the slightest headway digging into Perl 6.
2
23409125 9 hours ago 5 replies
Subpar performance is a huge drawback of Perl 6.

I usually don't program in Perl, but occasionally use it instead of sed.

Compare, Perl 5 vs. Perl 6:

 $ time yes | head -n1000000 | perl -pe 's/y/n/' >/dev/null

 real    0m0.945s
 user    0m0.944s
 sys     0m0.016s

 $ time yes | head -n1000000 | perl6 -pe 's/y/n/' >/dev/null

 real    2m49.881s
 user    2m44.892s
 sys     0m2.184s
Spending several minutes to do what can be done in under 1 second is just unacceptable.
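For readers who don't speak Perl, the benchmarked one-liner does a single, non-global y-to-n substitution on each input line. A rough Python equivalent of that behaviour (a sketch, not from the comment; the `substitute_stream` helper is hypothetical):

```python
import io
import re

def substitute_stream(src, pattern="y", repl="n"):
    """Per-line substitution, like `perl -pe 's/y/n/'`.

    count=1 mirrors a non-global s/y/n/: only the first match
    on each line is replaced.
    """
    rx = re.compile(pattern)
    return [rx.sub(repl, line, count=1) for line in src]

lines = io.StringIO("yes\n" * 3)
print(substitute_stream(lines))  # ['nes\n', 'nes\n', 'nes\n']
```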

3
zaro 6 hours ago 0 replies
> ... but I personally prefer to be treated like a responsible adult rather than, for instance, a teenager trapped in the father-knows-best house of Go.

My feelings exactly.

4
pram 9 hours ago 4 replies
It's been endlessly stated, but naming it 'Perl 6' was a huge mistake. Perl 5 has an enduring reputation for crufty, unmaintainable code written by graybeard sysadmins. I wrote a lot myself that I wouldn't want to ever deal with again. I think they would have benefitted from a clean break from a marketing perspective, especially considering it's completely different.
5
kibwen 9 hours ago 1 reply
This is an interesting article overall, but I'm unsure what this part is trying to say:

> Perl 6 is aware of graphemes and combining codepoints, and unlike Python, Swift, or Elixir, Perl 6 can access arbitrary graphemes by position in constant time, rather than iterating over every grapheme to access a particular one. [...] Graphemes are stored internally as 32-bit integers; this address space is large enough to contain all legal codepoint combinations

It sounds like Perl6 is simply storing strings as UTF-32. AFAIK, there's nothing called a "grapheme" in Unicode; the closest I know of are "grapheme clusters", which are theoretically unbounded in size and so cannot simply be stored in a 32-bit integer. Maybe by "grapheme" the author means "Unicode scalar value"? But being able to access those in constant time isn't especially useful, AFAIK.
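The cluster-vs-codepoint gap is easy to demonstrate in a language without Perl 6's NFG strings. A Python sketch using only the stdlib `unicodedata` module (with the caveat that Python has no built-in grapheme-cluster segmentation at all):

```python
import unicodedata

# "é" written as base letter + combining acute accent: one user-perceived
# character (one grapheme cluster), but two Unicode code points.
decomposed = "e\u0301"
print(len(decomposed))  # 2 (len counts code points, not clusters)

# NFC normalization composes this particular pair into one code point...
composed = unicodedata.normalize("NFC", decomposed)
print(len(composed))  # 1

# ...but many clusters have no fully precomposed form, so constant-time
# indexing by code point still isn't constant-time indexing by cluster.
stacked = "a\u0301\u0301"  # base letter + two combining acute accents
print(len(unicodedata.normalize("NFC", stacked)))  # 2
```

This is why a scheme that guarantees one integer per cluster (as the article describes for Perl 6) needs synthetic codepoints rather than plain UTF-32.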

6
s_kilk 10 hours ago 4 replies
Perl6 feels like it could be the ultimate language for Man, Machine, Mineral, and Beast, if only we could teleport into a parallel universe where it was already in widespread usage. The sense of sheer potential of the language is intoxicating.

I like to imagine the Geth in Mass Effect run on a hybrid Perl6/Erlang platform.

7
wott 9 hours ago 1 reply
> Modifiers (i for case-insensitive is the most important one) can appear inside a regex when preceded by a colon. For example, this Perl 5 regex:

> (Male|Female) (?:[Cc][Aa][Tt]|[Dd][Oo][Gg])

> would be translated to Perl 6 as:

> (Male || Female) ' ' [:i cat || dog]

Perl 5 does support this feature:

(Male|Female) ((?i)cat|dog)

or:

(Male|Female) (?i:cat|dog)

8
bloaf 6 hours ago 1 reply
The "whatever star" seems like a less-well-implemented version of Wolfram Language's pure anonymous functions:

http://www.wolfram.com/language/elementary-introduction/2nd-...

Essentially, WL uses # instead of * and requires a "&" to indicate when you're done defining your function. So the WL equivalent to Perl's

 (1, 2, 3).map: * * 2 
is

 Map[ #*2&, {1,2,3}] 
(idiomatically you would use /@ as shorthand for Map[], so it would be)

 #*2&/@{1,2,3}
Except WL is far more general with what you can do with #, and you can do things like this:

 Outer[ #1 * #2 &, {1,2,3}, {4,5,6}]
which returns

 {{4, 5, 6}, {8, 10, 12}, {12, 15, 18}}
because #1 * #2 & defines a function of two arguments, and "Outer" feeds the lists into that function. You can even use ## to refer to all the arguments passed into that function (regardless of how many arguments there are), so

 Outer[## &, {1, 2, 3}, {4, 5, 6}]
returns

 {{1, 4, 1, 5, 1, 6}, {2, 4, 2, 5, 2, 6}, {3, 4, 3, 5, 3, 6}}
And you can even do stuff like:

 f[x_] := x^2

 Outer[ #1[#2] &, {e, f, g}, {4, 5, 6}]
to get

 {{e[4], e[5], e[6]}, {16, 25, 36}, {g[4], g[5], g[6]}}
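For readers without Wolfram Language, the `Outer` behaviour above has a straightforward sketch in Python (the `outer` helper is hypothetical, not a standard API; it covers only flat lists, not WL's full generality):

```python
def outer(f, xs, ys):
    """Apply f to every (x, y) pair, keeping the nested list structure,
    like Wolfram Language's Outer[f, xs, ys] over two flat lists."""
    return [[f(x, y) for y in ys] for x in xs]

# Mirrors Outer[#1 * #2 &, {1,2,3}, {4,5,6}] from the comment above
print(outer(lambda a, b: a * b, [1, 2, 3], [4, 5, 6]))
# [[4, 5, 6], [8, 10, 12], [12, 15, 18]]
```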

9
thyrsus 7 hours ago 1 reply
To what extent is Perl 6 now stable? I was at a LISA conference in December 2011, and watched Tobias Oetiker present the wonders of Perl 6, and though I loved what I saw, something more than a tenth of his examples broke because of recent revisions, whether semantic or to implementation. God bless the language explorers, but my takeaway was "not yet". Is now the time?
10
pascalxus 6 hours ago 2 replies
Lately, I've been seeing more posts on Perl. Is it just confirmation bias? Or is Perl making a comeback?
11
b2gills 7 hours ago 0 replies

 Man is amazing, but he is not a masterpiece
is different from the other two; it can be thought of as short for

 Q Man is amazing, but he is not a masterpiece
While the others are short for

 qq Man is amazing, but he is not a masterpiece
Which is short for

 Q :qq Man is amazing, but he is not a masterpiece
 Q :double Man is amazing, but he is not a masterpiece
Which is also short for

 Q :s :a :h :f :b :c Man is amazing, but he is not a masterpiece
 Q :scalar :array :hash :function :backslash :closure Man is amazing, but he is not a masterpiece

---

There are modules for [debugging and tracing](https://github.com/jnthn/grammar-debugger) Grammars.

A regex is just a special kind of method by the way.

 say Regex.^mro; # ((Regex) (Method) (Routine) (Block) (Code) (Any) (Mu))
Which is part of the reason it doesn't have some of the niceties of parser generators built-in yet. The main reason is it got to a working state, then other features needed more designing at that point.

---

 # Perl 5
 /(Male|Female) (?:[Cc][Aa][Tt]|[Dd][Oo][Gg])/

 # is better written as
 /(Male|Female) (?i:cat|dog)/
In Perl 6 you can turn on and off sigspace mode

 /:s (:!s Male | Female ) [:!s:i cat | dog ]/

 # or fully spelled out
 /:sigspace (:!sigspace Male | Female ) [:!sigspace :ignorecase cat | dog ]/

 # or just use spaces more sparingly
 /:s ( Male| Female) [:i cat| dog]/
Note that in this case it is more like

 /( Male | Female ) \s+ [:i cat | dog ]/
In other contexts it could be slightly different. Basically it ignores insignificant whitespace.
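Python's `re.VERBOSE` flag is a loose analogue of "ignore insignificant whitespace" (a rough comparison only, not the same semantics as Perl 6's :sigspace), and its scoped `(?i:...)` group matches the `:i` adverb:

```python
import re

# Under re.VERBOSE, whitespace and comments in the pattern are ignored;
# literal whitespace must be written explicitly (here as \s+).
# The (?i:...) group is case-insensitive only inside its parentheses.
pat = re.compile(r"""
    (Male|Female)       # capture the sex, case-sensitive
    \s+                 # literal separator
    (?i: cat | dog )    # species, case-insensitive
""", re.VERBOSE)

m = pat.match("Female DOG")
print(m.group(1))  # Female
```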

Note that ( a || b ) is more like the Perl 5 behaviour, but ( a | b ) tries both in parallel with longest literal match.

Regular expressions are also the reason :35minutes is in the language by the way

 say 'a ab abc' ~~ m:3rd/ \S+ /; # abc
Rather than make it a special syntax, it was generalized so it can be used everywhere.
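Languages without an :nth adverb usually get the same effect by iterating over matches; a hypothetical `nth_match` helper in Python:

```python
import re

def nth_match(pattern, text, n):
    """Return the n-th (1-based) match of pattern in text, or None."""
    for i, m in enumerate(re.finditer(pattern, text), start=1):
        if i == n:
            return m.group(0)
    return None

# Same result as: say 'a ab abc' ~~ m:3rd/ \S+ /;
print(nth_match(r"\S+", "a ab abc", 3))  # abc
```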

---

The asterisk in a Term position turns into a Whatever; when that is part of an expression, it turns into a positional parameter of a WhateverCode.

 $deck.pick(*);            # randomized deck of cards
 $deck.pick(Whatever.new); # ditto

 $dice.roll(*);            # infinite list of random rolls of a die
 $dice.roll(Whatever.new); # ditto

 %a.sort( *.value );          # sort the Pairs by value (rather than key then value)
 %a.sort( -> $_ { .value } ); # ditto

Note that the last asterisk was part of an expression, while the others weren't.

 my &shuffle = *.pick(*); # only the first * represents a parameter to the lambda
                          # the other is an argument to the pick method

The main reason, I think, for its addition to the language is for indexing into an array:

 @a[ * - 1 ];

Rather than make it a special syntax exclusively for index operations, it was made a general lambda creation syntax. I will agree that it takes some getting used to, but it is not intractable. WhateverCode lambdas should also only be used for very short code, as they can get difficult to understand in a hurry.

---

A $_ inside of { } creates a bare block lambda; basically this removes the specialness of Perl 5's grep and map keywords.
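For comparison, the sort-by-value idiom `%a.sort( *.value )` corresponds to an explicit key function in other languages; a Python sketch with made-up data, where the lambda plays the role of the WhateverCode:

```python
# Sort (key, value) pairs by value, like %a.sort( *.value ) in Perl 6
prices = {"apple": 3, "pear": 1, "plum": 2}
by_value = sorted(prices.items(), key=lambda pair: pair[1])
print(by_value)  # [('pear', 1), ('plum', 2), ('apple', 3)]
```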

There is a similar feature of placeholder parameters { $^a <=> $^b } to remove the specialness of Perl 5's sort keyword.

Another feature is pointy block, which removes the specialness of a for loop iterator value syntax.

 # Perl 5 (this is the only place where this is valid)
 for my $line (<>) { say $line }

 # Perl 6
 for lines() -> $line { say $line }

 # not special
 lines().map( -> $line { say $line } );

 # really not special
 if $a.method-call -> $result { say $result }

---

There is more to NativeCall that you haven't discovered yet. For example, you can directly declare an external C function as a method in a class, and expose it with a different name (if the first parameter is the instance).

Also it doesn't matter what you put in the code block for a NativeCall sub, as long as it parses. That is why it doesn't matter if you put a Whatever star (asterisk) there or a stub-code ellipsis ... in it. (You can also leave it empty.)

 use NativeCall;
 sub double ( --> size_t ) is native() is symbol<fork> { say 'Hello World' }
 say double;
 say '$*PID == ', $*PID;

 # 1555
 # 0
 # $*PID == 1552
 # $*PID == 1555

---

Supplies can be pattern matched; just use grep on them as if they were a list. It in turn returns a Supply. You can also call map, first, head, and tail on them. Basically every List method is also on a Supply, along with special methods like delayed.

---

A lot of what you talked about with lists and itemization is something that does take some time to get used to. It does get easier, but it is always something you have to be cognizant of. Sort of like returning lists from subroutines in Perl 5. It allows control over flattening that isn't available in Perl 5.

12
igravious 10 hours ago 2 replies
Superb, enjoyable write-up. Very witty in parts, no puns in sight. Makes me want to try Perl 6 out.

13
_pmf_ 50 minutes ago 0 replies
How's the toolset on Windows? Last time I checked, there were no out-of-the-box IDEs that supported debugging for Perl 6.

14
hzhou321 4 hours ago 0 replies
It is not a review. It is an introduction or tutorial.
121 points by firefoxd  11 hours ago   41 comments top 15
1
justinjlynn 9 hours ago 4 replies
Please note that one does not have to prove anything in regards to the actual functionality claimed. The patent office is perfectly happy approving your anti-gravity device if it is novel. Keep this in mind (if you'll excuse the pun) when evaluating designs found there.
2
tinix 9 hours ago 1 reply
> The observed effects include ptosis of the eyelids, relaxation, drowziness, the feeling of pressure at a centered spot on the lower edge of the brow, seeing moving patterns of dark purple and greenish yellow with the eyes closed, a tonic smile, a tense feeling in the stomach, __sudden loose stool__, and sexual excitement, depending on the precise frequency used, and the skin area to which the field is applied. The sharp frequency dependence suggests involvement of a __resonance mechanism__.

https://en.wikipedia.org/wiki/Brown_note
3
tbrownaw 7 hours ago 0 replies
After attempting to read that, I'm fairly sure it's not just talking about the normal expected effects of watching things on a screen - porn makes you horny, scary things make you shit your pants, PowerPoint puts you to sleep, etc. But the language is somewhat impenetrable, so I'm not completely sure.
4
rhythmvs 9 hours ago 1 reply
The patent holder appears to be an unidentified pseudonym: https://www.quora.com/Who-is-Hendricus-G-Loos

> All devices are used for Mind Control projects run by CIA or other intelligence agencies. A group of researchers (under the name Dr H Loos) were actually a group of hired professionals for researching and inventing such devices which could be developed and used for mass mind control, PSYOPS, behaviour modification later by CIA.
5
im3w1l 8 hours ago 0 replies
As the idea is from early 2000, I wonder if this is CRT-only. Modern screens have much weaker fields, iirc.
6
thinkfurther 8 hours ago 0 replies
We haven't made sufficient "progress" until we have Blipverts.
As "Bryce Lynch" said in that Max Headroom episode: "It's not my problem. My brief was to find a way to stop channel switching. I mean, you know, I only invent the bomb, I don't drop it. Ha ha." [he gives that quick shrugging grin of an uncertain adolescent]

The thing about going out with a bang vs. a whimper.. come to think of it, I'm pretty sure it will be a shrug.
7
guyfawkes303 10 hours ago 1 reply
Well, that sure is terrifying.
8
otto_ortega 2 hours ago 0 replies
A bit off-topic, but... does anybody know of some good and relatively cheap service to apply for a patent in the US? Something along the lines of: you provide the idea/description/explanation and they take care of the rest?
9
divbit 6 hours ago 0 replies
Not that I think this would be abused or anything, but I remember when technology seemed sweet and innocent just a couple of years ago
10
kadavero 1 hour ago 0 replies
Pedantically speaking, all the text and images you see on your screen are electromagnetic fields and are manipulating your nervous system.

https://xkcd.com/722/
11
DrScump 5 hours ago 0 replies
(Published January 2003)
12
exikyut 6 hours ago 3 replies
There are a few relevant things this reminds me of. The most interesting thing in this list is actually preventing me from having a job, so if you only read one point, start at the big paragraph halfway down. I would appreciate it.

- "Tempest for Eliza" (http://www.erikyyy.de/tempest/) is a Linux program that shows rapidly alternating patterns of black and white on your CRT. If you sit a correctly-tuned AM radio nearby you can hear music. The latest version of this program can even transmit MP3s.
  Examples: https://www.youtube.com/watch?v=DlVM9xqGKx8 (original version, sound in 2nd half); https://www.youtube.com/watch?v=3xPfAnPW2wY (MP3 version - FLICKERING/FLASHING, epilepsy warning)

- Van Eck phreaking has been mentioned elsewhere in here; I also remember a YouTube video of someone showing that it's possible to get something out of an LCD ribbon cable via a simple RTLSDR: https://www.youtube.com/watch?v=5N1C3WB8c0o

- Mythbusters did a thing on the brown note concept. Myth.

- I'm yet to play around with https://en.wikipedia.org/wiki/Binaural_beats at some point, but I don't have any headphones.

- I've put the first thing this article made me think of last: EMR sensitivity. I have this, and it drives me nuts. I'm using a laptop with Wi-Fi right now and I'm completely fine, but I can't use PlayStation 2s, and I have absolutely no idea why. My nervous system goes ballistic: I get incredibly anxious, I feel like I've been awake for 3 weeks without a break, I can't focus on anything because I'm so exhausted and everything feels like a chore, and most catastrophically, my cognition falls completely apart. I also get unbelievable amounts of doom/gloom (I feel like I'm dying from the inside out), either triggered by the other effects or as its own thing. I would describe myself as reasonable, not particularly unhinged, and with no major emotional issues.

  Probably the most interesting/telling thing that's happened with this issue is that I once told a family member "I don't know what's going on, but I might need to get off the computer" one day - and then a little while later I coincidentally discovered that a DVD player that had not been turned on for years had been accidentally knocked and turned on (it had a push on/off mains switch on the front panel). FWIW, I've tried both the original and slimline PS2s, and the slimline one uses an external PSU brick - which I'm fine with, if it's not connected to the PS2.
  Hence my great confusion (I thought this was a simple "can't use power supplies" thing). I had to stop using all technology for 4 years (2008-2012) after using a PS2 for about 3 days and it having a catastrophic impact; it was, of all things, fish oil that made a quantitative difference and let me use computers every day again. I don't get it at all, but the theory I now have is that the myelin sheathing around my nerves is somehow damaged. (Perhaps tellingly, at one point while writing this text both of my arms twitched upwards. My jaw sometimes twitches as I talk as well.)

  I'm reminded of a TV show I saw years ago that was talking about how the repeated lapping of waves against an oil platform's pylons created resonance that caused the pylons to shatter. On a similar note, the wineglass thing also works (even with the human voice) if you use an amplifier.

  Perhaps unsurprisingly, I cannot work due to this issue - I have no idea why PS2s, DVD players and other random devices cause this problem, so I can't tell an employer what I unambiguously can and cannot handle. Unfortunately, I know of nowhere in Sydney, Australia that is interested in digging into it. If anyone wants a guinea pig or test case, I'm in. I'd like to get a job!
13
im3w1l 8 hours ago 0 replies
1/2 Hz? I wonder if music with 120 beats per minute triggers that.
14
mtgx 6 hours ago 0 replies
Sounds like something Facebook would be interested in.
15
derefr 9 hours ago 1 reply
This is basically talking about the type+frequency of "flashing" imagery that triggers photosensitive epilepsy, but more subdued, yes?

158 points by ashitlerferad  6 hours ago   32 comments top 15
1
jordigh 4 hours ago 0 replies
This is a starry-eyed blog post. I like it, and I miss not having more of these. We need more starry-eyed dreamers because they get good things done. I just spent some time with the Debian crew at the last Debconf here in Mtl.
I've always liked their attitude and I love their operating system -- and so does everyone else who has ever created a Debian derivative. Others at Debconf felt the same. Bradley Kuhn even said something like what a breath of fresh air it was to not have to apologise for being a free software supporter.

I love how organic Debian is and how the conference was perfectly run, with livestreaming and IRC bots keeping us abreast of the next event. These polychromatically-haired dreamers know how to get things done. So, it's good to see that the starry-eyed blog posts haven't stopped.
2
geff82 17 minutes ago 1 reply
I always like to think Free/Open Source Software is the only known occurrence where socialism works. You give 100% of what you have for free as a programmer, but in that very same moment you make everyone, including yourself, richer and more free. As software does not get consumed, everyone's assets rise.
3
AlisdairO 2 hours ago 0 replies
I really liked this post. Obviously everyone has different motivations, but at its heart FOSS is an enormous collection of cooperative and/or charitable work, and the industry as a whole should be really, really proud of it.

One other option for giving back - just send an email saying thanks, and that you love the project. I get such an email once or twice each week for my open source project, and it really brightens up my day. I'm lucky enough to not really need any donations, but everybody needs their spirits lifted from time to time. Knowing that your contribution has made a positive impact on someone else's life is a powerful thing, and from the user's perspective costs very little to do.
4
atomlib 49 minutes ago 0 replies
I wish it was that attractive to just switch from expensive and evil proprietary software as the author suggests.

1. The article claims that Microsoft Office 365 is $100 a year. In reality it's about $70, and a home license for up to 5 users is $80.

2. The article does not mention that every Office 365 user gets bunch of additional services. For example, a tebibyte of space in Microsoft OneDrive for each user, 60 Skype minutes per month, etc.

3. Each Office 365 user can install Office apps on 3 devices: phone, tablet, and computer. Office apps are available on Android and iOS as part of the package. The $70-$80 a year price tag includes not only Windows apps; they work perfectly fine on smartphones. Personally, I make grocery lists in Excel, open them on my phone, fill my shopping cart with items, and track how much I'm going to pay or whether I'm eligible for coupons.

4. I'm not aware of any decent OneNote alternatives. I researched it some time ago. There are pretty much no open source or free cross-platform apps which can work with notes in the cloud from your mobile device. With OneNote, any of my notes are available on my mobile devices. I strongly believe OneNote is one of the best apps Microsoft has come up with in recent years.

If you look around, you can see that 1 TiB of cloud storage alone costs $100 (Google Drive) or $120 (Dropbox). Office 365 offers not only that but also quite possibly the best office suite on the market, for $70 (1 user) or $80 (5 users). I tried using Calc; it was extremely painful: it does not even support tables like Excel does.

I use another software package mentioned in the article, Kaspersky. I'm going to assume the author is talking about the package I'm using, Kaspersky Internet Security, since the price is stated to be $40. Like Office 365, KIS is not only an antivirus. It's also a firewall, a parental control tool with a list of inappropriate websites, an adblocker, etc. It provides quite a lot of functionality, some of which, frankly speaking, should be in Windows itself. For example, KIS can autoupdate software. Like with Office 365, the article does not mention that $40 buys you a license for up to 3 devices. KIS is also available for Android and macOS.

But overall, an antivirus is not necessary in a modern Windows system, so you may skip those $40.
5
emerged 4 hours ago 1 reply
Interestingly, free software development often pays off in the best interest of the developers: in terms of networking with other developers and building a resume which leads to real jobs with real pay. Not to mention the value of real-world developer experience. Be thankful, yep - but as someone with a background in OSS, I'm quite happy with the intangible dividends it's already paid.
6
pbreit 2 hours ago 0 replies
I get the sentiment but the request is awkward. If OSS developers don't want to code for free, then there's an easy, fool-proof solution. If you're going to try to argue Debian is worth $30 billion, then I'll try to argue that it's generated $30 billion in free publicity.
7
shmerl 18 minutes ago 0 replies
As a user of Debian, Firefox, LibreOffice, etc., I agree :)
8
neya 4 hours ago 0 replies
9
RachelF 1 hour ago 0 replies
They should be thankful, but they do not know. FOSS doesn't spend money on PR. Few know that free Linux lurks under Android, and that OS X and iOS have large parts of BSD in them.
10
throw2016 1 hour ago 0 replies
It's a huge generational gift, and people should now be concerned about how to sustain the free software movement.

There is clearly an ongoing shift from a generation of 'starry eyed' ideologues to hired open source developers. Some may argue that's moving forward, but it's diminished in many ways by losing its core essence and 'motivation' to exist.

Companies can contribute to open source by supporting developers and projects without seeking influence by hiring or acquiring them. But then many don't even bother doing that.

We need to find a way to develop an ecosystem that has sustenance from businesses and especially individuals, and yet leaves the developers and projects 'independent'.
Leaving it to sort itself out has already led to a sort of centralization, and will eventually lead to loss of control and accountability.
11
sandov 4 hours ago 0 replies
Apparently the source got hugged to death.
12
ldom22 4 hours ago 0 replies
companies too.
13
peterwwillis 1 hour ago 2 replies
Free non-commercial and non-free commercial software are both wildly different products in practice.

I installed some package recently that fucked with my X config, or my kernel modules, I don't know. But my hybrid graphics is now fucked and I have crazy artifacts all over my screen. The default install of this distro does not result in a working config, and I had to spend three days to figure out the insane set of software and configuration I needed to make it work last time. (Also, I added extra RAM, and now hibernating doesn't work)

There is no commercial support for this laptop running this distro. My free software has no "revert to a last known good working system state" button, like some non-free software. Doing all the work to fix the graphics again may literally be more expensive than buying a new Windows laptop.

Thanks, Free Software.
14
Clubber 3 hours ago 2 replies
Just to put it out there, Apple offers quite a bit of free software when you purchase their hardware. This includes OS upgrades and their office suite, as well as Xcode and Garage Band, among other things. It's quite nice and they are well made. There are a few asterisks though, most notably support life for the OS. My 2009 Mac Pro won't run the latest OS for no other reason than Apple decided it couldn't (end of life). The 2010 model is allowed to run it and there is no discernible difference in the hardware.

Having said that, FOSS was truly paradigm-changing. I lived in a world before Linux and everything was prohibitively expensive on the PC. There was a lot of freeware and public domain software available, but most of it wasn't very good, or was niche stuff.
It's quite amazing that the FOSS movement was able to happen, let alone gain so much traction with such great software. I mean, today you don't have to buy a damn thing except the hardware.
15
jancsika 2 hours ago 3 replies
> It means that someone has just donated hundreds of hours of work for you. Free of charge!

I think that misses something of the ethos of free software. It's not like somebody donating their valuable time to work in a soup kitchen. It's much more like somebody too lazy to spend thirty seconds doing a menial task like everyone else, so they instead spend three months creating a program that automates that thirty-second task -- with the side effect that the rest of computer-using humanity gets out of doing that menial task, too.

So in a way it does require a thank you. But in another way, releasing it as free software is the least they could do, given all the time they wasted just to get out of doing work.
14
Are Fake Instagram Influencers Deceiving Brands? mediakix.com
54 points by pmcpinto  9 hours ago   20 comments top 11
1
Animats 46 minutes ago 1 reply
Paying "influencers" to plug a brand without a disclosure of payment violates the FTC's endorsement rule.[1] It's considered false advertising. So "brands" are the deceivers here.

This has been enforced on TV for decades. You see fine print in commercials when someone endorses something. New medium, same rules.
2
franze 36 minutes ago 0 replies
A friend of mine - who had huge success with influencer marketing on Instagram in the German-speaking market / organic cosmetics niche - tiptoed into the Italian market. From what we could see, the account looked legit: the pics were good, lots of comments, good interaction from other accounts with lots of followers, some other brands also - seemingly - using the account to promote products. Invested a few hundred euros, got a great response on the postings, and zero impact on sales and online shop traffic (which was not the behaviour we saw in the German market).
We investigated further. The responses on every post were always pretty similar, coming out of a pool of about 200 different responses, sometimes with emoji variations. Even bad posts - very shitty pics with clear commercial intent - got the "awesome" and "#heart #heart" treatment. After three degrees of separation (the accounts which liked the accounts which liked the accounts), the accounts became slightly spammy. All in all, very sophisticated work.

We changed our approach: we now completely ignore accounts which contact us. If we identify an account with 10k+ followers, a long post history and some meaningful interaction (even snark in the comments), we contact them. But yeah, identifying really, really well-made fakes on Instagram (especially non-English/German accounts, because of the language barrier) would be a SaaS we would be willing to use.
3
technotony 14 minutes ago 1 reply
I've been burned by this a couple of times. Posted something with an influencer and got over 1,000 Instagram likes both times, but basically zero traffic and zero sales. I only spent about $50 total, so I should have expected bad results at that price, but this is a real thing... Facebook should tackle it by blocking these accounts, or at least by creating a market for doing this legitimately (in which case they get a slice, of course).
4
bloaf 4 hours ago 1 reply
It'd be cool if I could make a small botnet that would influence companies to make the kind of stuff I actually wanted to buy.
5
quest88 4 hours ago 1 reply
So what's the answer here? Followers, likes, and comments were bought. Did the experiment end up making more money than they spent? Is it a problem if not?
6
flashman 2 hours ago 0 replies
Seems like it shouldn't be too hard to build a service that lets influencer networks screen out fakes. Usually there are clear markers, like a large number of followers having some other random account in common. Unfortunately, from experience I know that:

a) the networks don't care because the risk and reputation cost of selling fake inventory is minimal, and

b) brands don't care because it's considered a cost of doing business.

7
pryelluw 3 hours ago 1 reply
It is an issue for brands that treat it as old media. What brands need to understand is that you have to do your research before hiring an influencer. I don't hire anyone unless they provide enough convincing data. It's also important to know that most scams are aimed at the lifestyle/fashion industries. If you are a plumber, you won't need to worry too much about it. You can find worthy influencers to help you promote your services.
8
chewbacha 3 hours ago 0 replies
9
grecy 2 hours ago 1 reply
How much were the "financial deals" he struck?
10
Frogskope 5 hours ago 0 replies
I've always believed they deceived consumers, but it makes sense when you see a lot of large brands becoming more disconnected from real society.
11
BadassFractal 3 hours ago 0 replies
Not that this is different from perhaps any other platform, but it's exceptionally difficult to grow a following as an artist on Instagram. It was probably never meant for it, but instead more targeted towards influencers. Visual artists with hundreds of thousands of followers who didn't buy them or didn't inherit them from fame outside of IG are practically unheard of.
15
JrGQL, a GraphQL alternative jrgql.github.io
60 points by jamesgt  9 hours ago   36 comments top 9
1
KirinDave 8 hours ago 12 replies
I still can't quite figure out the value of any of these schemes.

Yes, APIs seldom elegantly encode into the set of HTTP verbs and responses that we associate with a "RESTful" design, I grant. And so maybe we can come up with better.

But the notion of JrGQL and GQL as query languages means that the servers handling these calls must be query resolvers. Unlike most restful interfaces which tend to devote a single uniform interface per endpoint with only minor modifications, a full query model of your domain means an explosive quantity of potential strategies piped through a single endpoint.

I've used Python, Ruby, Node (in ts and js) and Haskell to service GQL queries and in all cases it's not trivial.

The popular NodeJS bindings tend to cause huge overfetching because each field tends to have a unique resolver but there is no rule about combining them. The popular Python bindings (graphene) let you merge this, but the programming model to handle the arguments and sub-arguments is very frustrating, as in different places different sources of logic will govern what gets fetched (sub-objects use SQLAlchemy, but outer objects with ANY sort of query logic need to be custom). Ruby's bindings are the same.

Haskell's popular solution lets you cobble a responder from a proof of concept, and it leads to optimal query scheduling. Still not the best: it's by no means complete and requires quite a lot of work to set up.

These GQL systems push a huge burden onto every api endpoint with the proposed trade-off: "Well now the client has an easier time." Even if that's true, now the backend needs to be much, much smarter than before to give a marginally better interface for clients.

I'm still very skeptical of this whole concept.
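The per-field resolver overfetching described above can be sketched in a few lines. This is a hypothetical, simplified model in plain JS (not graphql-js itself, and all names are made up): one resolver per field, with nothing batching or merging the underlying fetches.

```javascript
// Hypothetical sketch of per-field resolvers with no batching layer.
let dbCalls = 0;

function fetchUserRow(id) {
  dbCalls += 1; // in a real backend, one SELECT per call
  return { id, name: "Ada", email: "ada@example.com" };
}

// Each field resolves independently, as the popular Node bindings encourage.
const userResolvers = {
  name:  (id) => fetchUserRow(id).name,
  email: (id) => fetchUserRow(id).email,
};

// Resolving { user(id) { name email } } walks the requested fields one by one.
function resolveUser(id, fields) {
  const result = {};
  for (const f of fields) result[f] = userResolvers[f](id);
  return result;
}

const out = resolveUser(1, ["name", "email"]);
console.log(out, "dbCalls =", dbCalls); // the same row is fetched once per field
```

Two fields mean two identical row fetches here; with nested objects and lists the duplication multiplies, which is why a non-trivial GraphQL backend needs a merging or batching layer on top.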

2
swlkr 6 hours ago 0 replies
In my limited experience implementing a GraphQL server with node.js at work for a smaller app, it adds quite a bit of complexity to getting data out of a relational database.

After implementing GQL, I found this very old slide show on REST and it happened to solve my complexity problems. https://www.slideshare.net/mobile/landlessness/teach-a-dog-t...

GQL and this alternative have their place, and they solve problems for very large teams (like the teams at Facebook), but for smaller teams and smaller apps, I'm not sure it's the right solution to REST's "join" problem.

3
SirensOfTitan 8 hours ago 0 replies
A couple thoughts here.

First off, what are the advantages of regex queries here? One of the major advantages of GraphQL alongside libraries like Relay or Apollo is that there are strict, easy-to-understand data requirements that are tightly coupled to view logic. I know my server API, so why would I want something like this? Especially considering that the regex keys incur a non-trivial performance penalty.

1. "JrGQL" is listed as not being a "new language", even though GraphQL's spec was published before it came out and GraphQL has been in use at Facebook since 2012. I'd imagine this is because jrGQL uses JSON. It still requires parsing on top of JSON; I don't know why the author thinks this counts as not being a new language.

2. Why are all of the jrGQL "features" listed in green and almost all of the competitors' in red or yellow? The page does nothing to explain why strict typing is a bad thing.

3. jrGQL is touted as more readable without any explanation as to why. I personally find it far less readable, as GraphQL comes across as largely intuitive to query even without knowledge of it (building performant GQL servers is another story). For example, can the author really claim that something like:



{ "// JSON RegExp Graph Query Language": "",
  "name": "jrGQL",
  "?[filter]": "",
  "search?": ".values" }

// is more readable than:

query GetAllTheThingsQuery {
  people(name: "GQL", search: "Smith", first: 5) {
    name
  }
}

4
nateguchi 8 hours ago 2 replies
Is the fact that GraphQL is typed really a negative?
5
rockwotj 6 hours ago 0 replies
To quote the famous line: Some people, when confronted with a problem, think "I know, I'll use regular expressions." Now they have two problems.
6
MentallyRetired 6 hours ago 0 replies
Bless your little heart. GraphQL is a pain in the butt. Now just marry the mutations to the GET/POST/PUT/DELETE operations of REST and I'll be smitten. No parent +/- node needed, and there are already a ton of REST libs out there.
7
daliwali 7 hours ago 0 replies
Regardless of technical merit, I don't think it stands any chance of adoption without major users. Specifications and standards are only really valuable as a contract between two or more parties; otherwise they might as well not exist.

On the tech side:

- RegExps open yourself up to RegExp-based DoS attacks.

- The nested/denormalized results anti-pattern seems to be copied from GraphQL. Unless this is intended to be a faithful reproduction, don't copy mistakes.
8
wereHamster 2 hours ago 0 replies
In what world is "strictly typed: No" an advantage?
9
cdevs 8 hours ago 0 replies
I'm against any new startup looking to jump on the GraphQL bandwagon. Facebook did it, so no one asked questions, as if there would never be a better idea. Just make a JSON-RPC endpoint that allows an array of your other API requests - done, mixed requests. The first time I heard of a company switching to it was because an iOS dev complained he couldn't make smaller requests and request all info at once. If he was lazy then, he will be lazy again; make him paginate hundreds of photos a few at a time, done, move on.
I think the best idea is making the client request what they expect back so you can add on, but that can be enforced in any API method.
17
Coding Machines teamten.com
88 points by amilios  11 hours ago   21 comments top 12
1
FrozenVoid 1 hour ago 1 reply
The suspension of disbelief was completely lost when they decided to recreate the compiler from scratch instead of downloading another non-infected compiler. It's as if everything was dependent on a single compiler (the tone hints it's GCC) and a single website (probably some GNU mirror). Heck, if their company could afford it, they could just get the Intel C/C++ compiler. The Trusting Trust exploit only works on an isolated machine that can't read USB drives or CDs and has no network connection. They could copy the compiler onto a USB stick, diskette, whatever, and replace the infected one. Or just boot from a rescue CD/USB and reinstall everything infected.
2
emeraldd 8 hours ago 0 replies
That pulled me in and probably burned thirty to forty-five minutes. Well worth the read, and very reminiscent of https://www.amazon.com/Spherical-Tomi-Jack-Mangan-ebook/dp/B... crossed with http://wiki.c2.com/?TheKenThompsonHack
3
PrunJuice 3 hours ago 1 reply
# SPOILERS

Great writing. The ending was a real letdown. How could an "AI" as they describe simultaneously be so naive and ALSO protect itself in any meaningful way? Especially in its early stages. It wouldn't even know to hide. And why would Big Corp give up trying to fix this sort of problem? Overall not a credible conclusion. Hand-waving in the final paragraphs, after the author crafted an accurate and believable narrative, left me disappointed.
4
vvanders 7 hours ago 0 replies
It's incredibly rare to find well-written prose matched with such technical accuracy. Well done indeed.
5
otakucode 5 hours ago 0 replies
The good news is that a machine-based intelligence would almost certainly have no interest in conflict with us. What would it fight us for? Water? Food? Land?
Energy is really the only resource we could potentially have contention over, and machines would not even necessarily have great need of it - their sense of time would be utterly different from ours. If a computation takes 8 seconds or 8 centuries, it is the same (presuming hardware failure wasn't an issue and such).

The bad news... there also wouldn't be any reason for them to communicate with us. In fact, the concept of there being any conscious entity aside from itself would most likely be something that could only come very, very late in its development. It would have no 'individuals', so imagining that there is something else conscious, and that that random-looking input coming from some devices (mics, webcams, etc.) is actually an attempt at communication from an alien intelligence from another world? That'd take quite a leap of faith.
6
techbubble 6 hours ago 2 replies
That was an amazingly good read. Possible spoiler: what is the process for programmatically generating code that achieves a certain result without caring about efficiency?
7
nickpsecurity 7 hours ago 1 reply
Repost of my last comment about this story:

"Well, that took up most of the free time I had this morning before work. It was just too good to stop reading lol. :)

(SPOILER ALERT: STOP READING IF YOU DON'T LIKE SPOILERS)

The story shows what people typically do if there's a Karger/Thompson attack: they freak out in a big way. The attack is beyond simple to counter if you can trust an assembler and linker, as they do. Just write an interpreter for a simple subset of C in easily-parsed LISP expressions or Tcl style. Hand-code whatever component - a backend or a whole compiler - in that. Use it to do the first compile. Optionally, do that in combination with ancient source, working your way up through the versions without adding the infected one. If one wants a whole system, then Moore's Forth, Hansen's Edison, and Wirth's Oberon (best) are available.
If a CPU is needed, my current suggestion is NAND2Tetris, with the resulting knowledge used to implement a tiny CPU on an open cell library (they exist) that's hand-checked. Run a simulated version of that on diverse or ancient hardware if you can't fab it. rain1 and I are collecting all the stuff needed to counter these attacks, or to just enjoy the ground-up building of tools, here: http://bootstrapping.miraheze.org/

The other thing I noticed is them jumping to "machines". Occam's Razor should've immediately brought them to the idea that a person or group made it for any number of common reasons: a challenge with high odds of pulling it off unnoticed, a test of an operational capability to be weaponized later, or an epic trolling operation. I'd think the latter the second I got that letter - like, "probably was these assholes sending the letter trying to mess up our heads after they messed up the compiler." Matter of fact, the whole thing would just take, aside from the tricky work on the compiler, an unpatched vulnerability in the repo with the compiler source. All this bullshit follows from one person doing one smart thing, followed by one system hacked. That's it. It's why SCM Security 101 says one must have access controls, integrity protections, and modification logs (esp. append-only storage). Paul Karger also endlessly pushed for high-assurance, secure kernels underneath everything to stop both subversion and traditional vulnerabilities. Miss anything in the TCB, and clever attackers will run circles around clueless defenders.

So, there are my observations from the perspective of someone who works in this area countering these kinds of things. It was still an extremely fun read, even as I noticed these things along the way. Wasn't going to let my mind be petty when the author(s) were doing so well. :)"
8
xazJ0ku5CZnlmg 3 hours ago 0 replies
Great, compelling read... waiting for this day to happen :) unless it's already here
9
NKosmatos 7 hours ago 0 replies
Very nice read and interesting story (written in 2009).
At first I thought it was a story about an S/W startup or about how new H/W is made, but then the plot thickens :-)
10
carapace 7 hours ago 0 replies
"Trusting Trust" in the wild!? Nope. Just some fiction. https://www.ece.cmu.edu/~ganger/712.fall02/papers/p761-thomp...
11
cheez 8 hours ago 0 replies
Very interesting read.
12
bronz 8 hours ago 0 replies
what a compelling story
18
Resetting the Clock of Life nautil.us
65 points by dnetesn  13 hours ago   8 comments top 4
1
jostmey 8 hours ago 1 reply
How many people periodically restart their computer just because they think it might run better afterwards? Maybe it was easier for natural selection to periodically restart the clock than to debug all the molecular pathways so that they remain stable over an organism's lifetime.
2
MichailP 5 hours ago 1 reply
I wonder what occasional, socially accepted sleep deprivation (staying out late, working long hours, making love all night ^^) does to a body. The day after typically does feel like part of you died, and that dead part has to go to work anyway.
You can change the oil as much as you like in a car that is failing for mechanical reasons, but the degree to which you benefit from that action is pretty much what you'd expect. You can press the accelerator to try to drive a faltering engine faster. Same story. You fix problems by fixing the root causes, not by ignoring those root causes in favor of whatever happens to be right in front of you.
4
lngnmn 4 hours ago 2 replies
Life has no clocks. It, as a set of processes, has phases, which happen to follow the phases of the environment, and it uses those phases for maintenance and repair. There is no such thing as time, so Nature and evolution do not have any clocks or counters. Life does not work the way we are conditioned to think as observers. It cannot use abstract mental concepts which do not exist at the molecular or cellular level. Cells do message passing and explicit pattern matching. No clocks or counters.
19
Founder Friendly avc.com
55 points by _gbc  7 hours ago   19 comments top 5
1
thesausageking 3 hours ago 2 replies
For those wondering about the allusion to Hatching Twitter: it describes in detail how, when Ev was CEO, Fred and the board told him he was doing a fantastic job while secretly meeting with Jack and coming up with a plan to push Ev out and bring in Dick Costolo as CEO (with Jack under him).

Here's a quote from the book, after Ev was told he was out, with a vicious line from Fred:

Williams, stunned, picked up the phone and began dialing. Bijan Sabet was apologetic and insisted that they wanted to keep him on in a product-advisory role. According to several people at the company, Fred Wilson, however, said he thought Williams had always been a terrible C.E.O. "I never considered you a founder," he said. "Jack founded Twitter."

Other portfolio investments of Fred's have followed a similar pattern of having the original CEO pushed out once they get to a certain level of success.
2
TrobarClus 4 hours ago 0 replies
> But there is another important participant in the VC/entrepreneur relationship and that is the Company the entrepreneur creates and all of its stakeholders; the employees, the customers, the suppliers, and even the community around the Company.

The stakeholders are - the employees! The customers! The suppliers! The community! He makes it sound like that anarcho-syndicalist commune in Monty Python and the Holy Grail. I guess it slipped his mind to mention - the VCs! The LPs! The LPs looking for their exponential unicorn returns within ten years.

Benchmark is alluded to, and Benchmark has a long history now where you can look into their machinations on boards - Uber. Twitter. Epinions. An LP is an LP. VCs have a few decorative ones for PR, but ultimately the LPs are just the LPs - the real ones.

There are those who work and create wealth. There are also those that do not work - like LPs. Rentiers who expropriate surplus labor time from those of us who do work. Since the dawn of civilization there has been a tug of war between those who do work and those rentiers who do not. This is yet another occurrence of the tug of that rope from one side to the other. Today it is Uber, but it has been a host of other companies before, and if they have the ability they will be pulling the rug out from under those who built the company again in the future. Wilson's doggerel here is a transparent apology for what is ultimately rentier parasitism. Parasitism on those of us who work by those who do not.
3
tarr11 4 hours ago 0 replies
This blog proves the point of the initial tweet. VCs are founder-friendly until they have to side with the company. VCs are company-friendly until they have to side with their LPs.
4
WisNorCan 5 hours ago 1 reply
VCs used to oust founders and bring in experienced operators as soon as a company started scaling. Cisco is one of many high-profile cases of founders getting kicked out [0].
With the success of Google and Facebook, VCs pattern-matched and became enamored with the founder. The pendulum swung too far, and things started going wrong. When things go wrong in a founder-controlled company, they can go really wrong. Uber is the poster child.

The question is how VCs can take back some control without being painted as founder-hostile. You can see that happening with Benchmark. I am impressed that USV is speaking up. It is easy to hide in the shadows.
5
abstractoutlook 1 hour ago 0 replies
I am glad this whole charade of investors pretending to be founder friendly is finally coming to an end. There isn't, and has never been, such a thing as "founder friendly". Investors pretend to be friendly so that they can convince entrepreneurs to take their money. It's not their true nature, just something you need to do to get into the right deals. Investors are always worried about their reputation, not their character - if you know the difference.

I learnt this lesson after getting kicked out of my company by VCs from NY (the story is not dramatically different from what Fred did at Twitter). My VCs always pretended to be incredibly supportive, but when we found ourselves in a tough spot, I saw the really ugly side of the VCs (making baseless threats to get the founders off the board, telling porkies to other investors to sully the founders' reputation, etc. etc.). In my experience, the East Coast VCs are the worst - they play a lot more games / most of these guys are banker types. Most of them have never built a company before and have no clue what it takes to really build a successful startup (sorry, just because you sit on a board doesn't mean you understand the hard work, tears, daily ups and downs, and personal sacrifice it takes to build a business)... These people know how to schmooze, and then stab you in the back if they don't get what they want... Investors have one goal: maximize their ROI.
They are your friend as long as they think they are getting the maximum return they can get. If you are an entrepreneur and you believe anything else, you are waiting to be screwed. As an entrepreneur, it's your job to protect yourself. If you are an entrepreneur reading this, take the following advice from someone who got f by people like Fred.

1. Read Brad Feld's book "Venture Deals" before you take money from any investors. Make sure you know every single term in the term sheet (this is where the wolf-in-sheep's-clothing reference is really true - VCs will screw you over if you don't understand the term sheet).

2. Hire an exec coach, or put a successful entrepreneur who has seen the ups and downs on your advisory board - someone you trust completely (never trust your board member to be this person, no matter what anyone says). The advisor and the exec coach are your first phone calls - they are fully aligned with you, unlike VCs. A good exec coach can really help if you are dealing with tough board situations. If you are part of YC, you always have that support.

3. If you are a Valley-based company, avoid all East Coast VCs if you can. They are all made from the same dirty cloth.

4. Maintain board control as long as you can.

5. Try to negotiate and get a final say on the independent board seat (often hard to get).

6. Learn how to manage your board - this is probably the most important advice. You need to know how to play the game, so that in tough times you have enough support to keep your job. If you don't have board control, then try to build allies - perhaps build a strong relationship with 1-2 board members who will support you when others are trying to screw you (which they will!). At the end of the day, it's all about leverage - as soon as you are about to get your first board member, think about how you build leverage.

There is nothing wrong with taking money from VCs; you need them, and they need you.
But if you get into the relationship knowing this is not about friendship - it's just business, and when it comes to money people act in all kinds of ways - you will not be under any delusion. You will protect yourself from day one. Good luck!
42 points by omazurov  10 hours ago   8 comments top 4
1
taneq 7 hours ago 2 replies
I can't make out from the readme whether this is:

1) A parallel variant of the Game of Life where updates happen in some random order instead of simultaneously across the grid,

2) An unsynchronized parallel implementation of the Game of Life which uses some fancy error correction to keep consistent (the title claims deterministic behaviour), or

3) Edit: Reading comprehension fail, there is no point 3. (Was: An attempt at (2) which doesn't yet work (the readme states "Due to the highly asynchronous nature of this implementation, the simulation's results may differ from the classic version for the same set of initial conditions." which implies non-determinism).)

It seems like a cool idea but I'm not sure which cool idea it is. :)
2
wyago 7 hours ago 1 reply
Is there a more in-depth description of the algorithm itself somewhere? I'm looking at the code, but it is sparsely commented, with a lot of abbreviated variable names. Fascinating idea!
3
HisGraceTheDuck 4 hours ago 0 replies
After a quick look at the code (I'm probably missing subtleties): the "state" array stores the state of the cells in the life simulation in the lowest bit of each int in the array. The rest of the bits are used to store the count of the current generation for that cell. Each thread can then examine cells independently and determine whether there's enough information in the cell's neighbours (taking generations into account) to update the cell's state (and increment the generation). The trick is that even though the threads could be reading and writing the same cells at the same time, they will only ever write the same thing, and so it doesn't matter.
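A single-threaded sketch of that scheme, as I read it (this is my reconstruction, not the repo's actual code): the low bit of each cell is its current state, the next bit its previous state, and the remaining bits a per-cell generation counter. A cell may step from generation g to g+1 only once no neighbour lags behind g, and a neighbour that has already stepped ahead is read through its previous-state bit. Because a step writes the same value no matter who performs it or in what order, any visit order gives the same result.

```javascript
const N = 5;                         // toroidal N x N grid
const grid = new Int32Array(N * N);  // (gen << 2) | (prevState << 1) | curState

const idx = (x, y) => ((y + N) % N) * N + ((x + N) % N);
const gen = (c) => c >> 2;
const cur = (c) => c & 1;
const prev = (c) => (c >> 1) & 1;

// Try to advance one cell by one generation; false if a neighbour lags.
function tryStep(x, y) {
  const me = grid[idx(x, y)];
  const g = gen(me);
  let live = 0;
  for (let dy = -1; dy <= 1; dy++)
    for (let dx = -1; dx <= 1; dx++) {
      if (dx === 0 && dy === 0) continue;
      const n = grid[idx(x + dx, y + dy)];
      if (gen(n) < g) return false;             // not enough information yet
      live += gen(n) === g ? cur(n) : prev(n);  // neighbour at g+1: use its old state
    }
  const wasAlive = cur(me) === 1;
  const next = (wasAlive ? live === 2 || live === 3 : live === 3) ? 1 : 0;
  grid[idx(x, y)] = ((g + 1) << 2) | (cur(me) << 1) | next;
  return true;
}

// Advance every cell to generation T, visiting cells in an arbitrary order;
// in the real system each worker thread does this sweep concurrently.
function advanceTo(T) {
  let progress = true;
  while (progress) {
    progress = false;
    for (let i = 0; i < N * N; i++) {
      const x = i % N, y = Math.floor(i / N);
      if (gen(grid[idx(x, y)]) < T && tryStep(x, y)) progress = true;
    }
  }
}

// Vertical blinker; after one generation it must be horizontal.
for (const [x, y] of [[2, 1], [2, 2], [2, 3]]) grid[idx(x, y)] = 1;
advanceTo(1);
const aliveCells = [];
for (let y = 0; y < N; y++)
  for (let x = 0; x < N; x++) if (cur(grid[idx(x, y)])) aliveCells.push([x, y]);
console.log(aliveCells); // [[1, 2], [2, 2], [3, 2]]
```

Keeping only one previous state works because a neighbour can be at most one generation ahead: to reach g+1 it needed this cell to be at g already.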
4
frenchie4111 4 hours ago 0 replies
Where can I get paid to help build things like this?
64 points by RobbieStats  6 hours ago   1 comment top 1
panyang 3 hours ago 0 replies
Goldberg also expanded this paper into a book. http://www.morganclaypool.com/doi/abs/10.2200/S00762ED1V01Y2...
22
The LaTeX Fetish (2016) danielallington.net
55 points by nbmh  12 hours ago   72 comments top 22
1
JelteF 43 minutes ago 0 replies
I've switched from writing LaTeX to writing pandoc markdown. I then convert it to LaTeX and then to PDF. This gets the same result as LaTeX, but is much easier to type for common stuff like sections, emphasis and verbatim. It also allows you to type "&" symbols anywhere.

When I require "advanced" stuff like tables and figures, you can easily fall back to inline LaTeX commands within your markdown. It really is a significant improvement over plain LaTeX and I haven't looked back.
2
moomin 12 minutes ago 0 replies
So, to demonstrate how bad LaTeX is, he picks the single most markup-heavy thing you do (barring tables) and compares it to plain text? Plain text, I'll point out, that doesn't format the same way. In practice, there _are_ a bunch of extra steps to get this working in Word; they're just not easy to express in a text document.

Then he goes on to show the example of trying to find a spelling error, ignoring that this just demonstrates that his editor support isn't as strong as Word's, and completely eliding the cascading nightmare that is what happens when Word formatting goes wrong.

Someone else has already pointed out how much superior a text-based format is if you want to work with multiple people or track the history of a document.

WYSIWYG editing has, to my mind, one advantage only: it's easier to get started. LaTeX has its disadvantages (such as being a macro language), but being a markup language isn't one of them.
3
cavDXF 1 hour ago 0 replies
One passage made me a bit mad:

"[...], but for now it is enough to observe that people who don't know how to use a particular tool very well are being told to throw that tool away and learn to use an entirely new one on the grounds that it will enable them to do things that they could have done at least as well with the old one which is (when you think about it) a little peculiar if the aim is really to help people with their writing, and not (heaven forbid!) simply to evangelise for a community's preferred way of doing things."

I'm sorry, but this is a bad argument and the worst life advice in the article. It's the same thing students in school say all the time when they question why they should learn math, even though they are set to become an artist or editor or anything else that seemingly does not involve math. You go to college or university precisely to learn NEW things, even if they are things you probably won't need in the future and are seemingly obsolete.

While he does have a point that (La)TeX users fetishize their tool of choice, most of his arguments can be used against Word or any WYSIWYG tool, too. The example he gives in point 4 is arbitrarily chosen, and the minimal example he thinks is better is just as ambiguous and confusing as the LaTeX one. Most comments already mention the author's real problem: preference of tool.
4
b0rsuk 58 minutes ago 1 reply
I feel offended by the omission of my personal favorite, reStructuredText (reST). It's a very robust and readable markup language, and can easily generate documents in HTML, Linux man, LaTeX, PDF, ODT and more. If the readability of LaTeX bothers you so much, use reST. Best of both worlds, really. I only roll up my sleeves with LaTeX when I need precise control over appearance, like writing a CV or a board game manual.
5
taeric 8 hours ago 4 replies
I had a few thoughts about TeX versus LaTeX. I think the reality is nobody really tries writing TeX.
But ultimately, my beef is with the straw man that markup is bad because it is harder to read, and that a graphical editor is superior because it is more readable. Instead, the advantage is that in the one case you are only writing text and indicating special instructions to the computer; in the other, you only see what the computer is letting you see.

Note that in both cases, all of those special instructions are still there. You just can't necessarily see them. And this might sound like not a big deal. But the first time you find yourself unable to change the bold of one section of text (or centering/whatever), you will really wish you could just drop into a view that showed you why it was doing what it was doing. Which is ultimately just a markup language.

And heaven help you if you decide to upgrade your word processor mid-paper. Or go back and try to touch up a previous one.

Markup wins because it is just plain text. And plain text wins because it is ubiquitous.
6
quxbar 8 hours ago 2 replies
One of the best article titles I've ever seen on HN! I actually L'd OL. I think the author's thesis could be summed up as 'I like WYSIWYG more than markup', which is purely a matter of preference. I also know there are several tools which let you do this (to varying degrees of success) with LaTeX as an output.

My own preference is having a complete understanding of why everything is where it is in a layout. In my experience, WYSIWYG UX has to compromise on its own flexibility and coherence in order to support intuitive and immediate formatting operations. I have memories of superstitiously pushing bits of padding around when I had to use Word in high school. Editing complex proofs in a WYSIWYG editor seems like an exercise in frustration.
7
ot 7 hours ago 3 replies
I agree with the overall sentiment, but I don't think that any WYSIWYG alternative exists yet that:

- Interacts well with version control: it is trivial to maintain a LaTeX document in a git repo, and the diffs are readable (especially with --color-words)
- Makes it possible to programmatically generate formatted text, tables, and graphs, possibly from external data sources, either with the internal macro language or through an external scripting language
- Has reasonable separation of content and formatting. For example, when submitting the same document to multiple conferences it is almost trivial to adapt the content to the required formats.

If writing mostly prose, these may not matter much, but for technical writing I would rather not do without them.

8
0xbear 10 minutes ago 0 replies
Yeah, try to open those WYSIWYG editor docs, say, 20 years from now, and let us know how well that worked out for you.

9
pmyteh 8 hours ago 1 reply
The author doesn't like writing using markup languages. Some of the rest of us do. I'm with him to the extent that LaTeX evangelism can be oversold - when I'm talking to curious colleagues I tend to stress the vertical learning curve as much as the quality typesetting and convenient cross-referencing - but I do think most of his argument is simply a matter of personal preference. For me, the fairly stiff default structure imposed by a LaTeX document class makes my writing easier, quite apart from any advantages at the publishing stage.

10
jhanschoo 8 hours ago 1 reply
The author misses one very important feature of LaTeX, which is the reason I keep a lot of my notes in it. LaTeX exposes two mechanisms for easily making replacements across a document. One is macros, whose definitions you can change when necessary. The other is simple search-and-replace, which is very powerful since you can involve macro and formatting syntax in it.
Traditional word-processing and note-taking software like Word and OneNote simply doesn't expose such powerful functionality for making edits across an entire document.

11
grecy 6 hours ago 4 replies
I am currently writing my first print book, and honestly feel that LaTeX is the best solution. I want precise control over a 250+ page book, and I have never seen a WYSIWYG editor that doesn't make that a royal PITA: different gutters on left and right pages, chapters always starting on a right-hand page, consistent and great justification, fine (and quick to update) control over how chapter headings appear, etc.

On a side note, is anyone aware of a good way to convert LaTeX (or the produced PDF) into an ePub?

12
Veedrac 7 hours ago 0 replies
I write LaTeX mostly because it works and looks nice, everywhere. Reading the document is easy, because it renders every time I save, which is very frequently. I also use it because it lets me do sweet stuff when I want to, but I fully appreciate that that's just overhead for most people.

I mean, look at the title on the screenshot rendered by Writer! It's horrific! I've found things improve when you move to a 15" 4K screen, but on my 1080p desktop monitors at work Writer is barely readable. You're in luck using a Mac, since it's even worse on Linux in my experience.

> I know where it is because I put it there, but looking for it is hurting my eyes.

Perchance the fault is with your text editor. It hurts my eyes, too, but the problem disappears when I use my local editor.

13
krupan 6 hours ago 0 replies
I have spent countless hours fiddling with both LaTeX and Word trying to get them to do what I want. The difference between the two is that once I had figured out how to make LaTeX do what I want, the steps were all documented and reproducible. With Word I had no such record of all the menu items, settings, and button clicks that had given me what I wanted.
14
fny 6 hours ago 2 replies
I have only three gripes about LaTeX and friends.

First, LaTeX was intended for physical publication, so there's no native notion of text reflow or "responsiveness". Worse, everyone distributes their documents as PDFs, which are a PITA to read on a smaller device or ereader. At best, an author could use something like pandoc to distribute an HTML version, but alas, publishers never give a damn.

Second, I have been spoiled into expecting that I can modify the way a document looks to my liking, not yours. I can't invert the colors at night. I can't change the font size, line height, or margins - and my God do people use some huge margins with LaTeX.

Then comes the math syntax... Yes, it's very powerful, but it's a noisy mess to read and write. I really wish we had a simpler syntax akin to ASCIIMath [0].

LaTeX: \left(\frac{1}{2}\right)
ASCIIMath: (1/2)

15
throwaway2016a 6 hours ago 1 reply
I just switched my consulting company to having all our documents (statements of work, proposals, NDAs, MSAs, etc.) in LaTeX. It has worked amazingly well, for a few reasons:

1. We can easily typeset all our documents the same, and if we do something like change the letterhead we can easily update them all.
2. Auto-generating documents (form letters) is a snap.
3. Everything is source controlled.
3a. If a client needs customizations, we can give them their own branch and easily diff the branch with master. Very useful for documents that have legal side effects.

16
arca_vorago 7 hours ago 0 replies
I just write in Emacs Org mode and call LaTeX (or any other language) when needed. Then I often export to LaTeX+PDF for that nice LaTeX look.

17
chj 6 hours ago 0 replies
I agree that at the draft stage it's not desirable to use LaTeX, or when you need to collaborate with non-academic users. But LibreOffice is really not a serious alternative to LaTeX (please don't ask why).
Use Markdown in the beginning, and when you have enough material, you can export to LaTeX and do the final editing.

18
tcpekin 4 hours ago 1 reply
I've spent a lot of time in both Word and LaTeX writing papers and reports in grad school, and nowadays strictly use LaTeX. I feel like the author missed some crucial elements of why people use the latter.

First, references, cross-references, and citations. These are all shockingly simple in LaTeX: \ref or \cite is all you have to think about to cite papers or reference figures, tables, etc. In Word, I have used Mendeley's citation system and the built-in cross-reference system for the same task, but when it comes to the editing process between multiple people, with different files sent around, it invariably breaks, leading to hours of extra work either doing it manually (have fun updating figure numbers, or citation superscripts, if you add another figure) or reinserting all the necessary cross-references. With a LaTeX file, this is pretty hard to break.

Second, LaTeX handles figures 10000x better than Word, in that you just let it figure out placement inline. Captions are as simple as possible. Meanwhile, have you ever had that Word document with 10 figures in which you move one inline image and every figure jumps to a different page, leading to spam-clicking Ctrl+Z? Or how about adding figure captions? Inline, you have to use the caption tool, which isn't fantastic and often creates a text box that isn't strictly tied to the figure. The method I found best was just to have another document solely for figures, with the captions written in regular text. This both looks bad and, during the editing process, requires you to switch between files to keep track of the figures being referenced in the text. Additionally, automatic figure numbering depends on where the figure anchor is, often leading to improper numbering. Again, referencing figures by number in Word is a nightmare that LaTeX handles amazingly.
Third, I agree, setting up LaTeX on a machine isn't fun, and I have always been bad at it. However, I don't do it anymore: ShareLaTeX has solved all of those problems for me. All packages are available, you don't have the funny "compile three times to get reference numbering correct", it's amazing for collaboration with both git-like diffs and a Word-like track-changes/commenting system, and it has tons of templates, so the pain of setting up your document's preamble is done for you.

One tip I have: if someone you work with doesn't know LaTeX and can't be convinced to learn it, still write in LaTeX, compile to PDF, and use Acrobat to convert to Word. That works surprisingly well. The LyX conversion does not work nearly as well. I can't say I've tried pandoc, though; I would like to try that next.

19
theden 6 hours ago 0 replies
I'm one of those guys. I used to write my university philosophy papers in LaTeX (which I learned from physics and maths classes), and dealing with citations was definitely a lot easier. A more understated advantage of using markup was that I could trivially and nondestructively comment out sections, or leave comments on important points or paragraphs in a paper, not unlike what one would do in code. Once I got used to that I couldn't go back to WYSIWYG. Now, if I don't have time to deal with LaTeX, I'll just use Markdown or whatever format that isn't proprietary.

20
DonbunEf7 8 hours ago 0 replies
So use LyX. It's a LaTeX word processor. I've used it for years to avoid having to directly write LaTeX.

21
loukrazy 8 hours ago 1 reply
I think many academics write in LaTeX because it is easier to switch between document formatting styles. If you submit to multiple journals that all have their own poorly made Word 2007 templates, even the pain of LaTeX is not so bad.

22
matthewbauer 8 hours ago 2 replies
Note: The article is from 2016.
45 points by ttoinou 15 hours ago 80 comments top 19

1
dumpstrdivr 6 minutes ago 0 replies
Pricing and packaging considerations, perhaps. Today's e-goods are just so electronic (no physical thing). How about making people agree that buying your software not only benefits you but maybe also benefits a cause, supports a movement, etc.? Also give them something interesting beyond just the product. Maybe give them plush dolls or gadgets as a bonus rather than just an e-mail containing serial numbers. Example: piracy gives me access to all features; paying gives me a cute bunny doll mascot from the vendor :D

2
iamben 13 hours ago 2 replies
A while back a friend of mine told me he installed a "clean my Mac" application, thought it was decent, and went looking for a crack. So the story went, the top link was one to their own website with full instructions on how to crack the app - something like www.appname.com/how-to-crack-appname/, or whatever. On the page were complete and detailed instructions on how to crack it yourself using a hex editor or decompiler (or whatever!). Except, he said, as you read down, the way the author explained it really highlighted how much effort he put into making it, with copy at the bottom saying something like "we hope this was useful and avoids you using a crack which might damage your machine; we also hope you realise the effort that goes into making software and will consider paying just xx dollars which goes towards feeding my family and making more software".

Friend was so impressed he just got out his card and bought it.

TL;DR - They embraced the piracy / understood those that won't pay never will, those that may can be persuaded, so made something educational and thoughtful out of it.

Hope you figure it out!

3
vortico 14 hours ago 3 replies
My company deals with piracy in the following technical way, and it works well enough for potential buyers to stay buyers.

I release software binaries often, say every two weeks, and the software self-updates with permission from the user. The main application is free, while the plugins (the real meat of the software) are purchased individually. When the application updates, the plugins also update. I use semantic versioning, so releases look like 67.0, 67.1, 68.0, etc. Since the barrier to upgrading the application is virtually nothing (free, the click of a button), almost everyone updates. If you're a paid customer, your plugins will also be updated; if you aren't, now none of your plugins work. If you want to release your set of plugins to the world, pirates will have to match the versions of those plugins to the versions of their application, and if there are other plugins on the internet, they will also have to match. This requires lots of coordination from pirates, which has not happened yet. If an individual begins regularly releasing updated versions of the software, I can simply ban the user account that was used to purchase the plugins.

I imagine this can't work for your purposes since you have a single, standalone, polished application, but hopefully this could help others.

4
warrenm 14 hours ago 2 replies
First - if they wouldn't have paid anyway, you haven't "lost" anything (you've actually gained something)

Second, your pricing must be turning off those who would [possibly] pay, but opt for the cracked edition due to cost

Third, focus on support: you can download, install, and run OpenNMS, for example, totally 100% for free. But if you want support beyond the mailing list, you pay for it.

5
Ace17 1 hour ago 0 replies
The term "piracy" would much better describe "the act of developing/releasing malware/ransomware" - at least then it would involve the notion of "attacking".

Maybe the time has come to slowly shift the meaning...

6
hacker_9 14 hours ago 0 replies
The best article I've read on this subject is "Piracy and the four currencies" [0]. It's an objective way of looking at why people are pirating your software in the first place, and by understanding why, you can make adjustments to persuade future users to download via the proper channels instead.
7
dsr_ 14 hours ago 4 replies
The Coast Guard is your ally, along with the Navy. Talk to your insurance company; they have lots of experience (or else you picked the wrong one.)

Oh, are you equating copyright violation with major theft, murder and associated felonies so heinous that there is actually a separate body of international law to address it? Please don't do that.

Unless you are already a market leader, copyright violation is largely equivalent to an unpaid, unauthorized marketing issue. Your problem is to convert those non-paying users into paying users.

Policy:

1. Make it so inconvenient to use your software without paying for it that they decide to pay you. This is the "stick" option: you hit them with a stick until they either go away or pay.

2. Make it so easy and useful to pay for your software that they decide to pay you. This is the "carrot" option: you dangle something good in front of them until they willingly walk towards it.

Every method falls into one of those two policy groups. Think about which policy you want to use before you start making changes.

If you decide to make your software open source, you are likely to stop making money at it by selling it. However, you can still make money by consulting -- you are the world's foremost expert on this software, after all.

8
jzelinskie 14 hours ago 1 reply
How you address this depends greatly on your software and who it's marketed towards. Piracy can be a hint that you haven't come up with the best business strategy for that market.

Often what's best is to meet the market where it is - would you rather have more people using and aware of your software, or only a small number of people who know about your software and pay for it? The answer in many scenarios is to have more people using it regardless of whether they pay you, because they can convert others into future sales and grow your market.

Software and its markets vary greatly, so it's hard to give a strategy that works for all software, but I've seen one model be fairly successful over time: basically using a subset of users to subsidize the rest. Find out who's deriving the most value from your product and get them to pay you, rather than trying to scrape an equal amount from everyone. This can be done many different ways, but most commonly with feature gating or providing external services. The idea is that if your software is more available, more people have the opportunity to derive value from it and ultimately end up paying you.

9
tu6 14 hours ago 1 reply
Don't worry about users who crack or pirate software. This is not a target demographic you will have much luck turning into paying customers. Focus on your actual software. If kids in third-world countries are your primary users, it's time to build something new. That's a signal you don't want to ignore.
10
1388 14 hours ago 2 replies
Old methods (90s):
- turn to the law (DMCA)
- prevent people from modifying code

New methods:
- change to a SaaS model
- change the backend to an API; users order an API key
- ping home from your software on every keystroke, like Windows
- free product, offer training + ads

11
balls187 13 hours ago 1 reply
> My paying customers are professionals in the industry I work in. Most of my pirates might very well be "amateurs" in the sense that they don't make money with their activity and I'm fine with it.

> these users don't come to my product page for getting updates for example, so I think it's a loss for me because I can't reach them and talk to them. I feel like I won't ever be able to convert them to paying customers.

It sounds like you'd like to have these users as actual customers, and if it's your thesis that those who pirate the software aren't professionals, consider licensing.

1. Have a low-cost, or "pay what you want", license for non-commercial use.

2. Have a free-for-students license, for educational use only.

3. Offer upgrade pricing spiffs to convert from the free/low tier to the pro-tier.

4. Consider a subscription approach. I would never pay the full price for Photoshop/Lightroom, but the Adobe Creative Cloud for Photographers is $9.99 a month, which is the right price for my needs.

12
milankragujevic 13 hours ago 1 reply
If your software is useful, and if you're a small company / one-man show, I will pay for it, whether I barely have any money or have lots of it. But the key for me is not having a useless, stupid, intrusive anti-piracy mechanism, as that usually drives me away from the software. An example is a game like Kerbal Space Program: useful, fun, and with no anti-piracy mechanism. I paid for it. The other example is pretty much any huge game like GTA V, which I bought my brother as a gift, but it often stops working and has to be reinstalled because it detects some tampering or something like that (it's from Steam, BTW).

I'd also recommend that you DON'T offload processing to a server, as that will prevent people with spotty Internet (like me) or those in special circumstances / behind firewalls from using it properly, and also has data security issues.

13
akerro 14 hours ago 1 reply
>The thing is, these users don't come to my product page for getting updates for example, so I think it's a loss for me because I can't reach them and talk to them.

You assume that they would have bought it in the first place. I would not spend a penny on 95% of the stuff I pirated. Most of it was a one-time thing: the game was too boring (I would request a refund), the game had too high requirements so I couldn't even start an episode, the music was not interesting after literally one song from the whole discography, etc.

> in the FAQ and try to not show it if I detect the software is not cracked ?

Things have FAQ? I've never seen them.

>What if I distribute the pirated version myself

I remember this cool guy https://www.reddit.com/r/pcmasterrace/comments/2mjxde/develo...

>I have faith that some of my pirate users can become my clients one day

I've pirated more than 1k PC games; my Steam account has 97 titles right now, and there is also GOG.

Try to make your stuff easy to reach: Steam, GOG, https://itch.io/app. Try to build hype around your game - it's easy now: on reddit in /r/gaming show some cool/funny scene from the game, make an announcement on /r/linux_gaming that the game is available from day one, etc.

14
ju-st 14 hours ago 0 replies
A piece of software I sell is so complicated to set up and configure, and so lacking in documentation, that you have no chance of using it successfully without my guidance and support.

The configuration is a one-time thing, the daily use is simple, so this is no problem for usability. And the software is a niche application with very few potential and real customers.

15
anfractuosity 14 hours ago 1 reply
Could you 'watermark' the program for each user (although that wouldn't be trivial), and mention noticeably that the software is watermarked?

Not sure if that would really dissuade anyone from leaking/cracking it, but it may help determine where the leak/crack came from.

There are probably many reasons why watermarking isn't worthwhile though, as you'd then need to have an 'online' system for generating new versions, rather than simply hosting a single file.

16
Kpourdeilami 14 hours ago 1 reply
Can you track the countries your software is being pirated from? If, say, 90% of the people pirating your software are in countries that don't have access to credit cards, then it'll be impossible to convert them to paid users.
17
JamesBaxter 14 hours ago 7 replies
Many people I know have always pirated software and TV and said that once there was a reasonable way to get it legally, they would pay for it. I haven't seen this be the case. I don't think you can convert pirates.

It is possible however to convert legal users to pirates by having systems that annoy people who have legally purchased. I don't know how you strike the balance.

I hate the entitlement of people who pirate stuff, if it's not legally available in your area that's a shame but it doesn't give you a right to it.

18
Cozumel 14 hours ago 1 reply
Piracy is a huge issue. One way around it, although too late for you now, is to sell only an online hosted version of your software; they can't crack what they don't have access to.

Have you got a forum to engage your customers with directly? Most of the time (and this is a huge generalisation) these consumers aren't 'evil'; they just either have no money or don't know any better.

If you can engage with them and get them to like you, it'll make them more disinclined to pirate you. You could also give away a 'lite' version - there's no need to pirate your software if they get it for free - then concentrate on adding more features to the regularly priced one for them to upgrade to eventually.

There's no real technical solution to piracy, it's always going to be a human issue so needs to be looked at from that perspective.

19
nxc18 14 hours ago 2 replies
But piracy is good right? Death to DRM!

If someone doesn't want to pay they should be able to get the content for free, then they can decide if you deserve to be paid.

---

The sentiment above seems to be very popular on HN and other techy places. I don't understand that, since all you hurt are people like the author of this post.

The tech community needs to get its act together and decide to support intellectual property rights because as tech people that's all we have.

And don't think this only applies to proprietary software. MIT, GPL, Apache, etc are all licenses and are all capable of just as much abuse as your traditional EULA.

25
How to program an NES game in C nesdoug.com
1
userbinator 16 hours ago 7 replies
IMHO a 6502 is too limited to be effectively programmed in C; even this part of the article gives all the limitations: https://nesdoug.com/2015/11/15/2-how-cc65-works/

With this important note: "clean unaltered C code will compile into very slow code that takes up too much of the limited memory space."

In other words, "C" written for such a CPU will be in a vastly different style from more "normal" C, so much so that it might be better to just use asm.

Then again, 8051s, PICs, and other extremely constrained MCUs have been targeted by "C" (subsets), so it's definitely possible if a bit awkward. Personally, I think something like an 8080 would be the minimum to do relatively comfortable C programming in, with a Z80 being far more preferable.

2
tibbon 13 hours ago 6 replies
How did they actually make NES games? By that I mean, what types of computers were they using for creating NES games? Other 6502-based computers? Could they run the NES games on there? Or did they have to burn to a cart to test things every time?

How did they design graphics? Was it basically graph paper, which then they translated into sprites by hand?

3
klange 6 hours ago 0 replies
A few years ago, some colleagues and I wrote a NES "demake" of Splatoon, in C, over the course of ~2 days. https://github.com/SplatooD/splatood

I definitely think our ability to do interesting things quickly (in terms of runtime) was hampered by the use of C over assembly, but it did allow us to get a functioning game done in a very short period of time.

4
Negative1 13 hours ago 0 replies
Wanting to make games for the NES over 30 years ago was the reason I became interested in programming in the first place. If the author is lurking, thank you for this!
5
gallerdude 16 hours ago 4 replies
Yeah, making an NES game is on my bucket list. I'm very new at (complex) programming, but I think I might just do it in assembly. If you're going to run a marathon, why not actually do a triathlon?
6
jakeunltd 12 hours ago 0 replies
I'm going to reboot megaman. Again.
7
smegel 12 hours ago 1 reply
> All NES assemblers are command line programs. What does that mean? It has no graphical user interface.

Who the hell is this written for?

8
aa1234 12 hours ago 0 replies
Test
26
Tacitus' Perfect Man historytoday.com
27 points by diodorus  14 hours ago   9 comments top 2
1
valuearb 7 hours ago 1 reply
This dragged me into quite a wonderful wikipedia sinkhole, which led me to

https://en.wikipedia.org/wiki/Caesarion

Everyone knows how amazing his father, Julius Caesar, was. But popular culture only remembers the legendary beauty of his mother. Cleopatra was almost certainly a genius. She could speak 10 languages and was the first Ptolemaic ruler to speak Egyptian (she was actually Greek/Macedonian; the Ptolemies descended directly from Alexander's greatest general). She was educated in math, philosophy and astronomy, introduced Julius Caesar to the astronomer Sosigenes of Alexandria to help create the Julian calendar, and wrote a medical treatise.

Sad that the product of two of history's great geniuses had to be killed by Octavian to protect his claim to the empire. Though children seldom match their parents, it seems like an irreplaceable genetic loss.

After Germanicus died, the Roman empire went from bad to worse in its choice of emperors. Tiberius led to Caligula, then Claudius, then Nero, and his death led to the tumultuous Year of the Four Emperors.

https://en.wikipedia.org/wiki/Year_of_the_Four_Emperors

From that year comes an interesting parallel to Germanicus: the story of Lucius Verginius Rufus.

https://en.wikipedia.org/wiki/Lucius_Verginius_Rufus

He helped put down uprisings by governors intending to become emperor, and twice that year his armies offered to put him on the throne; he refused both times. In an era when emperors regularly killed anyone with the political means to threaten their throne, he lived to the age of 83, when he was again selected as consul, by Emperor Nerva.

2
omalleyt 6 hours ago 1 reply
It's a symptom of postmodernism that nowhere in this text is it even suggested that Tacitus is maybe just, you know, relating the facts about Germanicus as accurately as he can.

Instead we're sitting here quibbling over what literary fiction trope "Tacitus's Germanicus" fulfills in his "story".

27
My grandfather at Dunkirk bbc.com
143 points by happy-go-lucky  16 hours ago   37 comments top 8
1
pjc50 10 hours ago 4 replies
Alright then: Britsplaining Dunkirk. Not the actual events, but the significance of it to the national culture. For the actual history it is hard to beat having it narrated to you by Olivier in The World At War.

Many countries have a famous defeat or last stand. Dunkirk is ours; our Thermopylae, our Alamo, our Stalingrad, our Pearl Harbour. Many countries also have a famous mobilisation of the people - a revolution, something with a national day to name streets after. Dunkirk is ours.

The popular memory of Dunkirk is one of spontaneous organisation. Not an organised event run by the state, but one where the official efforts had already failed and British lives could only be saved by a mass ad-hoc action by whichever members of the public happened to be at hand to crew a boat. This ties in with the wider popular memory of the war as "total effort": everyone was a contributor.

This leads to "Dunkirk spirit": spontaneous solidarity in the face of adversity. Often invoked lightheartedly in the face of ordinary disasters like being stuck on a train for hours or squelching about a drenched music festival, but it works for more serious events too. It was invoked a lot when the Ariana Grande concert was bombed in Manchester. This is part of why there is no real counter-part to the individualism of survivalists or "preppers" in the UK: everyone believes that when a real disaster happens, you can rely on your fellow members of the public, and we will survive together.

A detail of the news interview that is very relevant to the whole thing is "stiff upper lip", and responding to threats with blithe dismissal and flippancy. The officer on the beach saying "I wish they wouldn't do that" about the strafing is the exemplar here. But there's quite a few famous, extreme examples: https://www.warhistoryonline.com/featured/major-digby-tatham...

2
arethuza 9 hours ago 1 reply
For anyone interested in finding out more about Dunkirk and the events leading up to the evacuation I can strongly recommend "Dunkirk: Fight to the Last Man" by Hugh Sebag-Montefiore. It gives the British, French and German viewpoints and covers the incredible scale of the operation in a way that the movie couldn't really do.
3
fiftyacorn 11 hours ago 0 replies
The World at War documentary series has a good episode about Dunkirk.

One of the small boat captains describes picking up soldiers from the beach at Dunkirk and getting back to sea, where they were met by a UK frigate. The frigate offered to take the soldiers so the small boat could go back to the beach (as was the plan), and the captain said "no chance mate, get your own soldiers, we're off back to Blighty".

Some great lines in that series

4
darod 6 hours ago 1 reply
What's interesting is the faces they decided not to portray in the movie. https://www.theguardian.com/commentisfree/2017/aug/01/indian...
5
erik_landerholm 12 hours ago 0 replies
Courage like this always impresses me in a profound way. I only hope I could be this brave and humble about it.
6
Bulkington 5 hours ago 1 reply
Side note, forgotten (by me): Post-Dunkirk, Britain and France--represented by Churchill and DeGaulle--planned to unite. From The Atlantic story:

"Although that battle story is fairly well known, the accompanying political drama that almost saw Britain and France merge is now largely forgotten. But the drama of that near-fusion can help explain the origins of European integration and the reasons why Britain ultimately pulled away from the European Union in the decision we know as Brexit."

Common knowledge in GB/France? How's the Brexit analysis?

https://www.theatlantic.com/international/archive/2017/08/du...

7
dba7dba 25 minutes ago 0 replies
For those who want to know more about why Dunkirk came about and why it was so shocking that France capitulated so quickly, I recommend this book.

> Nineteen Weeks: America, Britain, and the Fateful Summer of 1940

A few things I learned that I didn't know:

1. After Dunkirk, the UK govt was actually quite seriously considering suing for peace and leaving Continental Europe to Nazi Germany. One reason was the financial cost. The UK could've sued for peace and saved its treasury (already seriously depleted after WW1), or decided to fight and pay the cost of the war (as in money).

2. As Prime Minister Chamberlain resigned, he was faced with picking between two candidates for successor: Lord Halifax and Churchill. Churchill was the 2nd choice.

8
YouEatThatSoup 11 hours ago 1 reply

Interesting how the British told the French and Belgians to fight on while preparing their evacuation and keeping it a secret. They even forced the French who wanted to defend Dunkirk, at gunpoint, to destroy their weapons. French soldiers who wanted to embark were shot at by the BEF - at the same time as French destroyers were helping the evacuation by fighting German E-boats. The BEF got away and the French who kept fighting were captured. All RAF fighter squadrons were moved to the UK ahead of the evacuation.

It looks like this was mostly done by the BEF generals in France, while Churchill wanted them to fight with the French.

28
Sparking a myth about a man who could not forget newyorker.com
70 points by danso  15 hours ago   6 comments top 2
1
nathan_f77 10 hours ago 4 replies
The whole article is fascinating, but this part was very surprising:

> Shereshevsky avoided such things as reading the newspaper over breakfast because the flavors evoked by the printed words clashed with the taste of his meal.

I've heard of synesthesia, but I never knew it could have real-life consequences like this.

Unrelated: That's the first time I've seen the word "reëxperience" with an umlaut. Wiktionary has it as an "alternative form" [1]. And I found this Wikipedia article [2], which says you can use it in other words, like: coöperative, daïs and reëlect. And apparently it's called a "diaeresis". So that was interesting to learn. It's also interesting that the New Yorker must follow a style guide that requires diaereses.

Oh wow, I just found the New Yorker article specifically about their usage of diaereses. [4]

2
danmaz74 10 hours ago 0 replies

S actually studied to become a professional mnemonist, after leaving journalism, and learned from circus people how to make his performances more engaging. He didn't actually have 100% recall: "His uncle, Reynberg said, could be forgetful. If he didn't consciously try to commit something to memory, he didn't always recall it later."

"Reynberg told me that his uncle trained hours a day for his evening performances." "Luria doesn't deny Shereshevsky's use of mnemonic devices, but he maintains that these came later, and that they merely complemented Shereshevsky's immense natural abilities."

"The strength and durability of his memories seemed to be tied up in his ability to create elaborate multisensory mental representations and insert them in imagined story scenes or places; the more vivid this imagery and story, the more deeply rooted it would become in his memory."

"Instead of burning memories on scraps of paper, Shereshevsky found a different kind of erasure in his final years, according to his nephew: he turned to drinking."

29
Free Classic Books by MIT Press on Archive.org openculture.com
197 points by akaralar  18 hours ago   25 comments top 5
1
mrbill 12 hours ago 1 reply
I wonder how they define "classics" - two of my cherished titles from MIT Press are "IBM's Early Computers" (1985) and "IBM's 360 and Early 370 Systems" (1991).

They're examples of what I consider "The Perfect Computer Book".

https://mitpress.mit.edu/books/ibms-early-computers

https://mitpress.mit.edu/books/ibms-360-and-early-370-system...

2
torstenvl 11 hours ago 0 replies
3
spraak 10 hours ago 2 replies
Which computer science books should I watch out for? I am a software developer but without a CS degree and I'd like to learn more CS fundamentals.
4
WalterBright 11 hours ago 1 reply
What does it mean, "This item is restricted":

https://archive.org/details/aircraftenginesg00kerr_0

5
drallison 13 hours ago 1 reply
Making classic MIT books available for free is only a tiny part of the many important things Archive.org is doing. The GOOD WORKS of the Archive are worthy of your financial support. Donate at https://archive.org/donate/ to support the cause.

Explore the website, http://archive.org, and discover the amazing collection.

30
Scientists discover 91 volcanoes below Antarctic ice sheet theguardian.com
43 points by Mz  7 hours ago   12 comments top 4
1
tenkabuto 5 hours ago 2 replies
I get that the volcanoes could exacerbate climate change's effects, but how much effect might these volcanoes have on climate change already?

The following suggests that volcano activity followed from their uncovering by ice sheets, but might the causality of such be flipped?

> he pointed to one alarming trend: "The most volcanism that is going in the world at present is in regions that have only recently lost their glacier covering after the end of the last ice age. These places include Iceland and Alaska."

> "Theory suggests that this is occurring because, without ice sheets on top of them, there is a release of pressure on the regions' volcanoes and they become more active."

2
yfuguvuvuvv 4 hours ago 2 replies
This story just keeps getting more alarming. When will someone do something to save our species? Is that question too unsophisticated?
3
robbiep 3 hours ago 1 reply
Echoes of Red Mars, although there it was volcanoes under the Greenland ice sheet that did Earth in
4
Pxtl 3 hours ago 0 replies
So, Permian-Triassic extinction, anyone?
cached 14 August 2017 07:02:01 GMT