Artificial Intelligence in Art is Here to Stay. What Do We Do Next?

Time after time, new innovations refuse to wait for human society to figure out how to integrate them. There’s a reason these things are called “disruptive” in business jargon. They shake the box.

For example, Photoshop has had AI-powered tools in it for almost a decade now, and nobody’s making that a front-and-center issue. Grammarly has been around for a long time too, and nobody’s pointing at it in panic either.

But when AI can believably regenerate somebody’s voice, or study a few thousand images and make a new image that resembles them in style, suddenly it’s important.

It’s not that the tools can do these things; it’s a matter of degree. The problem isn’t that the AI can do it. It’s that the sudden advances have taken us by surprise, and we realize that as a society we have been so busy trying to figure out whether we CAN do it that we haven’t stopped to think about whether we SHOULD.

Getty Sues Everybody

The lawsuit by Getty Images against the creators of Midjourney and Stable Diffusion claims that these tools store images and paste parts together to make new images, like an electronic collage.

This is not even remotely how they work. Instead, a special kind of deep learning neural net is trained on the images, producing what is essentially a complex formula with hundreds of millions of parameters that the AI generation tools use to create new images.

In my opinion these lawsuits will fail immediately on expert testimony because of this gross misunderstanding of the technology. Images are not being copied, and are not being stored in a database. If they were, you would need thousands of terabytes to store the data: the LAION dataset Stable Diffusion was trained on contains billions of images, which at even a modest file size apiece works out to hundreds of terabytes. As it is, Stable Diffusion can generate images from a model file as small as 2.7 GB. They don’t even make SD cards or flash drives that small anymore.

A further complication is that in Europe, as in the United States, datamining is legal. So once the question of copying is set aside (to reiterate: it’s not copying, it’s using the images to train a neural network), there’s a very good chance that the lawsuits will fail on the scanning-without-permission issue as well, because protection from analysis is not a legal right any copyright holder anywhere in the world enjoys. If it were, simply observing an image displayed on the internet and forming any kind of opinion about it would be a crime.

The images are being reduced to parameters in a very complex equation with hundreds of millions of parameters. Datamining isn’t illegal. Training neural networks on material you don’t own isn’t illegal either. Copyrights aren’t being directly violated, because you couldn’t bring up an exact copy of anything the neural nets are trained on if you tried (though you can get close). And, you can’t copyright a style, or a composition, or a color scheme. All that’s left is Right to Publicity, and the responsibility for that falls on the users of the tools, not the tools’ makers.

That doesn’t leave much meat left on the bone.

It’s Just Going Sideways

And sure enough, this is exactly how the lawsuits are shaking out. Sarah Silverman et al. tried to sue OpenAI for reading their books and incorporating that knowledge into its ChatGPT model. The only problem was that they couldn’t make ChatGPT spit out exact copies of their manuscripts. The New York Times tried the same thing, and had the same problem. Why does this matter? Because in order for the courts to offer anything to the plaintiffs, there must first be a viable record of wrongdoing. It’s impossible for the courts to proceed on the basis of being butt-hurt alone; there have to be provable damages. The court runs on two things above all else: monetary damages, and proof of injury. The New York Times, Sarah Silverman, and the handful of artists suing Midjourney haven’t established either one. Even to argue undue restraint of trade, the “right to publicity” argument, they have to show exactly how they’ve been hurt by the AIs, and none of them can demonstrate this. These cases have been largely thrown out because of those gaps.

In my opinion, the writers and artists suing are the victims of class-action ambulance-chaser lawyers. If they win, it’s mostly the lawyers who will get the money. And companies like Getty Images are only suing because they want to make their own generative AI service based on Getty Images licensed images and sell that as a service. When you can download Stable Diffusion and SDXL for free, why would anybody care?

The Right to Publicity

What remains appears to be Right to Publicity violations: the recognizability of artist styles, or celebrity faces, which have traditionally been treated by the courts as the responsibility of the individuals using the tools, and not the makers of the tools themselves. As a user, it is my responsibility not to sell AI generated images that simulate the style of Salvador Dalí, Chris Claremont or Michael Whelan while claiming they are by the original artist.

Finally, if I happen to produce output that resembles one of those artists, how much can the original artist claim to have been damaged, when human artists imitate the style of other artists all the time? Cases where one artist considers themselves damaged by someone else emulating their style are virtually nonexistent; I could find no examples. Apart from being grumpy about it, few if any can say their business is actually being negatively affected. Greg Rutkowski comes to mind, and even he is circumspect about it. He’s concerned, but he’s not losing his shit over it.

Sue the Tool User, Not the Tool Maker

Think about it for a moment: if they can stop Stable Diffusion and Midjourney for being able to replicate the style of other artists, then they should be able to stop all word processors for being able to output written pieces that emulate the style of other writers. Oops, I accidentally wrote a story in the style of Roger Zelazny; they’ll be coming for my copy of Windows Notepad now. Saxophones should be outlawed because it is possible for another player to use one to replicate the style of Kenny G. Do you see the fallacy here? It’s not clear cut at all. It is in fact a matter of degree, which makes it a purely subjective call. In point of fact, those bringing these amorphous lawsuits, based on no established rule of law, fail to inform the court as to why the existing protections against copyright infringement are insufficient, and why the makers of tools are suddenly liable when they never were before now.

In any case, it’s too late to stuff the genie back in the bottle. AI-powered art tools are here. It’s what we do next, finding ways to understand and integrate the new tools, that will define the new landscape.

It Feels Wrong, But Why?

And yet, one way or the other, we still have the same situation. Stable Diffusion, the technology underlying many of the successful AI image generation tools, is open source. That makes it very hard to unmake, and even harder to undistribute. Additionally, while it’s obvious that disruptive technology is generally created for the primary purpose of eventually making money, it’s doing so here without breaking the law in any obvious way.

And THAT’S where the problem lies. The ability to replicate somebody’s artistic style to produce specific results is the part that’s disruptive. It makes it harder (and I know I’m preaching to the choir here) for artists to get paid for their work and to have the value of their work respected. Artists instinctively know this, but they don’t have much of a defense against what’s happening to them, and this makes them feel like victims, and in a real way, they are.

Artists gotta eat. And pay rent. And visit the doctor. And initially, tools that do work they can do are going to break things.

But as with the invention of the camera, and the music synthesizer, artists will adapt their workflows to include the new tools, and those that do will have an incredible competitive edge.

And those that don’t, or can’t, will suffer for it, and as with any new technology there isn’t a lot we can do to change that, except maybe help them avoid having their work analyzed for neural networks, or help them learn how to use the new tools. The legal questions won’t be resolved soon enough to matter.

Nobody likes to be hit in the face with a new career-threatening problem they didn’t see coming, and it’s hard to say that three years ago anybody saw this as an impending storm on the horizon. That’s why it feels wrong. It’s doing something with people’s artwork and photographs that nobody anticipated, and for which the standard rules for intellectual property offer no protection whatever. Whatever is going to happen as a result of this new technology is just going to happen, long before we figure out something practical to do about it, if we figure out anything at all.

Can Anything Be Done?

I can’t imagine how one would unexplode the hand grenade this represents, given that it takes ten to fifteen years to resolve landmark cases in court. By that time, the technology will have evolved well beyond its current state and will likely be built into practically everything.

The Getty lawsuit against Midjourney, Stable Diffusion et al. will likely fail on the merits because they don’t fully understand what they’re suing over, and they appear to be trying to claim rights they don’t actually have, but it’ll take years to even get that far. They can start their lawsuits over and file new cases, but that resets the clock from scratch.

Nor can they simply use the DMCA to have the source libraries removed from the web (I can’t imagine on what grounds they would, because the DMCA only applies to finished works, not tools for making them). Using DMCA takedowns on this stuff is a perpetual, unwinnable game of whack-a-mole even if you could somehow make it work.

So, I’m going to estimate ten to fifteen years to see anything come of this, assuming there isn’t some sort of settlement. Considering Getty is looking for a couple of trillion dollars in damages, and they know they’ll never get that, it seems to me that they’re just trying to scare the ever-loving crap out of the defendants, going after settlement money so as to look good to their shareholders. They don’t give a crap about setting a legal precedent. There will be nothing upon which to base new case law, no judgment to cite, and the end result will be that money changes hands (if it even gets that far). Once the lawsuits are over, the tools will just chug along as always, completely undeterred.

And the Getty lawsuits are the best shot at this there is.

We Need a Better Plan Than This.

I’m sorry if this is disappointing, but if this is going to be stopped by the global community, there must be a plan put into motion that works. Intellectual property laws and rights of access as they stand now simply don’t cover it. The next step would be a consensus on what to do, but good luck reaching one. Humans have always acted as individuals. Given a population of sufficient size and a given stimulus, they will not all choose to do one specific thing in response to that stimulus. They will do all the things.

That, to me, is what makes the arguments against generative AI art so frustrating.  If AI art can’t be copyrighted, as many claim, then what rights are being taken from actual artists? There’s nothing to recover, because by that definition AI art has no intrinsic value. It’s all doublethink gobbledegook.

Anything that a human can imagine will eventually be made or built or invented, and sometimes by multiple people at the same time. I believe that AI art tools on this scale were inevitable. It’s how we use them and what we do next that matter.

These images, by the way, were all generated by me, using Stable Diffusion. I used Google to do image searches for each of them, and I can confirm that they are not other people’s images. They’re unique as far as I can tell. If you find one of these images elsewhere, and it’s older than the copy posted here, let me know and I’ll take down my copy and reexamine my life.

They’re meant as computer wallpaper. If you see one you like, click on the image to zoom in, then right-click and “Save As.”


-30-

Free Modern Lab Backgrounds

I’m playing with Stable Diffusion now, and my father expressed an interest in good laboratory backgrounds he could use for his videos without worrying about accidentally infringing on somebody’s copyright.

Gene to the rescue.  Here you go, Dad.

To everyone else: if you want to use any of these, go right ahead. They are specifically not copyrighted or proprietary, and are made for use as background plates in Zoom. Go nuts.

To Use an Image

  • Click on an image you like, and it will zoom up.
  • Right click on the opened image, and select “Open in New Tab”.
  • In the new tab, right-click again and select “Save As” and you’ll be able to download the image to your hard disk.

Remember that these are all copyright free.  Use them in anything you like, and don’t bother with attribution.

Don’t believe I did these myself?

Well ….

Migrating WordPress Content from One URL to Another

I have been searching for a reliable way to do this for literally years. I have, surprisingly frequently, found myself needing to move the contents of one WordPress site to another server, under a different URL. There have been lots of solutions for this over the years, most of them demanding not inconsiderable amounts of money, usually on the order of $45 a year for a license, or more.

But I am a tightwad when it comes to things like this.  I like to consider myself generous in other ways (helping friends with cosplay props, giving people needful things to make their lives better, et cetera) but I cannot abide being made to pay for something that just puts a few pushbuttons on something I know how to do myself.

Migrating a WordPress site, though, is a different animal. Smaller sites you can migrate yourself with the built-in import-export tools.  Bigger ones kind of break that, and you have to have a plugin, or use a service, and that’s usually where you part with a couple of Benjamins.

Except that now there’s this plugin for WordPress called WPVivid Backup, which just handles everything. You install a copy on both the source and destination WordPress sites, get a key from the destination site, paste it into the source site’s interface, and hit the button. It couldn’t be simpler.

Or cheaper. There is a premium version that adds some bells and whistles that I assume would be very useful for somebody who maintains dozens of WordPress sites for a living, but none of them are required for the basic, essential function of porting a WordPress site from one domain to another, and WPVivid Backup is a free download, with no surprise “to finish this transfer process, pay the license fee that we hadn’t bothered to tell you about before you started monkeying with this” message in the middle. (Yes, one of the plugins I tried actually did this. Holding my data for ransom? Shame on you.)

In particular, it makes a web site designer’s job a lot easier because you can move a testbench site into its production URL WordPress setup without having to manually monkey with SQL dumps, shell access or any of that.  It just freaking works.

If you want to do things like not migrate the entire site at once, you’ll have to pay for that.  That’s fair and reasonable, and I may pay for that in the future if my back is ever up against the wall and I need it.  For now, though, WPVivid Backup saved my ass.

And did I mention free?  As in beer?

-30-


AI Art is Here to Stay. Better Get Used To It.

AI Art, or generated art, is a problem, yes, but not for the reasons people think. It’s a problem because nobody was prepared for how rapidly it would impact our world of creatives. Nobody was ready for how hard it would shake the box.

The Argument

The No to AI Generated Images logo, designed by Alexander Nanitchkov
People claim that it steals artwork from the original artists (it doesn’t; it only makes generalizations from observing the artwork of humans, just as a human artist would do), or that it takes jobs away from humans (TOR Books is in some hot water over a book cover they commissioned that used stock library art that turned out later to have been AI generated). If I paint in the style of Van Gogh (warm saturated earthy colors, impasto, impressionistic, with emphasis on the arcs and swirls that flow in the negative spaces), am I stealing from his work? No reasonable person would claim this. Now, what if I use an AI art generator like Midjourney to do the same thing? It’s a shortcut, yes, but is it stealing, or cheating somehow? To me, it just appears to be a really sophisticated tool, and one in its rocky infancy. It is, however, a new process whose potential as an art tool is understood by very few, and whose operation is understood by even fewer.

It is my observation that the alarm being raised is similar to the one raised over the rising popularity of synthesizers as early as the mid-1950s. Everyone was sure that the synthesizer would put a lot of professional musicians out of work. Of course, that did not happen. It’s true that synthesizers were used in place of an ensemble of real musicians, but in a lot of those situations there just wouldn’t have been money to pay humans. Instead, music became possible where the alternative would have been silence, canned music taken from something else, or somebody trying to make do with a single guitar or a piano and a set of bongo drums. Synthesizers simply became one more tool in the toolbox. AI art is just one more step past CGI, and nobody these days is claiming that isn’t art.

A healthy debate is already in full swing. Facing backlash from artists, Artstation is allowing artists to opt out of having the artwork they submit to the site used to feed AI art generators, and there is an ongoing protest there among artists who think Artstation shouldn’t be allowing people to sell AI-generated images at all. The site was awash with anti-AI posts protesting the original policy, with the illustrator Alexander Nanitchkov, creator of the No AI logo, proclaiming AI-generated work to be “soulless stealing”.

The counter-argument is, of course: if a human looks at a body of work and says, “Yeah, I think I can paint in that style”, and then does it, is that stealing? Few would argue that studying and replicating somebody else’s art style is theft, because original artwork isn’t being simply copied. Yet, when a machine does it instead of a human, somehow it’s supposed to be different.

How It Works in Very Basic Terms

Superfrog, in the comic book style of Van Gogh. Midjourney had never seen a Van Gogh comic book, so it took its cues from Van Gogh paintings and comic book art it had analyzed. Even so, the elements look just sort of mashed together, and don’t form a cohesive artistic whole. Of course Van Gogh never did comics; the point here is to show that the AI is mashing up concepts, not just copying things.

AI-generated art doesn’t just copy bits and paste them together. Instead, the artificial intelligence is taught about art by looking at a huge number of images, adding noise to each one until it becomes unrecognizable, and taking notes along the way on exactly what made the image recognizable as a certain thing. The process is repeated on a very large number of similar subjects so that the AI can tell, in general, what makes that subject look like what it is. The training set may include photographs, or the work of human artists, but always in great quantity, usually tens of thousands of examples or more. To generate a new image, the process is reversed: it begins with a field of noise, and everything that doesn’t look like the requested subject is slowly repaired. The result is further steered by the text prompt, which layers other requested elements into the scene, to create a new and unique image. The resulting image may show varying degrees of a given individual’s influence, but it’s never a straight-up copy.
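
To make the shape of that loop concrete, here’s a toy sketch in Python. This is only an illustration of the add-noise, learn-to-remove-noise idea described above, not how Stable Diffusion is actually implemented, and the model.train() call is a hypothetical stand-in for the real network.

# Toy sketch of the diffusion idea: noise images up, learn to undo the noise.
import numpy as np

rng = np.random.default_rng(42)
STEPS = 1000

def add_noise(image, step):
    """Forward process: blend the image toward pure static, one step at a time."""
    mix = step / STEPS                      # 0.0 = clean image, 1.0 = pure noise
    return (1.0 - mix) * image + mix * rng.normal(size=image.shape)

clean = rng.random((64, 64))                # stand-in for one training image
for step in range(1, STEPS + 1):
    noisy = add_noise(clean, step)
    noise = noisy - clean                   # what the network learns to predict
    # model.train(noisy, step, noise)       # hypothetical training call

# Generation runs the loop in reverse: start from pure static and, at each
# step, subtract the model's predicted noise until a brand-new image emerges.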

It Needs What It’ll Never Have On Its Own: A Little Heart

Synthesized art suffers the same problem that synthesized music does: it lacks heart. A performance in either medium created solely by algorithm lacks the human touch, the emotional connection that makes a creative product worth consuming. Without it, it’s just an attractive but ultimately soulless effort. It can save time producing creative content, but without the guidance of an actual artist it will produce only the facade of meaning without ever actually touching it. As a result, AI-created art is actually pretty easy to spot when you see it.

I predict that there will be a great deal of arguing back and forth about AI art, but in the end, people will pause long enough to realize that AI art can’t reasonably compare to the creativity of a skilled human artist, and we’ll all get on with our lives. There is precedent; this is pretty much how the arguments against synthesizers went. After a while people realized that the synthesizer was just another tool, and in the wrong hands it could produce flavorless pap just like any other tool in any other medium, or, in the other direction, allow the art form to be taken to new places previously inaccessible.

AI-generated art is like a chainsaw: it can do a lot of damage very fast, and it seems dangerous to be around. Without a human guiding what it does, though, it’s just another tool to be tamed. And, after a while, we’ll get used to the idea that there are such things as chainsaws in the world that do useful things.
Superfrog, in the style of Jack Kirby, third iteration of the prompt, created in Midjourney.

Superfrog in the style of Jack Kirby. Has Jack Kirby ever drawn a superhero frog? Not that I know of. The heavy bombastic ink and color style reminiscent of Kirby is here in these images, but nobody would mistake one of these for a Kirby original.
These are a good jumping off point (pardon the pun), and may be most useful as a tool for ideation, but repeatability is still very dodgy. 
The frog with the spitcurl is just a goofy take on Superman. Artificial intelligence alone will only carry you so far. The rest requires an actual artist to make some creative decisions and use the elements to create actual art.
Oh, and don’t look too closely at the hands. That’s some real nightmare fuel.

-30-

Reprinted from [SCIFI.radio]

Two Spaces After the Period, or One?

I keep running into this debate on the internet, so here are my two cents on the subject.  To preface this, I’m an editor myself, and have been running the web site at SCIFI.radio for twelve years and have written or edited about three million words or so just on that one site alone.

So here it is:

If you are writing in a word processor, or on Facebook, and you don’t add that extra space, you’ll get auto-kerned to 1.5 spaces after the period.

If you do add that extra space, it’ll delete the extra space and auto-kern it back to 1.5 spaces after the period.

If, however, you are working in a fixed pitch font — as you would if you were working on a typewriter, or writing programming code in a code editor — suddenly that extra space becomes important for legibility. While two spaces may seem like a relic of the typewriter era, there are still applications where they matter, and the more writing you do, the more likely you are to encounter them.

The obvious answer is, always add two spaces after the period. Depending on where you’re typing, it will always come out correctly if you do that. In particular, your fixed pitch text will always be properly legible (not to say that using a single space after a period will make it illegible, but legibility will be enhanced if you use two, which was the original reason for using two in the first place).

Both the Chicago Manual of Style and the Associated Press Stylebook say to use one space, but this is because kerned fonts are the rule now, rather than the exception. Nearly all presentation formats now auto-kern it to 1.5 spaces after the period regardless, and of course their rationale does not take into account writing text for fixed font environments.

Editors who claim they have to hand edit manuscripts to change them to conform to either two space after the period or one space after the period are lying to you, by the way. There’s this thing called “global search and replace”. It only takes a moment, and the computer does the rest.  If you encounter an editor who makes a fuss about this, pull your manuscript from their hands and run away.  Who knows what else they’re lying to you about?
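
If you’re curious what that looks like, here’s a minimal sketch in Python; the same one-line regular expression works in any editor’s search-and-replace dialog:

import re

def single_space_sentences(text: str) -> str:
    # Collapse runs of two or more spaces after sentence-ending punctuation.
    return re.sub(r"([.!?]) {2,}", r"\1 ", text)

print(single_space_sentences("First sentence.  Second one.   Third."))
# -> First sentence. Second one. Third.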

By the way, web sites that tell you that Microsoft Word treats the double-space as an error are lying as well.  I’ve tested this myself, and it’s untrue.

-30-

Why is WordPress Showing the Number 3 After the Menu Items?

It’s that horrible “Three Bug” in WordPress menu systems. Most people don’t expect something as routine as a WordPress site admin menu to break in such a bizarre way. It turns out that this bug is easily solved. I’m not sure exactly what causes it, but I do know what makes it go away, and it’s simple:

Your theme is trying to use a font that hasn’t been loaded yet.

To fix this, you’ll have to identify which font it is, then add a preload statement to your site’s header. In my case, I use Divi, so here’s what mine looks like:

<!-- Preloading font to fix menu icon flashing 3 -->
<link rel="preload" href="/wp-content/themes/Divi/core/admin/fonts/modules.ttf" as="font" type="font/ttf" crossorigin="anonymous">
<!-- Preloading font to fix menu icon flashing - end -->

You’ll have to have a theme that allows for special code to be inserted into your page header. The Divi theme does this natively, but for other themes you may have to add a header/footer insertion plugin.

I did the above fix, and SHAZAM! Problem solved.

-30-


Statements About Questions Are Not Questions

Here’s a pet peeve of mine: the use of a question mark when you are actually making a statement about your query or curiosity.

For example, I wonder if it will rain tomorrow. Note the lack of a question mark.

Most would write, “I wonder if it will rain tomorrow?” But that isn’t a question; it’s a statement: I wonder if it will rain tomorrow.

If I’m asking a question, I would frame it this way:

“I wonder, will it rain tomorrow?”

I’ve asked an actual question, and therefore I use a question mark. So why does this bother me so much?

Because it’s a sign of intellectual laziness, not just of one person, which is bad enough on its own, but of English speakers as a whole. If people keep doing it, it will become part of the definition of the language, but it flies in the face of logic. The fact that this is happening isn’t so bad on its own, but it is symptomatic of larger flaws in our society, a sort of erosion of the principles of reason in what we sometimes tout as the Age of Reason.

When I told a friend about my concerns, she said, “My English teacher insisted that was a question and needed the ? mark.”

I replied that her English teacher was quite wrong, and here is why:

The sentence structure belies that. If you replace the object, subject and conjunction but retain the grammatical structure of the sentence, you can derive something like “I depend on it to rain.” It’s still not a question, and the sentence structure is not a question. There’s no querent in either case. Simply stating “I wonder” does not make it a question, any more than using any other verb in that sentence structure would. It’s a statement about wondering, not a question in and of itself.

The word “wonder”, in this case, is a subjunctive verb. The English subjunctive is a special, relatively rare verb form that expresses something desired or imagined. We use the subjunctive mainly when talking about events that are not certain to happen. For example, we use the subjunctive when talking about events that somebody wants to happen, or anticipates will happen.

Statements do not get question marks, therefore the sentence “I wonder if it will rain tomorrow” does not get one. Q.E.D.

The counterpoint to this is that there is a difference between expository writing and dramatic writing. The question mark indicates the tone of the speaking character’s voice, because the character does indeed intend it to be a question.  My friend Joseph Ksander commented:

“This is not a spoken english problem, but a written one. ‘I wonder if it will rain tomorrow’ is in the first person, and has narrative value. And one would argue that the tone of the statement is interrogative (it absolutely is). So when writing, especially in narrative, we are allowed a certain amount (quite a lot) of poetic license to give words and phrases meaning beyond the literal. By giving ‘I wonder if it will rain tomorrow?’ an interrogative mark, we give the reader a hint of how it would sound coming out of a character’s mouth. It has the feeling of a question, and writing is at least as concerned with creating feeling as it is with conveying meaning.”

There is, as Joe points out, a difference between the written word and the spoken word, and using the written word to illuminate the spoken word.

I doubt that my words will have much of an effect. One can only hope that reasonable voices are occasionally heard, but Joe is right: it all depends on context.

I wonder what will happen.

-30-

Forcing Centovacast to Use LetsEncrypt Certificates for SSL

Starting with What Made Sense

So the Krypton Radio web site has to go full SSL now because most modern web browsers (read “Google Chrome”) won’t let people visit without warning them that your web site will slay your children in their sleep if you don’t have an SSL certificate on it.

This is frankly just Google messing with our heads, for the most part. A site that does not handle money shouldn’t have to worry about this. All the monetary things we need to do are handled by external sites that actually are secure. But I digress.

So our first assumption was that we could just buy a cert from secureserver.net, install it on Centova, Icecast and our main web server and we were probably good to go.

And then everything collapsed.

What We Did First

I went to create our private key and Certificate Signing Request. After a few tries, I finally was able to use CPanel to generate this, adding in wildcard domains for every domain we wanted to cover. This is a completely legal thing, and you can buy one certificate to handle as many domains as you want.

If you actually buy it through cPanel, though, you can’t do that. It’s one domain per cert, and $30 per cert. Buying certs through your registrar is a lot cheaper. If you own ten domains, you can easily spend a bundle doing it the cPanel way, and there’s literally no reason for it. Shall we spend 10% of what cPanel wants us to spend?

Yes. Yes we shall.

Mind you, the only reason we were buying certificates at all is that when we started all this there was no such thing as Let’s Encrypt, which is a free certificate authority.

The only catch to certificates you get from Let’s Encrypt is that they expire after 90 days, which is kind of a problem if you want to use them with a mail server. Every three months your users will have to accept a new SSL certificate to get their email, and trust me, it wigs them out. Most people can barely operate the send button.

That Utterly Failed

No matter what we did with the certificates supplied to us by secureserver.net, Centovacast hated them. The installation script provided by Centova (your installation will tell you that the script lives at /usr/local/centovacast/sbin/setssl) just barfs on them every time.

The instructions provided by Centova say to get Apache credentials. This is wrong. You need credentials for something, but whatever it is, it wants a .pem file, and whatever is supposed to be IN that .pem file isn’t documented.

If you can get a .pem file, great – otherwise, you’ll have to do a massive workaround.

The Icecast Connection

So that’s when we figured out that Icecast, the Centova main panel, and WordPress all needed to be set up with certificates, and that not all of them were going to be able to use the same ones.

Here are the instructions I found on how to do that for Icecast.

Note with particular attention that they’re talking about putting the private key, the public key, and the authority chain all in one file. I found a BBS topic on how this should be done, specifically with Let’s Encrypt.

The authority chain from Let’s Encrypt arrives in the chain.pem file created by the Centova Let’s Encrypt utility, which stores it in /usr/local/centovacast/etc/ssl/certs; under that are directories corresponding to each domain. The fact that they bother to identify files by domain name teases the possibility that Centovacast can be run from more than one valid domain, so long as the server answers to multiple domains.

To inspect the certificates being managed by CPanel, you need to log into your WHM panel and go to:

Home » SSL/TLS » SSL Storage Manager

The Centovacast Connection

I never got the certs I bought working with Centovacast, and self-signing isn’t an option, so I went ahead and asked the Centova utility at /usr/local/centovacast/sbin/setssl to create the certs for me using Let’s Encrypt.

To do that, you need to set up a directory alias on a web server on the same domain as Centovacast to serve up the validation files Let’s Encrypt needs to prove that you’re who you say you are.

The instructions say to use a specific block of code to define the directory alias that points to where those validation files are held on the server, but they say nothing about where in your Apache configuration to put it.

You’re going to need to set up your aliased directory so that the same domain that handles your Centova server is being served as a regular domain or subdomain on port 80, which is the standard port for serving web pages.

In my case, I had a subdomain called ‘station’. I had to create a server alias so that that subdomain was included in the list of other servers my main vhost listing handles for me, so that my Centovacast subdomain and my main one are really being handled by the same vhost. This saved me from having to set up a completely separate vhost just to handle this one fricking problem.

The article also gives incorrect code to insert in the first place. Forget what they say there. Here’s the correct code:

Alias /.well-known/acme-challenge /usr/local/centovacast/etc/ssl/acme-challenges

<Directory "/usr/local/centovacast/etc/ssl/acme-challenges">
Options Indexes
AllowOverride None

# Apache 2.2 and earlier
<IfModule !mod_authz_core.c>
Order allow,deny
Allow from all
</IfModule>

# Apache 2.4 and later
<IfModule mod_authz_core.c>
Require all granted
</IfModule>
</Directory>

Big question: Regarding the ‘setssl’ utility, why put broken code in a utility, and then write documentation that pretends it works? Again?

Centova is notorious for this.

When we fixed all of the above basic configuration in the Apache server to satisfy the needs of Let’s Encrypt, we found that the setssl script was changing permissions on the target file folder such that it would return a 403 error.

That’s right, the Centova utility script that installs the certificate intentionally breaks the process so that it can’t finish.

The fix is to change the chmod instruction that sets those permissions from 750 to 755.

For some reason this bug has been in there forever, and Centova’s never fixed it. They say the answer to this problem is to make the user that the Apache server runs under a member of the ‘centovacast’ group, but in my experience this didn’t work, and I got a 403 error no matter what I did.

Anyway, here’s the code to modify:

if [ ! -e "$challengepath" ]; then
mkdir -p "$challengepath"
fi

chown root.centovacast "$challengepath"
chmod 0755 "$challengepath"

testcontent="test-$(date +%s).$$"
testfilename="${testcontent}.txt"
echo "$testcontent" > "$challengepath/$testfilename"

# In later versions of the code, these two lines are missing entirely.
# If they're present, make the change to the chmod parameter as shown.
chown root.centovacast "$challengepath/$testfilename"
chmod 0755 "$challengepath/$testfilename"

Once I patched my copy, I set the file permissions on it so that future Centova updates couldn’t revert my changes, like they did the last three times.

You’re Not Out of the Woods Yet

It’s not enough to get Centova itself running on an SSL certificate. Now you have to get Icecast itself working with SSL, which for some reason is a separate task, and any attempt to link to an unsecured stream from a secured site will make web browsers claim your site is not secure, thereby defeating the whole point of having a certificate in the first place.

So the trick here is, you will probably have to create a new listen socket beyond your default. Centova, by default, sets you up on port 8000. I had to create a new secure port, so I moved it out of the way, up on port 8080. All the mount points are available on both ports, but a single port can’t be both secured and unsecured.

Your Icecast config isn’t called icecast.xml in a Centova installation. It’s called server.conf, and it’s also in a nonstandard location, which is /usr/local/var/vhosts/<your station name>/etc/server.conf.

Here are the listen socket sections from my server.conf. The bottom one is the default definition, and the top one is the secure one you’re after. Centova does not support the direct creation of secure listen sockets, so you have to hack this in by hand.

<listen-socket>
<port>8080</port>
<ssl>1</ssl>
</listen-socket>

<listen-socket>
<port>8000</port>
</listen-socket>

Then finally, you’ll have to add a line in your server.conf file that loads the required SSL certificate. If you’re running your Icecast stream and your Centova server from the same domain, you can reuse the same bundle.pem file, but there’s literally nothing stopping you from making a separate one for each individual Centova virtual host and putting each bundle.pem file in a separate place.

The line for your server.conf file looks like this:

<ssl-certificate>/etc/icecast2/bundle.pem</ssl-certificate>
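
In case you’re wondering what goes into bundle.pem: per the Icecast instructions mentioned earlier, it’s just the certificate chain and the private key concatenated into one file. Here’s a minimal sketch in Python, assuming the standard Let’s Encrypt live directory and a placeholder domain; your paths will differ:

# Sketch: build the combined PEM file Icecast wants from Let's Encrypt output.
# Paths and domain are assumptions: point these at your actual cert files.
from pathlib import Path

live = Path("/etc/letsencrypt/live/example.com")    # placeholder domain
bundle = (live / "fullchain.pem").read_bytes() + (live / "privkey.pem").read_bytes()
Path("/etc/icecast2/bundle.pem").write_bytes(bundle)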

Last Step

Once this is done, you can go to the Centova interface and reload the server, and it will inhale the new settings. You can test them to make sure it worked by going to https://<yourdomain>:8080 and seeing if it loads. If it doesn’t, you’re still broken, but if it does, congratulations, you now have a secure stream on your internet radio station’s business end!

Except that that doesn’t quite do it, because Let’s Encrypt certificates expire every 90 days, and you have to push the renewed certificate back out to everything by hand.

So it’s /usr/local/letsencrypt/letsencrypt-auto renew,

then rebuild your Icecast PEM file, then use your modified setssl script:

/usr/local/centovacast/sbin/setssl letsencrypt <your domain name>

and then, after all that, you have to restart Centova, then stop your Icecast instance completely and start it again, before Icecast will use the new certificate.

Now you’re done.

What a fricking ordeal.
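
Since the dance repeats every 90 days, it’s worth scripting. Here’s a minimal sketch in Python: the renew and setssl commands are the ones from above, the domain is a placeholder, and the restart steps are left as comments because they depend on how your particular server manages its services.

#!/usr/bin/env python3
# Sketch of the quarterly certificate renewal dance described above.
import subprocess

DOMAIN = "example.com"  # placeholder: use the domain Centova answers on

subprocess.run(["/usr/local/letsencrypt/letsencrypt-auto", "renew"], check=True)
# ...rebuild your Icecast bundle.pem here (see the earlier snippet)...
subprocess.run(["/usr/local/centovacast/sbin/setssl", "letsencrypt", DOMAIN], check=True)
# ...then restart Centova, and stop and restart Icecast completely, so the
# new certificate is actually loaded.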

Use Azuracast Instead

To be honest, a much better idea is Azuracast, which does everything Centova does, plus a lot more. It does not require a monthly license, is open source, and is being actively maintained. Centova is neither fully open source nor actively maintained.

Things I Learned About My Ender 3

I’m in an Ender 3 support group on Facebook, and nearly all the comments asking for help are from people who did not follow the instructions when setting up their Ender 3s in the first place. Those who are meticulous about getting the frame absolutely square and the belts as tight as they can get them without binding everything up are getting superior results.

If you’ve just bought an Ender 3, there are a few things you’ll probably want to do or try at some point.

  1. Make sure your frame is square. I’m serious. 90% of the problems you’re likely to have with your prints will derive from not having properly assembled the gantry.

  2. Make sure your bowden tube is properly trimmed (and by this I mean trimmed off exactly square) and then fully inserted into your hot end. This one error will make your life hell. If you don’t get it exactly right, you’ll get plastic plugs where your filament heats up, jamming your bowden tube and requiring you to partially disassemble your hot end and possibly trim back the tube to get past the clog. Having the bowden tube properly trimmed and properly inserted mostly stops this from happening. There are “fixes” to get around the clog problem, but in general they just add additional points of failure and rarely work as well as setting up your bowden tube properly in the first place.

  3. Replace the plastic extruder clamp with a metal one. Some printers have an extruder arm made entirely of plastic, and the filament tends to saw through the plastic over time, so sooner or later you’ll want to replace it. Metal versions are cheap, usually under $20, and newer ones have a brass liner that stops the filament from cutting in.

  4. If your printer has to live inside, get a silent controller board. This makes your printer so quiet that you’ll have to do an eyeball check to see if it’s even still printing. They’re about $40. However, these silent boards can also prevent you from making linear advance calibrations that can improve the quality of your prints, so you may find yourself choosing between print quality and machine noise.

  5. Print a fan cowling for your controller box to keep crap from falling into it through the vent fan. Newer Ender 3s have the vent on the bottom of the controller box, so it’s not a problem there, but the older ones have the slots on the top. It’s amazing how much crap can fall in.

  6. Print a muffling fan cowling for your power supply box. This will drop the fan noise made by the machine by half. Be careful – doing a poor job reassembling your fan with its new housing can make it worse, not better.

  7. Order more bowden tube, more nozzles, and more pneumatic couplings. These things wear out, and after about four to five months of use they’ll start to screw up your prints. You do not want to be stuck without spare parts if something breaks in the middle of a paying print job.

  8. Get a glass build plate. Go to your local dollar store and buy a cheap 9×9-inch picture frame. Take the glass out of it, and throw the rest of the frame away. The picture frame glass is a quarter the weight of the official Creality glass bed and causes far fewer problems with Y-axis ringing due to its much lower inertia. Alternatively, go to IKEA and get a 1-foot mirror tile, then go to your local home improvement store and get a glass cutter so you can cut the mirror tile down to the size of your printer’s bed. If you’d rather use the Creality build plate, don’t be afraid to flip it over and use the untextured side. It’ll leave a mirror-smooth bottom surface on your parts!

  9. Don’t use tape on your print bed. Tape is a substitute for proper bed leveling, and solves problems that haven’t existed since the introduction of heated, adjustable beds. The only time you might want to use it is if you’re using something other than PLA in your printer that sticks like mad to your printing surface, such as PETG. The tape will let you remove the part without breaking your glass.

  10. If you’re having particular problems getting your PLA to stick to the build plate, try a fine mist of Aquanet. It’s cheap, and compared to fiddling with the bed leveling to get it precise to the last 0.01mm, it’s a fast solution that gets you on your way and printing parts again. Some purists think this is cheating, because you can resolve sticking issues with better bed leveling, but if you have to get the parts out the door, there’s nothing wrong with doing something quick that works so you can get on with your life.

    If it feels stupid, but it works, it’s not stupid.

  11. Buy eSun PLA+ filament for printing. It runs about 10° C hotter than the usual stuff, produces much more precise, clean prints, and costs exactly the same as whatever you’re using.

  12. Don’t grind your filament to death. One of the things that can happen if you try to drive your printer too fast is that you exceed the structural integrity of the filament as it passes the gear in your extruder. If the extruder can’t feed the filament through the print head fast enough, it’ll start digging a hole in the side of the filament at the extruder head, and then you are basically and royally screwed, and your print has failed.

  13. If you think vibration damping is something you need, buy a $5 yoga mat, cut it into 1-foot squares, and put a stack of four or five squares under your printer. It will kill a lot of the noise and vibration your printer makes, and may well improve the quality of your prints.

  14. Make sure your belts are as tight as you can get them. Don’t worry, you won’t break them. Tight belts means more precise motion and better prints.

  15. Experiment with printing at stupidly thin layer heights. I started experimenting with 0.08mm layer heights, and people I show the prints to, even other 3d printer owners, are amazed that these are 3d printed objects.

  16. Special fan housings generally do very little, and on the whole don’t improve print quality enough to bother with. Minor tweaks in your slicer settings usually have a much broader effect.

  17. It is possible for an Ender 3 to refuse to print because it’s too cold to start with. If your printer lives in the garage or workshop, as mine does, it’s probably not in a heated environment, and that means it can get down to under 10°C and chill the temperature sensors to the point where the printer’s firmware thinks something must be broken, and you’ll get a MINTEMP error. You could have a bad thermistor, or a bad connection trace on the motherboard, or a bad connector, but the first thing to try is just bringing the thing inside and letting it warm up a bit. I did this, and once the temperature of the printer got up above about 8°C, the printer realized its sensors weren’t broken, and it started right up.

  18. Get a Raspberry Pi and load it up with a webcam and Octoprint. Being able to run your printer without having to be in the same room with it is heaven. Being able to move your printer to your garage or workshop is even better.

  19. Having trouble with weird surface artifacts in your prints? Slow down. This is especially true of specialty filaments like silks or silk metallics. They are extremely sensitive to print speed, and what looks like a hopeless print may come out perfectly if you cut the print speed in half.

  20. Have to paint your prints, but don’t like the way the parts smell afterwards? Sometimes the client wants it painted, and it’ll smell bad for quite a while once you do. White vinegar will get the smell out of the painted surfaces while not endangering the finish. Be sure to rinse off the white vinegar, or your parts will smell like vinegar instead of paint.

  21. Don’t bother with vibration damping feet for your printer, or special angled mounting arms for your filament spools. By and large these do nothing but waste your time and materials. (If you need to make angled printer arms so that you can fit the spool and printer into a more compact space, that’s a different problem, and you should have no qualms about doing it.)

  22. Octoprint suddenly won’t connect to your 3d printer? It could be the USB voltage levels. It matters what order you connect your printer to your Octoprint device: if you turn on your Raspberry Pi and plug it into your printer before the printer is powered up, the motherboard in the printer will draw just enough power that the voltage levels needed to detect the printer as a USB device are too low to actually do the job.

    The fix is to disconnect the USB cable, power on the OctoPi and the printer separately, and once you’re sure the OctoPi is booted, then and only then connect the USB cable between the Pi and the printer. Now the Pi will be able to detect the printer as a USB device, and all will be well.

Check back here periodically. I’ll be updating this list of tips as I go, and you may find out something new you didn’t know before.

-30-

3d Printing Opens Vistas

September saw the arrival of a new Creality Ender 3 3d printer at the Krypton Radio head office. The intent was to create new things to offer as station swag and perhaps create a new line of bespoke merch, something along the lines of props and costume pieces that people might want to buy from us.

I’ve been having a blast with it, and I’m trying to find new ways to use it that will benefit the company and myself. It’s a new creative tool I can use to bring daydreams into the real world.

The Lightsaber

I’d always wanted one of these, from the first time I saw one in 1977’s Star Wars, but the goal had always been out of reach. I printed one. This is the one carried by Obi-Wan Kenobi in the first Star Wars movie. 3d printers print in layers, so there are layer lines. You just sand the heck out of it and hit it with primer, and you’d never know it was 3d printed.

I’m working on modifying this one to add electronics to it, but I may just go with a completely different design, one that already supports the idea of adding a blade.

Sabacc Gambling Coins

These replica coins from the movie Solo: A Star Wars Story were printed in black PLA, like the light saber, then painted with black primer, then I added Rub’n’Buff. I hand-sewed a bunch of bags and put 18-20 coins in each one, and gave them to friends and family when we went to Disneyland’s Galaxy’s Edge last month for Life Day on November 18.

My friends and I had a lot of fun giving them to cast members and watching their reactions, which ranged from gratitude to amazement.

I later found out that the Imperial credits were about half-sized, so I’ll be fixing that on future batches, but having bags of these was really something. I looked for a sabacc deck in the shops while I was there, but none of them had the decks in stock.

The Antikythera Mechanism

The Antikythera mechanism is an ancient Greek analogue computer used to predict astronomical positions and eclipses for calendar and astrological purposes decades in advance. It gets its name from the Greek island off whose coast the device was found. It was fished out of the sea in 1901 and assumed to be some kind of archaeological mistake. It couldn’t possibly be from ancient Greece, could it? They didn’t have computers. Or did they?

The largest piece is this one. It’s about eight inches wide, and seven inches tall, something around there. That’s what I’m currently making.

The instrument is believed to have been designed and constructed by Greek scientists sometime around 70-60 BC. It was housed in a wooden box about 13.4″ × 7.1″ × 3.5″, and we know this because bits of the box were found around it. After conservation, the device came apart into 82 separate fragments, four of which contain gears, like this largest piece.

The Antikythera Mechanism originally had at least 30 meshing bronze gears, and up to 37 gear wheels that helped the device keep track of astronomical bodies like Mars, the Sun, and the Moon. It could predict eclipses, and could tell you when the next Olympic Games were going to be. It was extremely accurate as well, correctly reporting subtle variations in the lunar orbit, for example.

The device was originally found as a single lump that later separated into three main fragments, which conservation work divided into those 82 pieces. The largest gear is approximately 14 centimetres (5.5 in) in diameter and originally had 223 teeth.

All known fragments of the Antikythera mechanism are now kept at the National Archaeological Museum in Athens, along with a number of artistic reconstructions and replicas of the mechanism to demonstrate how it may have looked and worked.

My version started as Cosmo Wenman’s rough layout model of it, which is about the right shape, with technically accurate placements of the bronze gears and metal features. Wenman’s finished version makes use of a lot of post-printing texturing and paint, but I brought the model into ZBrush and added corrosion detail geometry before sending it to my printer, so a lot of the details on mine will already be there when the printing is done. The print will take two and a half full days on my Ender 3 3d printer.

I thought for a while that my version might be too big, but I took a quick measurement of the main gear while it was on the printer, and yeah, it does look like it’s about five and a half inches across, meaning that the size of my replica artifact is probably pretty close to the real thing.

Once it’s done, it gets a little cleanup with a brush to remove tiny filament strings, artifacts of the printing process. Then it gets painted, and I have purchased an artist’s airbrush and compressor for the purpose. Whatever details that didn’t make it into my sculpt I can probably fudge with paint. I figure once I get going it will take about a full day to paint it properly.

The lighting here shows off the printing artifacts on the surface that are going to be sanded off. There’s a lot of handwork to go before I can put any paint on this, and only a few days left before I deliver it.

Rather than sand it all down, I used acrylic sculpting medium instead and just filled in the raster lines from the printing. Then I painted it all flat black, then airbrushed it and added some finishing touches of Rub’n’Buff to make some of the details pop. My airbrush work is – well, I need a ton more practice, let’s put it that way.

Walt Disney

At Disneyworld in Orlando, Florida, there is a sculpture garden decorated with bronzes of famous Disney characters. One of them is a bust of the great man himself. Somebody took a 3d scan of it, and converted it into a geometry file, and being the fan of his work and the man himself as I am, I had to print a copy of it.

One of these days I’m going to print a really big one. The model is actually a lot better than the resolution of my printer can cope with at this size. Still, it’s gorgeous, and I’ve given away three copies of this thing to my Disney-fan friends so far.

I’m sorry you’re not still here, Walt. The world could use you. I’m doing my best to try to follow in your footsteps, but I’m not doing it very well, I’m afraid. I just don’t have the reach you did. But that doesn’t mean I’m going to stop trying. The world needs every bit of magic we can muster.

-30-