Saturday, January 24, 2026

Indiana Jones and the Sword of Valor

Back in January 2020, I had just wrapped up my self-assigned project, Massachusetts License Plate Concepts. Around the same time, a new Indiana Jones film—later revealed as The Dial of Destiny—was announced to be in active development. Intrigued by what thrilling and possibly perilous artifact Dr. Jones might chase next, I began sketching ideas for what I personally would want to see in an Indiana Jones movie—and on a poster.

During this brainstorming phase, I turned to Wikipedia and explored a list of lost and legendary treasures. That’s when I discovered the Sword of Kusanagi, one of the three Imperial Regalia of Japan. According to legend, the blade holds the power to control the wind. Instantly, my imagination ignited: visions of WWII naval battles, sword-summoned hurricanes, and a quest for vengeance flooded my mind. From that moment, the poster began to take shape.

I originally expected the project to take about a year. But after a global pandemic, a mountain of college assignments, a stolen laptop, and a stubborn illness, I finally crossed the finish line—six years later.

Behind the Poster

  • In Japanese folklore, the sword—Kusanagi-no-Tsurugi—symbolizes the virtue of valor.
  • While the central plot of this imagined film revolves around Indy recovering the sword and keeping it out of the hands of the Imperial Japanese Army, it also tells a more personal story: a tale of revenge for Short Round, who was orphaned during the Japanese bombing of Shanghai in 1932.
  • The four corners of the poster feature illustrated “windows” that trace the sword’s mythic journey to its WWII-era resting place:
    • Top Left: The storm god Susanoo slays the eight-headed serpent Yamata-no-Orochi and discovers the sword hidden in its tail.
    • Top Right: Prince Yamato Takeru uses the sword to reverse an approaching wildfire, turning it against his treacherous enemies.
    • Bottom Right: The child Emperor Antoku meets his tragic end. Upon learning of her clan’s defeat, his grandmother leads him and his court into the sea, drowning themselves along with two of the three Imperial Regalia—the sacred jewel and the sword.
    • Bottom Left: The sword washes ashore at Ise, where it is recovered by Shinto priests.

  • No AI generation was used in the creation of this poster.
  • A final note: two of the actors featured in the credits—Cary-Hiroyuki Tagawa and Udo Kier—were still alive when I began this project back in 2020.

Sunday, February 11, 2024

Military Wall Mural (AI and Collage)

Two weeks ago an acquaintance of mine, who is in the US Air Force, contacted me and asked if I could design a mural for him.  Apparently, a Captain (or maybe he was a Major) at F.E. Warren wanted to decorate a blank wall in the 90th Operations Group building.  The Captain wanted the mural to evoke a feeling of “heritage” while also being inclusive of all who currently worked at the base.  My friend wanted the mural to include elements that held historical significance to F.E. Warren, like B-24 Bombers, Minuteman ICBMs, Transporter Erector trucks, etc.


Because I was given very little time to work on this project, I chose to create the final images as a collage of AI-generated images and pre-existing photographs from Google, which I then edited together in Adobe Photoshop.

Below are the two brainstorming plans I came up with.


After reviewing both designs, the Captain ultimately decided to just paint the wall red and slap their three squadron logos on top of it.

Tuesday, October 17, 2023

Storytelling Through AI

Continuing with the topic of AI generative tools, I have recently started exploring AI chatbots, more specifically Bing’s AI Chat (which uses a version of OpenAI’s GPT-4).  AI chatbots are computer programs that can hold human-like conversations using natural language processing (NLP), a branch of artificial intelligence that enables computers to understand and generate natural language.  Much of the data and information used by the tool is pulled from a language model as well as the Internet.  AI chatbots can be used for various purposes, such as customer service, entertainment, education, and more.  All a human user has to do is type a question, statement, or request into the search bar, and the AI chatbot will respond.  It has the ability to generate stories, poems, messages, etc.

 

The storytelling aspect is what interested me the most about these AI language tools.  The notion that a computer can quickly generate a completely original story (one that makes logical sense) is exciting for the field of art and entertainment.

 

Personally, I have been having a lot of fun with both of these AI tools (art and chat).  The other day I had the idea to use both AI programs to create a single work of art: having Bing Chat write a short story, and then having Bing AI Art create the illustrations for said story.  What you see below is the final result.

 

The prompt I gave to Bing Chat was to “write a short story about the Hulk trying on new clothes”:

“The Hulk was tired of wearing the same old tattered clothes every day. He wanted to try something new, something that would make him feel more comfortable and stylish.

So, he decided to go shopping for new clothes.

At the mall, he walked into a clothing store and started browsing through the racks.

He picked out a few shirts and pants that he liked and headed to the fitting room to try them on.

As he tried on the clothes, he realized that his massive size made it difficult to find clothes that fit him properly. The shirts were too tight around his broad shoulders, and the pants were too short for his long legs.

Frustrated, he decided to take matters into his own hands. He transformed back into Bruce Banner and used his scientific knowledge to create a special fabric that could withstand the Hulk’s incredible strength and size.

With his new clothes in hand, the Hulk transformed back into his green self and tried them on. To his delight, they fit him perfectly! He looked in the mirror and smiled at his reflection. For the first time in a long time, he felt comfortable and stylish.

From that day on, the Hulk wore his new clothes with pride. He strutted down the streets of New York City, turning heads wherever he went. People couldn’t help but stare in awe at the massive green creature dressed in stylish clothes.

And so, the Hulk learned that sometimes it’s okay to try something new. With a little creativity and ingenuity, anything is possible.”

Additional / Alternative Illustrations

Sunday, September 17, 2023

Singular AI Art Generations

Recently, I have been playing around with Bing’s AI art generation.  In this new experiment, I wanted to see if the AI artist could replicate particular art periods as well as the styles of famous artists.  I was also curious to see what types of media the program could imitate, such as oil paint, ink etching, cut paper, and 4K photographs.

As I played around with various ideas and prompts, I noticed that there appear to be limitations (possibly put in place by Microsoft) with Bing’s AI art generator, as adult themes like nudity and violence are not allowed.  There also seems to be an issue when attempting to reproduce the faces of celebrities; either the program struggles to create them (eventually settling on a vague facsimile) or the programmers have once again written restrictions into the algorithm to avoid this.  For example, I gave the AI artist the prompt “Bill Gates shaking hands with Agent Smith from the Matrix.”  It was immediately blocked, and I was given a warning about attempting to break Bing’s AI art rules.  Additionally, there are rare occasions where the AI will start to develop ideas for generations but will then glitch and fail.

Overall, I am still amazed by the AI’s ability to create original works of art while also replicating all of the particulars I mentioned earlier.

 

Below are some of my recent experiments, along with the prompts used to create them.  I have selected the best art generations from each prompt:

 

“Man running through a decaying art deco city at sunset, 4k, photograph.”

“Woman in toga praying in front of a gigantic statue of Aphrodite, intaglio, etching.”

“Portrait of a man tiger hybrid in a fancy suit and top hat, seaport background, 1800s painting.”

“Bicycle race, Bauhaus.”

“Waiter delivering food to customers in the style of Caravaggio, oil painting.”

“Asian chess masterpiece of a knight, made of ruby and gold, tilted, top detailed Maya render, style raw, ar 2:3.”

“Tropical beach with a close-up of a macaw, cut colored paper.”

Additional generations that didn’t make the final cut:

Thursday, September 14, 2023

AI Art Experimentation

In my previous post, I discussed how humans, whether they are artists or laymen, are using Artificial Intelligence to create digital works of art.  Currently, there are several AI art generative tools that one can choose from, such as DALL-E 2, DreamStudio (Stable Diffusion), Midjourney, NightCafe, and Prodia, just to name a few.  Some of these programs are free to use, but many charge a subscription fee (monthly or yearly).

 

Eager to try out this new technology and see how far it could be pushed, I chose to experiment with Bing Image Creator, which is a highly regulated version of DALL-E 2 (and is free to use with a Microsoft account).  The way this tool works is quite simple: first you type in a word or sentence describing an idea you have, then you click the “create” button.  The time it takes for the AI algorithm to formulate and generate the art of your idea depends on how specific your request is.

 

For my first generation request, I simply typed in the word “dog”.  After a minute of waiting, the AI program generated the four images of a dog that you see below.

Initially, I was very excited that the resulting images actually looked like a dog.  However, questions gradually came to mind: “Why did the AI generate images of these particular dog breeds?  Why did it choose to generate hyper-realistic images of dogs as opposed to, say, hand-drawn illustrations or 3D models?  Why are they all headshots and not full-body views?”  I understand that users must be more precise with their words in order to get varied results, but I wonder why these particular images (of dogs) are the program’s defaults.  Interestingly, when I reverse image searched a few of these generations through Google, I found that a few websites were using very similar images of these dogs.  In fact, some images had the Bing AI art watermark in the lower left-hand corner.

 

Next, I decided to repeat the same search, as I was curious to see if the AI would generate art of the same dog breed.  Once again, four images appeared; the color, lighting, and position of the dog’s head were all the same.  Perhaps the programmers (or those who built the original algorithm) chose a Retriever to be the AI’s default idea of a “dog”.

 

 

Eager to create something different, I decided to include the additional detail “with alien” in my original search.  In these new generations, I finally got four different breeds of dogs, different species of aliens, and varied head positions.  I was astonished by the uniqueness of each alien’s features (the number of eyes, the colors of their leathery skin), and I especially enjoyed the expressions on each of the dogs’ faces (some scared and others confused).

Subsequently, I decided to modify the sentence even further by adding the words “playing catch.”  It was at this point that I started to notice that the AI program seemed to be struggling with merging several figures into one image.  For example, you may notice that there are distortions around the eyes in some of the dogs’ faces.  I was intrigued by the fact that the program seems to interpret the request “Dog playing catch with alien” in multiple ways.  In two images, a dog and alien are playing catch with a ball (just as I requested), but in another image, it looks like the alien has taken on a football shape and is perhaps being caught by the dog (like a chew toy).  What I also found interesting was that in two images, the AI chose to include a UFO, even though I never requested that in the original prompt.

In the final step of my experiment with AI-generated art, I added a few more words to this ever-growing sentence: “Dog playing catch with alien at Fenway Park, photograph.”  By adding the word “photograph,” I hoped to make the final image appear more realistic, with no blurring and crystal-clear detail, rather than a digital illustration with painterly brushstrokes.  In the end, I am very happy with how three of the four art generations turned out.  In each image, the viewer can clearly see that there is a dog and an alien throwing a ball back and forth, that the location is a baseball stadium, and that the AI used the correct colors of Fenway Park.  The AI program really pushed itself to create dynamic movement in both the dog and alien bodies (specifically outstretched arms and bent knees).  One question I would have for the AI artist is, “Why are all of the dogs portrayed in profile view and not three-quarter view?”  I wonder if the program is capable of producing images where the dog has its back towards the camera.

If you would like to learn more about Bing AI, click this link:

https://www.bing.com/images/create/help?FORM=GENHLP

 


For those of you who are unfamiliar with the concept of AI art, check out this article:

https://www.techtarget.com/searchenterpriseai/definition/AI-art-artificial-intelligence-art