This is an idea I’ve been toying with for a bit. A ton of media includes unimportant information that doesn’t need to be stored pixel-perfect. Storing large portions of the image data as text would save substantial amounts of storage, and as on-device image generation becomes commonplace, digital memories will become the main way people capture the world around them. I think this will inevitably be the next form of media capture (photography and video), not replacing other methods/formats, but I could see things like phone cameras defaulting to saving images as digital memories to save on storage.
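A minimal sketch of what such a “digital memory” container could look like, assuming the subject is kept as exact pixels while everything else is reduced to a text prompt plus a generator seed. All names, fields, and the byte layout here are hypothetical, just to make the idea concrete:

```python
import json
import struct
from dataclasses import dataclass

@dataclass
class DigitalMemory:
    """Hypothetical container: exact subject pixels + text for the rest."""
    subject_png: bytes      # lossless crop of the subject, stored verbatim
    background_prompt: str  # text description, regenerated on each viewing
    seed: int               # pins the generator so recalls look similar

    def pack(self) -> bytes:
        # layout (assumed): [4-byte meta length][meta JSON][subject PNG bytes]
        meta = json.dumps({"prompt": self.background_prompt,
                           "seed": self.seed}).encode("utf-8")
        return struct.pack(">I", len(meta)) + meta + self.subject_png

    @classmethod
    def unpack(cls, blob: bytes) -> "DigitalMemory":
        (meta_len,) = struct.unpack(">I", blob[:4])
        meta = json.loads(blob[4:4 + meta_len].decode("utf-8"))
        return cls(blob[4 + meta_len:], meta["prompt"], meta["seed"])

# round-trip: the subject bytes survive exactly; the background is only text
mem = DigitalMemory(b"\x89PNG...", "a beach at sunset, plastic bag in the distance", 42)
assert DigitalMemory.unpack(mem.pack()) == mem
```

The point the thread argues over is visible right in the structure: `subject_png` round-trips bit-for-bit, while everything in `background_prompt` is lossy by design and depends on whatever model and seed interpret it later.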
Currently, storage space is significantly cheaper than all the CPU power needed to generate the images from a text description. Also, what if you actually wanted to view the background of the object? And where’s the advantage besides an at best 40% increase in storage efficiency? After all, people are taking pictures to actually capture the moment. Otherwise they would do voice memos all the time.
after all, people are taking pictures to actually capture the moment
Depending on what you mean by “the moment”, I don’t think that’s really true. Modern cell phone photography doesn’t really give you what the sensors have picked up. You take a picture of your friend with his eyes closed and the phone will change the picture to have his eyes open. You take a blurry picture of the moon and your phone will enhance it into a better picture of the moon. Some people hate it, but a lot of people actually like it.
And they like it because they don’t really take pictures for the purpose of posterity. They don’t take a picture of their friend because they need to look back 20 years from now and remember exactly how that one plastic bag 30m in the distance was crumpled. They take the picture because they want to post to Instagram, get some likes from their friends, and maybe look back 20 years from now to remember the general vibe, and they’re happy if their phone can “enhance” that for them.
If people could record a voice memo and have their phone actually make a really decent Instagram post out of it for them, I 1000% believe people would do it instead of taking an actual picture. Posting pictures is more about socializing than it is about posterity.
People still photograph analog
The mother of all lossy compression
I’m sorry, but no. Not only does that involve a ton of extraneous processing on both ends (when saving and when recalling the image), but the rest of the image is still important, too! Can you imagine taking a photo at a family gathering, and then coming back later to see randomly generated people in the background? A photograph isn’t just about the “subject”, it’s often about a moment in time.
I don’t think this will work well and others already explained why, but thanks for using this community to pitch your idea. We should have more of these discussions here rather than CEO news and tech gossip.
I think the music version would be MIDI.
No, just, no.
MIDI is awesome in some ways, but I would never replace an actual recording of an instrumental song with a MIDI file.
MIDI is too dependent on the decoder and won’t replicate the sound accurately.
Have you heard of MIDI2?
Have you? Do you understand what MIDI2 offers? What you’re describing is unrelated to controlling synths and samplers. It seems more like a mod file system where the samples are generated on the fly, and that has all the same limits that generative art does.
Don’t get me wrong, MIDI is awesome as a music creation tool, but as a recording tool it just won’t work.
I love trackers, and I have thousands of C64/Amiga remixes made on trackers (you can get the songs themselves at Remix64). But a MIDI or mod tracker is just fundamentally the wrong technology for recording a copy of music. I’m sure there are programs to build MIDIs from a recording, but that’s a new work based on an old one, not a recording.
So I take a photo of a friend and then the ai changes them and it’s no longer them.
Actually sounds like a black mirror episode. So… congrats on that?
As the “object” the friend would stay the same in this proposal, but everything behind them would vary.
I mean… that’s gonna fuck with people’s memory. And what of the software that provides this service? What things will it decide aren’t important? A child playing in the background? A beautiful sunset that should be remembered as it was.
I think it’s an interesting idea, but the power it takes to make this happen every time you view the image and the wild inaccuracy that would inevitably happen would create significantly more problems than just improving image compression or investing time in increasing the capacity of existing storage devices.
Realistically, in the future we won’t have to worry at all about our disk space. It’s almost to that point already. My NAS is way more than I’ll ever use, and it was pretty cheap.
It could be an interesting idea, but would be terrible to implement for anything where accuracy mattered.
Generally when you’re doing video or image editing, you don’t want the image to change after you’re done saving it. That would be a loss of hundreds of hours of work in some cases. And if you’re working on something where small details matter, those might get lost in translation.
As someone who enjoys photography, this seems dumb.
This just adds another abstraction layer, on top of all the other abstraction layers, and for what?
Saving storage?
Storage is fairly cheap these days, processing power, less so.
We already have image compression, and you don’t need to save every raw file (though I save both the raw and the JPG I get from my camera). If space is running out, get another hard drive.
I also don’t believe that an AI would be able to recreate a picture exactly the same way every time, even from the same prompt.
You would need to describe the image in excruciating detail to get the AI to draw the same picture every time. It would also take time to generate the image every time you want to see it; sure, caches exist, but they take up storage and/or RAM.
The pitch is that everything surrounding the subject is extra, and so it doesn’t matter if it’s the same every time. It’s literally throwing that information away in favor of a simplified description. It’s extremely processor-intensive data compression.
That is completely terrible; the background is often critical to the photo. There are only a tiny number of photos where the background might not matter.
The author claims to want to help preserve memories, but to me, it seems like this concept would change existing memories.
I use my photo collection as a way to remember events and places, making the memories clearer when I look at them. I can’t imagine a time when I would ever want parts of an image to change on its own from viewing to viewing.
The only kind of photos this could work for would be stock photos, where the customer won’t care whether the photo conveys a memory as long as it conveys the message they want.
This is a dumb concept, less dumb in some specific areas, but still dumb. It feels kind of like that woman who tried to restore the painting of Jesus in Spain…
I love this and have had similar thoughts in relation to my non-verbal kid wanting to keep memories in a way they can point out different parts and link multiple things together to make new stories or comments or hypotheticals. It’s important to have the context and the parts and the named things all relating together. I don’t know much about it, but there is a thing called a “sidecar” file that can be associated with media. There are also some moves to make EXIF data more standardized. So there’s a chance this could be done in an open format.
OP sounds like he’s making a data compression pitch, but I think you have the better idea. I think surrounding the picture with a lot of contextual data about when/why/how this picture was taken will absolutely help recall and connecting to related concepts.
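As a sketch, a contextual sidecar could be as simple as a JSON file saved next to the image. The filename convention (same name, `.json` extension) and the field names below are assumptions for illustration, not any existing standard:

```python
import json
from pathlib import Path

def write_sidecar(image_path: str, context: dict) -> Path:
    """Write a JSON sidecar next to the image (e.g. photo.jpg -> photo.json)."""
    sidecar = Path(image_path).with_suffix(".json")
    sidecar.write_text(json.dumps(context, indent=2))
    return sidecar

def read_sidecar(image_path: str) -> dict:
    """Load the sidecar that sits beside the given image."""
    return json.loads(Path(image_path).with_suffix(".json").read_text())

# hypothetical context: who/when/why, plus links to related memories
write_sidecar("birthday.jpg", {
    "when": "2024-06-01",
    "who": ["Sam", "Alex"],
    "story": "Sam pointing at the cake; links to last year's party photo",
})
print(read_sidecar("birthday.jpg")["who"])  # → ['Sam', 'Alex']
```

Because the pixels are left untouched and the context lives in a separate open file, this gets the recall/linking benefit without any of the regeneration problems raised elsewhere in the thread.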
How is subject determined? Did you manually cut out that image?
This is a really cool idea. Other posters here have explained why it isn’t a good one, but I still think it’s neat. Maybe there’s a niche edge case for such a thing? If nothing else, it’s very sci-fi.
Agreed, it’s fun to think about even if not practical. If anything, it reminds me of how my own memory works, where it’s more like a description of what I saw than an image.
One use case might be for asset store pages, to show the object in various environments.
Maybe even have a field where you can input a description of the environment where you plan to use the asset
Please don’t patent it
Op, please patent it so no other company can use it for the next 20 years
/s
I like the idea. It’s basically turning B-roll and background info into reproducible info. So you could, for example, get a pixel-perfect 8K view of the main subject and edit around that instead of needing actual 8K of unimportant background scene.
An added angle would be exploring latent space more, to see how precise you might be able to get with the AI-compressed details.