Peanutbjelly

joined 1 year ago
[–] Peanutbjelly@sopuli.xyz 0 points 1 month ago

I see intelligence as filling areas of concept space within an eco-niche in a way that proves functional for action within that space. I think we're increasingly finding that "nature" has little commitment to any particular design, and is just optimizing preparedness for the expected level of entropy within the functional eco-niche.

Most people haven't even started paying attention to distributed systems building shared enactive models, but those systems are already capable of things that should be considered groundbreaking given the time and money spent on their development.

That being said, localized narrow generative models are just building large individual models of predictive processing that don't, by default, actively update their information.

People who attack AI for just being prediction machines really need to look into predictive processing, or learn how much we organics just guess and confabulate on top of vestigial social priors.
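For anyone curious what "prediction machine" means mechanically, here's a deliberately tiny sketch (every name and number in it is mine, purely for illustration, not from any particular brain model): a belief gets nudged by precision-weighted prediction error, and predictive processing is roughly this loop stacked into hierarchies.

```python
import numpy as np

# Toy predictive-coding loop: a single belief about a hidden cause is
# repeatedly updated to reduce precision-weighted prediction error.
belief = 0.0           # current estimate of the hidden cause
precision = 0.5        # trust assigned to incoming sense data
learning_rate = 0.1

rng = np.random.default_rng(0)
observations = rng.normal(loc=2.0, scale=0.3, size=200)  # noisy senses

for obs in observations:
    prediction_error = obs - belief            # the "surprise" signal
    belief += learning_rate * precision * prediction_error

print(f"final belief: {belief:.2f}")  # settles near the true mean, 2.0
```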

But no, corpos are using it, so computer bad, human good. Even though the main issue here is the humans who have unlimited power and are encouraged into bad actions by flawed social posturing systems and the conflation of wealth with competency.

[–] Peanutbjelly@sopuli.xyz 0 points 2 months ago* (last edited 2 months ago)

While I agree about the conflict of interest, I would largely say the same thing even without one. However, I see intelligence as a modular, many-dimensional concept. If it scales as anticipated, it will still need to be organized into different forms of informational or computational flow for anything resembling an actively intelligent system.

On that note, recent developments with tools like RxInfer are astonishing given the current level of attention being paid to them. Seeing how LLMs are being treated, I'm almost glad it's not being absorbed into the hype-and-hate cycle.
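For the unfamiliar: RxInfer is a Julia package that runs Bayesian inference by message passing over factor graphs. As a rough flavor of the belief updating it automates, here's a hand-rolled Python toy (mine, not RxInfer's API): the classic conjugate coin-bias update.

```python
# Hand-rolled Beta-Bernoulli belief update -- illustrative only, not
# RxInfer's API. RxInfer automates this kind of update across whole
# factor graphs via reactive message passing.
alpha, beta = 1.0, 1.0                     # uniform Beta prior on coin bias
observations = [1, 0, 1, 1, 0, 1, 1, 1]    # made-up coin flips

for y in observations:
    alpha += y          # conjugate update: count heads...
    beta += 1 - y       # ...and tails

print(f"posterior mean bias: {alpha / (alpha + beta):.2f}")  # 0.70
```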

[–] Peanutbjelly@sopuli.xyz 0 points 6 months ago

I'm talking about the general strides in cognitive computing and predictive processing.

https://youtu.be/A1Ghrd7NBtk?si=iaPVuRjtnVEA2mqw

Machine learning is still impressive; we can just frame the limitations better now.

For the note on scale and ecosystems, review recent work by Karl Friston or Michael Levin.

[–] Peanutbjelly@sopuli.xyz 0 points 6 months ago (2 children)

Perhaps instead we could just restructure our epistemically confabulated reality in a way that doesn't inevitably lead to unnecessary conflict between diverging models that haven't grown the priors necessary to peacefully allow comprehension and the ability to coexist.

*breath*

We are finally coming to comprehend how our brains work, and how intelligent systems generally work at any scale, in any ecosystem. Subconsciously enacted social systems included.

We're seeing developments that make me extremely optimistic, even if everything else is currently on fire. We just need a few more years without self-focused turds blowing up the world.

[–] Peanutbjelly@sopuli.xyz 2 points 8 months ago* (last edited 8 months ago) (1 children)

The main issue though is the economic system, not the technology.

My hope is that it shakes things up fast enough that they can't boil the frog, and something actually changes.

Having capable AI is a more blatantly valid excuse to demand a change in economic balance and redistribution. The only alternative would be to destroy all technology and return to monkey. I'd rather we just fix the system, so that technological advancements don't seem negative simply because the wealthy have hoarded the gains of every new technology for the past handful of decades.

Such power is discreetly weaponized through propaganda, influence, and economic reorganization to ensure the equilibrium holds until the world is burned to ash, in sacrifice to the lifestyle of the confidently selfish.

I mean, we could have just rejected the loom. I don't think we'd actually be better off, but I believe some of the technological gain should have been less hoardable by the existing elite. It's almost as if they used their wealth to prevent any gains from slipping away to the poor. Fixing the issue before it got this bad would have been the proper answer. Now people don't even want to consider that option, or they say it's too difficult, so we should just destroy the loom.

There is a Markov blanket around the self-perpetuating lifestyle of modern aristocrats, one obviously capable of surviving every perturbation. Every gain we make as a society has made that reality more entrenched, entirely because of where new power gets distributed. People are afraid of AI turning into a paperclip maximizer, but that's already what happened to our abstracted social reality. Maximums being maximized and minimums being minimized in a complex, chaotic system of billions of people leads to an inevitable accumulation of power and wealth wherever they have already been gathered. Unless we can dissolve the political and social barriers maintaining this trend, we will be stuck with our suffering regardless of whether we develop new technology.

Although it doesn't really matter where you are or what system you're in right now. Odds are there is a set of rich assholes working as hard as possible to keep you from any piece of the pie that would destabilize the status quo.

I'm hoping AI is drastic enough that the actual problem isn't ignored.

[–] Peanutbjelly@sopuli.xyz 0 points 11 months ago

Finally set it up today. A bit limiting, but definitely useful. Can't seem to set it up with 1200x1200, which is a shame, since that's been my SDXL sweet spot.
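For what it's worth, a guess at the cause and a sketch of a workaround (assuming the stock SDXL base checkpoint and the diffusers library; the front end in question may differ): many SD tools only accept dimensions in steps of 64, and 1200 isn't one, while 1216 is.

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# 1200 is not a multiple of 64, which many front ends require;
# 1216x1216 is the nearest size above it that satisfies this.
image = pipe(
    "a garden of tiny autumn trees",
    width=1216,
    height=1216,
).images[0]
image.save("garden.png")
```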

[–] Peanutbjelly@sopuli.xyz 2 points 11 months ago* (last edited 11 months ago)

I conflate these things because they come from the same intentional source. I associate the copyright-chasing lawyers with the brands that own them; this is just a more generalized example.

Also, an intern who can give you a song's lyrics was trained on that data. Any sufficiently advanced future system is largely the same, unless it is just accessing a database or index, as in web searching.

Copyright itself is already a terrible mess that largely serves brands which can afford lawyers to harass or contest infringements. This is especially apparent after companies like Disney have all but murdered the public domain as a concept; see the Mickey Mouse Protection Act, as well as other related legislation.

This snowballs into an economy where Disney and similarly benefited brands can hold on to ancient copyrights and use their standing value to own and control the development and markets of new intellectual properties.

Now, a neural net trained on copyrighted material can reference that memory at least as accurately as an intern pulling from their own memory, unless either is accessing a database to pull the information. To me, suing on that basis ultimately follows logic that would have copyrighted material removed from our own stochastic memories, since we would have established that high-dimensional informational storage is a form of copyright infringement whenever anyone makes the effort to draw on that information.

Ultimately, I believe our current system of copyright is entirely incompatible with future technologies, and could lead to some scary arguments and actions from the overbearing oligarchy. To argue in favour of these actions is to argue never to let artificial intelligence learn as humans do. Given our need for this technology as a species to survive the near future, or at least to minimize excessive human suffering, I think the ultimate cost of pandering to these companies may be indescribably horrid.

[–] Peanutbjelly@sopuli.xyz 31 points 11 months ago (27 children)

Music publishers sue-happy in the face of any new technological development? You don't say.

If an intern gives you some song lyrics on demand, do they sue the parents?

Do we develop all future AI technology only when it can completely eschew copyrighted material from its comprehension?

"I am sorry, I'm not allowed to refer to the brand name you are brandishing. Please buy our brand allowance package #35 for any action or communication regarding this brand content. "

I dream of a future when we think of the benefit of humanity over the maintenance of our owners' authoritarian control.

[–] Peanutbjelly@sopuli.xyz 1 points 1 year ago* (last edited 1 year ago)

Might have to edit this after I've actually slept.

Human emotion and human-style intelligence do not exhaust the entire realm of emotion and intelligence. I define intelligence and sentience on different scales. I consider intelligence the extent of capable utility and function, and emotion just a different set of utilities and functions within a larger intelligent system. Human-style intelligence requires human-style emotion. I consider GPT an intelligence, a calculator an intelligence, and a stomach an intelligence. I believe intelligence can be preconscious or unconscious, existing independently of any functional system complex enough for emergent qualia and sentience. Emotions are one part of this system, specific to adaptation within the historical human evolutionary environment. I think you might be underestimating the alien nature of abstract intelligences.

I'm not sure why you are so confident in this statement; you still haven't given any actual reason for the belief. You are presenting it as consensus, so there should be a very clear reason why no successful, considerably intelligent function could exist without human-style emotion.

You have also not defined your interpretation of what intelligence is; you've only denied that any function untied to human emotion could be an intelligent system.

If we had a system that could flawlessly complete François Chollet's Abstraction and Reasoning Corpus (ARC), would you suggest it is connected to specifically human emotional traits due to its success? Or is that still not intelligence if it lacks emotion?
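For readers who haven't seen it: an ARC task is just a JSON file of demonstration grids plus a test grid, with no emotional content anywhere in the format. A sketch of the structure (the cell values below are invented; real tasks live in the fchollet/ARC repository):

```python
# Shape of an ARC task: a few input/output demonstration pairs, then a
# test input the solver must generalize to. Cells are integers 0-9,
# each standing for a color. (Values here are invented.)
task = {
    "train": [
        {"input": [[0, 1], [1, 0]], "output": [[1, 0], [0, 1]]},
        {"input": [[2, 0], [0, 2]], "output": [[0, 2], [2, 0]]},
    ],
    "test": [
        {"input": [[3, 0], [0, 3]]},  # expected answer: swap the cells
    ],
}
```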

You said neural function is not intelligence. But would you also exclude non-neural informational systems, such as collectives of cooperating cells?

Are you suggesting the real-time ability to preserve contextual information is tied to emotion? Sense interpretation? Spatial mapping with attention? You have me at a loss.

Even though your stomach cells' interactions are an advanced function, are they completely devoid of any intelligent behaviour? Then shouldn't the cells fail to cooperate and dissolve into a non-functioning system? Again, are we only including higher introspective cognitive function? You can have emotionally reactive systems without that. At what evolutionary stage do you switch from an environmental reaction to an intelligent system? The moment you start calling it emotion? Qualia?

I'm missing the entire basis of your conviction. You still have not made any reference to any aspect of neuroscience, psychology, or even philosophy that explains your reasoning. I've seen the opinion out there, but not in any strict form, nor as the consensus you seem to suggest.

You still have not shown why any functional system capable of addressing complex tasks is distinct from intelligence unless it has human-style emotion. Do you not believe in swarm intelligence? Or, again, do you define intelligence by fully conscious, sentient, and emotional experience? At that point you're defining intelligence as emotional experience completely independent of the ability to solve complex problems, complete tasks, or make decisions that reduce prediction error. By that definition we could have completely unintelligent robots capable of doing science and completing complex tasks beyond human capability.

At which point, I see no use in your interpretation of intelligence.

[–] Peanutbjelly@sopuli.xyz 1 points 1 year ago (2 children)

What aspect of intelligence? The calculative intelligence in a calculator? The basic environmental response we see in amoebas? Are you saying that every single piece of evidence shows a causal relationship between every neuronal function and our exact human emotional experience? Are you suggesting GPT has emotions because it is capable of certain intelligent tasks? Are you specifically tying emotion to abstraction and reasoning beyond GPT?

I've not seen any evidence for what you're suggesting, and I do not understand what you are referencing or how you are defining the causal relationship between intelligence and emotion.

I also did not say the system will have nothing resembling the abstract notion of emotion; I'm just noting the specific reasons human emotions developed as they did, and I would consider individual emotions a unique form of intelligence, each serving its own function.

There is no basis for the anthropomorphic emotional inclinations you are assuming. I also do not agree with your assertion of consensus that all intelligent function is tied specifically to the human emotional experience.

TLDR: what?

[–] Peanutbjelly@sopuli.xyz 1 points 1 year ago (4 children)

But specifically human emotion? Tied to survival and reproduction? There is a whole spectrum of influence from our particular genetic history. I see no reason that a useful functional intelligence can't be parted from the most incompatible aspects of our very specific form of intelligence.

 

One of my favourite things about Stable Diffusion is that you can get weird, dream-like worlds and architectures. How about a garden of tiny autumn trees?
