

Advice on putting AIs in a solarpunk setting (by an ML engineer)


In reply to keepthepace

nice! i've been a bit wary of llms because of electricity usage and environmental impact. is there anything you can point me to for running a greener ai myself?
In reply to max

Self-hosted KoboldAI with any model you would like, on a PC which is powered by solar panels?
In reply to WeLoveCastingSpellz

An environmentalist activist at our local fablab once told me "forget about your fancy electronics, insulate your motherfucking water heater!" :-)

When you look at the hard numbers, if your AI machine is your main energy use and source of CO2 emissions, you are doing extremely well from an environmental point of view. Even running an LLM locally probably requires less than 300W at peak, which is very easy to get from solar power.

In reply to keepthepace

One issue here is that that's 300W more than would otherwise have been consumed, requiring additional solar panels that would not otherwise have needed to be produced. The net effect is quite carbon intensive.
In reply to 𝗧𝗼𝗮𝘀𝘁𝗲𝗿 *𝑣𝑒𝑟𝑦 𝑝𝑢𝑠ℎ𝑒𝑑 𝑑𝑜𝑤𝑛

I give 300W because that's what my own computer uses. It is not optimal for merely running a well-designed system; I built my machine to be able to do training experiments too. With a mature system, you can probably get away with a 30W peak and much less when idle. If that gives you a home AI able to optimize the heating patterns of your house or shut down unused circuits, avoiding one or two car trips a month, the environmental benefit far outweighs the cost.

In reply to schmorp

370W is quite a bit of electricity, and that could scale up as people run more computationally heavy AI or want shorter runtimes; 500-1000W PSUs are common. For reference, a Raspberry Pi 4 consumes 2.5-4W under a normal load. The average American home consumes 30-50kWh per day, which works out to an average draw of ~1-2kW. So 370W accounts for 19% to 37% of an average American home's electricity consumption.
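As a quick sanity check on those percentages, here is a back-of-envelope sketch in Python (the 1-2kW figure is the rounding of 30-50kWh/day spread over 24 hours):

```python
# Back-of-envelope check of the figures above (numbers taken from the post).
PC_PEAK_W = 370              # peak draw of the AI-capable desktop
HOME_AVG_W = (1000, 2000)    # ~1-2 kW average draw, i.e. 30-50 kWh over 24 h

# Fraction of an average home's draw the machine would use if running flat out.
shares = [PC_PEAK_W / w for w in HOME_AVG_W]
print([f"{s:.1%}" for s in shares])  # roughly 37% down to ~18.5%
```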

We need to be careful with the eco-modernist perspectives on AI, since we're already seeing an increased demand for power, which is one of the main growth-related issues impacting climate change. AI certainly has a place, especially in medical research. However, there are also many talented human artists who are looking for work, while image generation mostly drives work efficiency, which is far from essential.

Critically appraising the accelerationism around AI by no means makes one a luddite. I say this as someone deeply entrenched in tech.

In reply to 𝗧𝗼𝗮𝘀𝘁𝗲𝗿 *𝑣𝑒𝑟𝑦 𝑝𝑢𝑠ℎ𝑒𝑑 𝑑𝑜𝑤𝑛

aren't raspi 4s more like 10-12W when you're using them for desktop? or have i gotten them confused with the raspi 5s, idk >~< i've still got a 3b+ and a B model lmao, i stick by reduce/reuse before buying new. i do agree tho that 370W is quite a lot for a computer; my main laptop is 150W and it is incredibly snappy + has a dgpu (NVIDIA GTX 1060M), so i'd honestly consider that the max power consumption for a computer, i don't really see why you'd need more. i have an old lenovo not-quite-thinkpad that was like $150 and came out just before soldered ram + emmc, so i've managed to upgrade it to 8gb ram (it came with 2 [WITH WIN 10!!??]). it's a dual-core 2.1GHz and a bit slow for my tastes, but it can do word processing and ~some~ web browsing, and it has a usual power consumption of 7W off the battery and about 20W when plugged in + charging, so i'm very happy with that old thing :3
In reply to 𝗧𝗼𝗮𝘀𝘁𝗲𝗿 *𝑣𝑒𝑟𝑦 𝑝𝑢𝑠ℎ𝑒𝑑 𝑑𝑜𝑤𝑛

370W is peak consumption for my computer, which is oversized if you only want to run inference; I also want to run training. If LLMs become a commodity, they will likely run on specialized hardware that is unlikely to draw more than 20W while running, which is likely to be less than 1% of the time. I used my 370W figure to state something I am sure of, and to show that even without any optimization effort, this is at worst a very manageable amount.
In reply to keepthepace

thanks for the reply!! i have an old "gaming" laptop (that i just use for most things tbh), but it's 150W and has a GTX 1060M in it, so i would be curious to see if i could run some sort of local ai on it. i believe the CO2/kWh is pretty good here, we rely mostly on renewables but still have some coal power stations around -_- ah! if training the ai is the hard part and running it is easier on energy, then it may be better than i thought, though i am wary of how it gets used by companies... going into an engineering field in the next couple years, i'm very aware that llms are gonna be a big tool to use. i don't agree with stuff like chatgpt for privacy reasons, but i was unsure how bad the environmental impact would be if i tried it, thanks for the info!! :3
In reply to max

The important spec for being able to run a good model is the VRAM. Yours with 6GB is a bit on the low end, but I guess it could run a 7B model with heavy quantization?
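To see why 6GB is tight, here is a rough sketch of weight memory at different quantization levels (bytes-per-parameter figures are approximations; real GGUF files add some overhead for context and metadata):

```python
# Rough weight-memory estimate for a 7B-parameter model.
# Bytes-per-parameter values are approximate for common quantization formats.
PARAMS = 7e9
BYTES_PER_PARAM = {"fp16": 2.0, "8-bit": 1.0, "4-bit": 0.5}

sizes_gb = {name: PARAMS * b / 1e9 for name, b in BYTES_PER_PARAM.items()}
for name, gb in sizes_gb.items():
    print(f"{name}: ~{gb:.1f} GB")
# fp16 (~14 GB) won't fit in 6 GB of VRAM, but 4-bit (~3.5 GB) can.
```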

i am wary of how it gets used by companies


Me too, that's why I feel it is important that people use these models without relying on companies.

In reply to keepthepace

Nice vision! Please keep advocating this viewpoint to non-insider audiences. It’s a very realistic extrapolation from today to the near-future. It tickles me because it is also hopeful.

We are building these things so shouldn’t we think really hard, all of us, about how they are built? Everyone is a stakeholder in the future.

In reply to meyotch

Thank you for your kind words! 2023 has been a weird year for me as a lifelong AI enthusiast. I saw people go from "it is impossible" to "we should not do it" very quickly, and was sad that the utopian point of view, which has been the motor of most researchers, is almost never represented in the media. I almost feel that it is more important right now to take part in the public discussion than it is to code AI systems.
In reply to schmorp

I might not be hired as a translator


Everything in automation has the same effect: human work becomes obsolete.

Most of the time it is work that nobody likes, like operating an elevator or copying books by hand. Sometimes it is work that someone likes, like knitting or delivering newspapers by bike.

LLMs and the like are nothing new in that regard, although today's LLMs are not an actual replacement for a professional translator.

In reply to keepthepace

As automation progresses, “tax the rich” becomes an increasingly obvious thing to push for.


Don't tax the rich, abolish them.

In reply to keepthepace

It is not work we want, it is income.
I think if we had no need for income as a society, we would find pleasure in doing work we enjoy and we would want to work. But maybe we won't call it work.
In reply to CubitOom

Exactly!

I have been to rice harvests that were actually the village's social event of the month. There is a way to partify work that could create a totally different society! Labor abolition could have happened before automation, but we opted out of it. Now it simply becomes much harder to avoid.

In reply to meyotch

Everyone is a stakeholder in the future.


This is really cool, I like it. We go from 'sufferers of the future' to 'stakeholders in the future'.

In reply to keepthepace

I agree with this take that the path to implementing AI from a solarpunk perspective is through open source software which promotes collaboration and privacy.
In reply to keepthepace

Open source and the self-sufficiency of self-hosting and running software locally are at the core of solarpunk.

The more LLMs are optimized in the future, the easier it will be to host and run them on lower-end, upcycled hardware with fewer resources.

ollama is great for self-hosting LLMs. It comes with several very useful models that work out of the box from its library. With a little bit of effort (creating a Modelfile and adding a FROM line pointing to the model's path) one can also run any openly available model if it's in GGUF format, or take models found on Hugging Face and convert them to GGUF if they aren't already.
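As a sketch of that Modelfile workflow (the GGUF filename and model name here are made-up examples; check the ollama docs for the version you run):

```
# Write a minimal Modelfile pointing at a local GGUF file (path is an example).
cat > Modelfile <<'EOF'
FROM ./models/my-model-7b.Q4_K_M.gguf
EOF

# Register it with ollama under a name of your choosing, then chat with it.
ollama create my-model -f Modelfile
ollama run my-model "Hello!"
```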

In reply to keepthepace

Absolutely brilliant to see my post spark a discussion like this; I'll be taking a ton of this on board and rethinking it all. Thank you! :D
In reply to keepthepace

Well, this is the first time somebody has managed to convince me that AI could actually improve things, so congrats on that. It has to do with your mentioning of AIs being made as a result of public debate; I'm seeing major parallels to scientific research. Research must be done and documented with rigor, and fully exposed to peer review and criticism. If - and I think only if - AI models end up being handled the same way, I think we have a chance. Not sure if it's already too late for that, but it's a chance.
In reply to Landsharkgun

Thanks for the kind words!

I think the process will have to be different: scientific research has a ground truth; it tests its theories against reality. With AI design we veer into the morality side of things, where ground truths do not exist and the process will have to be a bit political, with people holding coherent but incompatible opinions debating.

But I do think that it will be easier than we suspect. We actually agree on 90% of what we want done: we want a labor-free life, we want free housing. All we will have left to argue about will be the color we paint it.
