We have used AI at work for a number of years now, but mostly simpler models, not deep language models like ChatGPT. We use it primarily for classification on vision and range datasets, for problems where algorithmic approaches are difficult to develop. AI has made classifying certain features in an image much easier in some cases, but it comes at a cost. The first cost is that we need a separate GPU for each "feature" we try to classify, which drives up the cost of a system significantly, with the GPUs now accounting for 80% or more of the cost of the system. The second cost is that the AI versions of feature detection take far more computing power than the straight algorithmic approaches; this takes additional time and power, and for the problems we are trying to solve, time is a limited commodity. The final cost is the cost of training the AI: we hire skilled people to classify the features in our image datasets, which is mentally difficult, tedious work, and finding people suited to do it is hard.
The large publicly accessible AIs that most people are talking about now have similar issues. They require high-end computing equipment to process the input data, which consumes both significant material resources and a lot of power. And all these models need to be trained as well. The public models are trained primarily on data from the web, and as others have mentioned, this means they are almost guaranteed to be using data without permission and without credit to the people who created it.
The various AI models need to be considered tools. They can be useful in some circumstances, and they are quite good at classifying, but their "creative" output is content that appeals to the mass of internet users based on statistical approaches. That output can surface links or associations that humans have not considered, but the models cannot truly create; they have only a collective back history. They are not trying to tell a story, they are just feeding back what vast numbers of people told them was appropriate. They also do not handle novel situations well, and they can be "biased" easily; both problems are difficult to control.
My expectation is that we will see AI's use expand greatly in the near term as large companies with deep pockets look to build business models around it. This will lead to a "blanding", or vanillification, of content, with people eventually seeking out human-created content specifically. I also see it being incorporated into a number of industries, some of which will improve with its use and others of which will fail miserably. Right now we are in the hype stage of this technology; in the more distant future I still see it being used extensively, but market forces will limit it to the areas where it makes sense. I hope it doesn't take over the "human" aspects of our lives; if we let it, I believe our lives will become a lot smaller and less significant.