Agents and Their Optimal Thresholds

How should the next generation of the Web (or Web2) develop? Some talk of semantic capacities, others of ontologies, many of agents able to manage one or the other. The truth is that there is not much difference, since any semantics available and usable by the sort of artificial agents we can actually engineer is really a matter of ontology, i.e. of producing huge, machine-readable catalogues and inventories of the environments in which they operate, and of the "furniture" of those environments that they need to handle and interact with.

The hope is that agents may autonomously aggregate into societies that can, as macro-agents, combine their individual functions to perform increasingly complex and demanding tasks, in view of more ambitious goals.

It is not easy. On the one hand, coarse ontologies are easier to implement but less useful. On the other, the most useful ontologies are the finely grained ones, but these are also the hardest to manage. One needs to find the right "level of abstraction", or granularity, that balances the complexity of construction against efficiency in application.

Apparently, it is now possible to identify such an optimal threshold (click on the title of this blog). A recent study of how well agents perform at increasingly detailed levels of abstraction showed that the rate of improvement in agents' performance varies across levels: performance rises slowly at the two extremes (very high and very low levels of abstraction), but there appears to be a steep rise in the middle. This suggests that there may be effective levels of abstraction, which could provide a method for balancing costs and benefits.
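To make the idea concrete, here is a purely illustrative sketch, not the study's actual model: assume (hypothetically) that performance rises along an S-shaped curve as granularity increases, while the cost of building and managing the ontology grows steadily. The effective level of abstraction is then the granularity that maximizes net benefit.

```python
import math

def performance(g):
    # Hypothetical S-shaped (logistic) performance curve: slow gains at the
    # coarse and fine extremes of granularity g in [0, 1], steep rise in
    # the middle -- the shape the study reportedly observed.
    return 1 / (1 + math.exp(-12 * (g - 0.5)))

def cost(g):
    # Assumption: construction and management cost grows with granularity.
    return 0.6 * g

def net_benefit(g):
    # Balancing costs and benefits: the quantity to maximize.
    return performance(g) - cost(g)

# Simple grid search for the granularity with the best cost-benefit balance.
best = max((i / 100 for i in range(101)), key=net_benefit)
print(f"optimal granularity on this toy model: {best:.2f}")
```

On these assumed curves the optimum lands well past the midpoint but short of maximal fineness, capturing the intuition that the best level of abstraction is neither the coarsest nor the finest available.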

The question now left to the philosopher is whether there is an effective threshold in biological agents too, one that constrains their cognitive development. To put it more simply: is it the case that the world is what it is and that we have adapted to perceive it as it is? Or is it our specific thresholds in the cost-effective evolution of our embodied and hard-wired levels of abstraction (e.g. in the sorts of light, sound, heat, etc. that we are able to perceive and process) that make our world what we perceive as the world?
