Old and new media
Access and use across the full breadth of the collection
Instead of typing keywords into a search bar, we would rather chat with a (virtual) librarian and express our 'information need', or our interest in certain content in the archive, in a conversation. At the Netherlands Institute for Sound & Vision, we investigate several use cases in the area of Spoken Conversational Search (SCS) as a means to guide access to our rich, but also large and heterogeneous, audiovisual archive. In our LABS environment, we offer search APIs built on top of indices of rich metadata; these can be connected to a dialogue system that uses keyboard or speech input to communicate with users online and in our museum.
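As a rough illustration of how such a coupling might look, the minimal sketch below forwards a (transcribed) user utterance to a search API and prints the hits. The endpoint URL, query parameters, and response fields are assumptions for illustration only, not the actual LABS API.

```python
import requests

# Hypothetical search endpoint; the real URL, parameters and response
# schema of the Sound & Vision LABS APIs will differ.
SEARCH_API = "https://labs.example.org/api/search"

def search_archive(utterance: str, max_results: int = 5) -> list[dict]:
    """Turn a (transcribed) user utterance into a simple keyword query.

    In a real SCS system the utterance would first pass through intent
    detection and query generation; here we just forward the raw text.
    """
    response = requests.get(
        SEARCH_API,
        params={"q": utterance, "rows": max_results},
        timeout=10,
    )
    response.raise_for_status()
    return response.json().get("results", [])

if __name__ == "__main__":
    for item in search_archive("interviews about the 1953 flood"):
        print(item.get("title"), "-", item.get("identifier"))
```

In a full dialogue system this call would sit behind the dialogue manager, which decides when to search, when to ask a clarifying question, and how to present results in speech or text.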
Research questions we envision include:
- how to represent the 'librarian's knowledge' of the contents of the archive so that it can be used in an SCS system?
- which type of use benefits the most from an SCS approach?
- what are the ethical and legal aspects of conversational AI?
- how to encode multimodal dialogue state and model the evolving 'information need' of a user who communicates with the system and receives information from it? (see the sketch after this list)
- would it already be possible to develop a 'convincing system' for museum visitors with a simple approach that uses a search API or the Open Images metadata set?
- how to personalise a dialogue based on additional information sources (e.g., a camera that spots a child)?
- how to optimise voice processing of a dialogue system in a museum environment?
- how to implement a search dialogue in a VR environment with the librarian as an omnipresent oracle?
- could we enrich the information of archival content by using the information we obtain from the human-computer dialogue?
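To make the question about dialogue state a little more concrete, the sketch below encodes an evolving 'information need' as a small data structure that accumulates constraints over turns. The field names and the way constraints are merged are assumptions chosen to illustrate the modelling problem, not a proposed solution.

```python
from dataclasses import dataclass, field

@dataclass
class InformationNeed:
    """Evolving picture of what the user is looking for (illustrative only)."""
    topics: set[str] = field(default_factory=set)       # e.g. {"flood", "1953"}
    media_types: set[str] = field(default_factory=set)  # e.g. {"video"}
    excluded: set[str] = field(default_factory=set)     # things the user rejected

@dataclass
class DialogueState:
    """State carried across the turns of one conversation."""
    need: InformationNeed = field(default_factory=InformationNeed)
    shown_items: list[str] = field(default_factory=list)  # identifiers already presented
    turn: int = 0

    def update(self, new_topics: set[str], rejected: set[str] = frozenset()) -> None:
        """Merge the constraints expressed in the latest user turn."""
        self.turn += 1
        self.need.topics |= new_topics
        self.need.excluded |= rejected
        self.need.topics -= rejected
```

A real system would additionally have to represent multimodal signals (speech, screen interaction, possibly camera input) and uncertainty about what the user actually meant.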
Please contact us if you are interested in engaging with our data and APIs for your research or development project. A relatively straightforward starting point could be harvesting and indexing our Open Images data set.
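As a hedged illustration of that starting point, the sketch below harvests Dublin Core records over OAI-PMH with the third-party sickle client and builds a toy in-memory keyword index. The endpoint URL is an assumption and should be verified against the Open Images documentation; a real pipeline would write into a proper search index instead of a Python dictionary.

```python
from collections import defaultdict
from sickle import Sickle  # third-party OAI-PMH client: pip install sickle

# Assumed OAI-PMH endpoint for Open Images; verify the exact URL in the
# Open Images / Open Beelden documentation before use.
OAI_ENDPOINT = "https://www.openbeelden.nl/api/oai-pmh"

def harvest_and_index(limit: int = 100) -> dict[str, list[str]]:
    """Harvest Dublin Core records and build a toy inverted index
    mapping lowercased title/description terms to record identifiers."""
    index: dict[str, list[str]] = defaultdict(list)
    records = Sickle(OAI_ENDPOINT).ListRecords(metadataPrefix="oai_dc")
    for i, record in enumerate(records):
        if i >= limit:
            break
        meta = record.metadata  # dict of Dublin Core fields -> list of values
        identifier = meta.get("identifier", ["unknown"])[0]
        text = " ".join(meta.get("title", []) + meta.get("description", []))
        for term in set(text.lower().split()):
            index[term].append(identifier)
    return index

if __name__ == "__main__":
    idx = harvest_and_index(limit=50)
    print(f"indexed {len(idx)} distinct terms")
```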