I'm having a hard time figuring out how to display a list of objects using typeahead with a json file as the source. None of my data is being displayed.
I want to list the names, and use the other attributes for other things when selected.
../data/test.json
[
{"name": "John Snow", "id": 1},
{"name": "Joe Biden", "id": 2},
{"name": "Bob Marley", "id": 3},
{"name": "Anne Hathaway", "id": 4},
{"name": "Jacob deGrom", "id": 5}
]
test.js
$(document).ready(function() {
var names = new Bloodhound({
datumTokenizer: Bloodhound.tokenizers.whitespace("name"),
queryTokenizer: Bloodhound.tokenizers.whitespace,
prefetch: {
url: '../data/test.json'
}
});
names.initialize();
$('#test .typeahead').typeahead({
name: 'names',
displayKey: 'name',
source: names.ttAdapter()
});
});
test.html
<div id="test">
<input class="typeahead" type="text">
</div>
**And if someone can explain to me what the datumTokenizer and queryTokenizer are, that would be awesome**
The prefetch call fetches and parses the JSON file, but the Bloodhound suggestion engine expects an array of datum objects in a particular shape. A filter on the prefetch declaration lets you transform the parsed response into those datums:
prefetch: {
    url: '../data/test.json',
    filter: function(names) {
        return $.map(names, function(name) {
            return { name: name.name, id: name.id };
        });
    }
}
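To see what such a filter does, here is a plain-JavaScript sketch of the transformation (using Array.prototype.map in place of $.map, with the parsed response hard-coded for illustration):

```javascript
// Parsed prefetch response, as jQuery would hand it to the filter.
var response = [
  { name: 'John Snow', id: 1 },
  { name: 'Bob Marley', id: 3 }
];

// The filter maps each raw entry to the datum object Bloodhound indexes.
function filter(names) {
  return names.map(function (name) {
    return { name: name.name, id: name.id };
  });
}

var datums = filter(response);
console.log(datums[1].name); // → "Bob Marley"
```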
As for the "datumTokenizer", its purpose is to determine how the datums (i.e. the suggestion values) should be tokenized. It is these tokens which are then used to find a match with the input query.
For instance:
Bloodhound.tokenizers.obj.whitespace("name")
This takes a datum object, reads its name property, and splits it on whitespace, e.g. "Bob Marley" is split into the two tokens "Bob" and "Marley". (Note that for object datums you need Bloodhound.tokenizers.obj.whitespace("name") rather than Bloodhound.tokenizers.whitespace("name") — the plain whitespace tokenizer expects a string, so passing it "name" directly just tokenizes the literal string "name".)
You can see how the whitespace tokenizer works by viewing the typeahead source:
function whitespace(str) {
str = _.toStr(str);
return str ? str.split(/\s+/) : [];
}
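You can reproduce that behaviour with a standalone sketch (the function below mirrors the whitespace tokenizer shown above, with a plain String() conversion standing in for typeahead's internal _.toStr helper):

```javascript
// Standalone copy of the whitespace tokenizer's logic.
function whitespace(str) {
  str = String(str == null ? '' : str); // stand-in for _.toStr
  return str ? str.split(/\s+/) : [];
}

console.log(whitespace('Bob Marley')); // → [ 'Bob', 'Marley' ]
console.log(whitespace(''));           // → []
```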
Note how it splits the datum using the whitespace regex (\s+, i.e. one or more whitespace characters).
Similarly, the "queryTokenizer" determines how to tokenize the search term. Again, in your example you are using the whitespace tokenizer, so a query of "Bob Marley" will produce the tokens "Bob" and "Marley".
Hence with the tokens determined, if you were to search for "Marley", a match would be found for "Bob Marley".
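As a rough mental model (a deliberate simplification, not Bloodhound's actual search-index implementation), a datum matches when every query token is a prefix of at least one of the datum's tokens:

```javascript
// Same whitespace tokenizer as above.
function whitespace(str) {
  return str ? str.split(/\s+/) : [];
}

// Simplified matching model: every query token must prefix-match
// at least one of the datum's tokens (case-insensitively).
function matches(query, datumTokens) {
  return whitespace(query).every(function (q) {
    return datumTokens.some(function (t) {
      return t.toLowerCase().indexOf(q.toLowerCase()) === 0;
    });
  });
}

var tokens = whitespace('Bob Marley');
console.log(matches('Marley', tokens)); // → true
console.log(matches('Mar', tokens));    // → true
console.log(matches('Anne', tokens));   // → false
```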