
Warning: This is a thinking-out-loud post, where I try to elucidate my understanding of statistics and ways of showing evidence for causation.
I was recently asked to do a class on literature review and, as per usual, I decided to cover the concept of citation chaining.
In 2015, Marshall Breeding, the guru of library technology and discovery, summarized the state of library discovery in a NISO white paper entitled "The Future of Library Discovery".
In a 2012 blog post titled "Introducing the Knowledge Graph: things, not strings", Google announced that it would be augmenting Google results not just with links but also with results from the Google Knowledge Graph.
Earlier this year, I blogged about how new citation indexes were impacting science mapping tools such as Citespace, VOSviewer, Citation Gecko and more.
When you have been in the library industry for a while, you start noticing patterns.
I've always been a big fan of Summon's Best Bets feature and Primo's Resource Recommender. If you are unfamiliar with the idea, these features allow the librarian to augment the search results users get in Summon or Primo with additional custom messages or recommendations.
Academic-related browser extensions: browser extensions (Scite/Scholarcy) and improvements to Zotero
For a long time, there were just two main citation sources with data that could claim to be relatively comprehensive across disciplines, namely Web of Science and Scopus.
Where I speculate about the concepts of skill caps and performance caps, and how the views librarians implicitly hold on these two dimensions affect how they engage with them.
In this week's blog entry, I cover three news stories of interest that share one common thread: issues affecting the measurement of journal usage.