Portfolio

Incorporating world knowledge to document clustering via heterogeneous information networks

Chenguang Wang, Yangqiu Song, Ahmed El-Kishky, Dan Roth, Ming Zhang, and Jiawei Han.

In Proc. 2015 ACM SIGKDD Int. Conf. on Knowledge Discovery and Data Mining (KDD 2015).

paper code data slides video

We provide three ways to specify world knowledge for a target domain by resolving the ambiguity of entities and their types, and we represent the documents, together with the specified world knowledge, as a heterogeneous information network.
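As a rough illustration only (not the paper's pipeline, which grounds and disambiguates entities against a large knowledge base), the sketch below builds a toy heterogeneous information network with networkx, where documents and typed entities are nodes and typed edges link them; the entity and relation names are made up.

```python
# Toy sketch of documents plus world-knowledge entities as a
# heterogeneous information network (illustrative names only).
import networkx as nx

hin = nx.Graph()

# document nodes
hin.add_node("doc_1", ntype="document")
hin.add_node("doc_2", ntype="document")

# entity nodes with resolved types (hypothetical entities and types)
hin.add_node("Barack Obama", ntype="entity", etype="politician")
hin.add_node("United States", ntype="entity", etype="country")

# typed links: documents mention entities; entities relate to each other
hin.add_edge("doc_1", "Barack Obama", rel="mentions")
hin.add_edge("doc_2", "United States", rel="mentions")
hin.add_edge("Barack Obama", "United States", rel="nationality")

# documents become comparable through the entities that connect them
print(nx.shortest_path(hin, "doc_1", "doc_2"))
```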

Text classification with heterogeneous information network kernels

Chenguang Wang, Yangqiu Song, Haoran Li, Ming Zhang and Jiawei Han.

In Proc. 2016 AAAI Conf. on Artificial Intelligence (AAAI 2016).

paper code data slides

This paper presents a novel text-as-network classification framework, which introduces a structured and typed heterogeneous information network (HIN) representation of texts and a meta-path based approach to linking texts.
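To illustrate the meta-path idea (a sketch on toy data, not the paper's HIN kernel implementation), the commuting matrix of the meta-path Document -> Entity -> Document can be computed from a document-by-entity incidence matrix; its entries count the entities two documents share and can serve as one building block of a document similarity kernel.

```python
import numpy as np

# toy document-by-entity incidence matrix W (values and entities are made up):
# W[i, j] = 1 if document i mentions entity j
W = np.array([
    [1, 1, 0],
    [0, 1, 1],
    [1, 0, 0],
])

# commuting matrix of the meta-path Document -> Entity -> Document:
# entry (i, j) counts entities shared by documents i and j
M = W @ W.T
print(M)
```

Similarity matrices over several such meta-paths (e.g., through entities of different types) can then be combined and used with a kernel classifier such as an SVM.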

Crowd-in-the-loop: A hybrid approach for annotating semantic roles

Chenguang Wang, Alan Akbik, Laura Chiticariu, Yunyao Li, Fei Xia, and Anbang Xu.

In Proc. 2017 Conf. on Empirical Methods in Natural Language Processing (EMNLP 2017).

paper data slides

Our experimental evaluation shows that the proposed approach reduces the workload of expert annotators by over two-thirds, and thus significantly reduces the cost of producing semantic role labeling (SRL) annotations with little loss in quality.

Language models with Transformers

Chenguang Wang, et al.

In arXiv preprint arXiv:1904.09408 (arXiv 2019).

paper code slides

The work has received more than 4.4k blog views and more than 320 likes and retweets on Twitter. Experimental results on PTB, WikiText-2, and WikiText-103 show that the proposed method achieves perplexities between 20.42 and 34.11 on these benchmarks, an average improvement of 12.0 perplexity units over state-of-the-art LSTMs.
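For context on the metric: perplexity is the exponential of the average per-token negative log-likelihood, so lower is better. A minimal sketch with made-up token losses:

```python
import math

# made-up per-token negative log-likelihoods (natural log) from a language model
nlls = [3.3, 2.9, 3.6, 3.0]

# perplexity = exp(average negative log-likelihood per token); lower is better
ppl = math.exp(sum(nlls) / len(nlls))
print(f"perplexity = {ppl:.2f}")
```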

Language Models are Open Knowledge Graphs

Chenguang Wang, Xiao Liu, and Dawn Song.

In arXiv preprint arXiv:2010.11967 (arXiv 2020).

paper slides

What is the relationship between deep language models (e.g., BERT, GPT-2, GPT-3) and knowledge graphs? Can we use pre-trained deep language models to construct knowledge graphs? We find that we can: the generated knowledge graphs not only cover the knowledge already present in existing knowledge graphs such as Wikidata, but also contain new, open factual knowledge.
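As a loose illustration only (a cloze-style probe with the Hugging Face transformers library, not the paper's knowledge-graph construction method), a pre-trained masked language model readily surfaces factual completions of the kind that could seed knowledge-graph triples; the probed fact below is just an example.

```python
# Cloze-style probe of factual knowledge in a pre-trained masked LM.
# This only shows that such models encode facts; it is not the paper's
# extraction pipeline.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-cased")

# probe a candidate triple (Bob Dylan, born_in, ?)
for pred in unmasker("Bob Dylan was born in [MASK].")[:3]:
    print(pred["token_str"], round(pred["score"], 3))
```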