We describe a simple method for creating word vectors that combines a distributional model trained on a text corpus with a graph embedding model trained on a semantic knowledge base. Each model grounds a word's semantics in an entirely different source of truth. Our method trains the two models separately and combines them in a straightforward manner. We show that doing so yields a small but consistent improvement on a variety of analogy tasks, particularly those requiring access to encyclopedic information. We also show improvement on the RG-65 word similarity task, which is not generally considered to require encyclopedic information, suggesting that some human similarity judgments may draw on encyclopedic knowledge when assigning similarity scores to thematically related words.
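The abstract says only that the separately trained vectors are combined "in a straightforward manner." As a minimal sketch of one such scheme (an assumption, not necessarily the paper's exact method), the corpus-derived and knowledge-base-derived vectors for a word could be L2-normalized and concatenated; the names `combine`, `w_text`, and `w_graph` are hypothetical:

```python
import numpy as np

def combine(dist_vec, graph_vec):
    """Concatenate unit-normalized vectors from the two models.

    Hypothetical combination strategy: normalizing each vector first
    keeps either model from dominating the combined representation.
    """
    d = dist_vec / np.linalg.norm(dist_vec)
    g = graph_vec / np.linalg.norm(graph_vec)
    return np.concatenate([d, g])

# Toy vectors standing in for a corpus embedding and a graph embedding.
w_text = np.array([0.3, -1.2, 0.8])
w_graph = np.array([1.0, 0.1])
v = combine(w_text, w_graph)
print(v.shape)  # (5,)
```

Because both halves are unit vectors, cosine similarity on the concatenation averages the similarity signals of the two underlying models equally.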