When it comes to analyzing anything, the more organized and structured the subject is, the easier it becomes to conceptualize information about it and to limit certain self-perceptions and biases. Tidy data assists historians in this way by narrowing their focus to the core points of interest pertinent to a broader narrative, event, or system. All of this, however, depends heavily on the structural integrity of the primary source, that is, the received data. If the received data lacks its own structure and organization, the historian's modern bias is imposed, to some extent inadvertently, in crafting the data into a more understandable format. A primary source is reflected far more truthfully when it contains its own categorized elements representative of the subject and its time frame. When faced with gaps in the information, however, analysts create derived data: data produced by combining the received data with outside points of information to fill those gaps. When asking historical questions, it is imperative to discern what is received and what is derived.

Because each analyst interacts with data through a unique perceptual lens, a primary source can be skewed and manipulated beyond its raw, intended values. It then becomes the responsibility of the researcher to infer what they can from received data in the subject's best interest. In other words, how can we as historical researchers protect the historical integrity of a primary data source while imposing our own modern narrative structure, in a way that both presents the historical evidence in a truthful light and reveals additional information for future analysts? Our encounter with contemporary data, derived data, and metadata must also reflect this goal, since our responsibility to the primary source can be validated through the work of secondary research. Only then can we continue in our own research and analysis to arrive at our own unique perceptions and findings.
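The distinction between received and derived data can be kept explicit in practice. The sketch below (a hypothetical example; the field names and sources are invented for illustration, not drawn from any actual dataset) tags each value in a tidy record with its origin, so a future analyst can tell what was transcribed from the primary source and what was inferred from outside corroboration.

```python
# Hypothetical record: values transcribed directly from the primary source.
# A None marks an information hole in the received data.
received = {"name": "J. Smith", "birth_year": None, "death_year": 1871}

# Derived data: a gap filled by corroborating outside information,
# labeled with its provenance rather than silently merged in.
derived = {"birth_year": {"value": 1803, "source": "parish register (inferred)"}}

def resolve(received, derived):
    """Merge received and derived fields, tagging each value's origin."""
    record = {}
    for field, value in received.items():
        if value is not None:
            record[field] = {"value": value, "origin": "received"}
        elif field in derived:
            record[field] = {
                "value": derived[field]["value"],
                "origin": "derived: " + derived[field]["source"],
            }
        else:
            # Leave the hole visible instead of papering over it.
            record[field] = {"value": None, "origin": "unknown"}
    return record

record = resolve(received, derived)
```

Here `record["birth_year"]` carries both the inferred value and a note that it is derived, preserving the integrity of the received data while still making the filled gap usable.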