Transformations is a family of modules built to perform user-defined, arbitrary data transformations. It is particularly suited to import/export tasks (e.g. for CSV or XML data), but given the right set of operations, it can be used for pretty much anything.
Transformations is different from other import/export modules because it works on a lower level than they do. The advantage of this approach is greater flexibility over the data flow - you can freely mix and match your data sources, transformations, and target data sinks the way you need them. The downside is that it also results in more complexity, both for the module in providing a nice user interface and for the user in building the required transformations. Like Rules or maybe Views, Transformations is one of those "abstract" modules that are not fitted to any specific use case; instead, the module is what you make of it.
The big idea
At the core of Transformations, there are pipelines and operations. (There are also data wrappers, but those are mostly interesting for developers and less so for end users.) An operation is a piece of code that takes a number of data elements as input and provides a number of data elements as output upon successful execution. Operations specify the set of required and returned data elements upfront; these are called input and output slots, respectively.
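To make the idea of input and output slots concrete, here is a minimal conceptual sketch in Python. This is not the module's real API (Transformations is written in PHP); the `Operation` class, its fields, and the `split_csv` example are all hypothetical, chosen only to illustrate how an operation declares its slots upfront and honors them on execution.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch: an operation declares its required input slots
# and promised output slots upfront, plus the code that maps one to
# the other.
@dataclass
class Operation:
    name: str
    input_slots: list
    output_slots: list
    func: Callable[[dict], dict]

    def execute(self, inputs: dict) -> dict:
        # Every required input slot must be supplied by the caller.
        missing = [s for s in self.input_slots if s not in inputs]
        if missing:
            raise ValueError(f"missing inputs: {missing}")
        outputs = self.func(inputs)
        # The operation returns exactly its declared output slots.
        return {slot: outputs[slot] for slot in self.output_slots}

# Illustrative operation: split one CSV line into a list of fields.
split_csv = Operation(
    name="split_csv",
    input_slots=["line"],
    output_slots=["fields"],
    func=lambda inp: {"fields": inp["line"].split(",")},
)

print(split_csv.execute({"line": "a,b,c"}))  # -> {'fields': ['a', 'b', 'c']}
```

Declaring slots upfront is what lets a pipeline builder check, before anything runs, whether two operations can legally be wired together.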
Operations are the building blocks for pipelines - a pipeline combines multiple operations and connects an operation's output slot with another operation's input slot (as long as those slots are compatible). A pipeline can also expect data to be provided from the outside (known as pipeline parameters) and provide some of the operation outputs as output of the pipeline itself (known as pipeline outputs). That way, a pipeline can itself be used as a single operation in other pipelines. Consequently, it is possible to build a personal library of pipelines and combine several small transformations into a larger one.
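The wiring described above can be sketched as follows. Again, this is a hypothetical illustration rather than the module's actual API: the `Pipeline` class, the `"param.<name>"` / `"<step>.<slot>"` source notation, and the two toy operations are all invented for this example. The point is how pipeline parameters feed the first operations, how one step's output slot becomes another step's input slot, and how selected outputs are exposed as outputs of the pipeline itself.

```python
# Two toy operations, each a function from input slots to output slots.
def uppercase(inputs):
    return {"upper": inputs["text"].upper()}

def exclaim(inputs):
    return {"shouted": inputs["text"] + "!"}

class Pipeline:
    """Hypothetical pipeline: runs steps in order, wiring each input
    slot to either a pipeline parameter or an earlier step's output."""

    def __init__(self):
        self.steps = []    # (step_name, func, input_wiring)
        self.outputs = {}  # pipeline output slot -> "step.slot" source

    def add(self, name, func, wiring):
        # wiring maps the operation's input slots to sources, written
        # as "param.<name>" or "<step>.<slot>".
        self.steps.append((name, func, wiring))

    def expose(self, out_slot, source):
        # Declare one of the operation outputs as a pipeline output.
        self.outputs[out_slot] = source

    def execute(self, params):
        values = {f"param.{k}": v for k, v in params.items()}
        for name, func, wiring in self.steps:
            inputs = {slot: values[src] for slot, src in wiring.items()}
            for slot, val in func(inputs).items():
                values[f"{name}.{slot}"] = val
        return {slot: values[src] for slot, src in self.outputs.items()}

pipe = Pipeline()
pipe.add("up", uppercase, {"text": "param.text"})   # fed by a parameter
pipe.add("bang", exclaim, {"text": "up.upper"})     # fed by a prior step
pipe.expose("result", "bang.shouted")

print(pipe.execute({"text": "hello"}))  # -> {'result': 'HELLO!'}
```

Because `execute` takes named inputs and returns named outputs, a `Pipeline` here has the same shape as a single operation, which is what makes nesting pipelines inside other pipelines possible.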
In the end, you've got a series of operations connected to your liking, and you execute a pipeline with some input data to retrieve the output data that the pipeline was configured to produce.
Yes, that sounds rather abstract, sorry about that. In practice, the whole thing is easier than it sounds: have a look at the examples in the HOWTO pages of this documentation.
In the broader software universe, this way of working is often described as ETL ("extract, transform, load") although unlike other ETL systems, Transformations is not database/row/column-centered, which gives it a couple of different characteristics.
Transformations itself consists of two base modules, Transformations API and Transformations UI. The API module can be used on its own by other modules, so instead of the block-based user interface provided by Transformations UI, it would also be possible to have wizard-like interfaces tailored to specific tasks, such as Node Import or Feed Element Mapper. Depending on which data sources you need to handle, you will need to install extension modules like CSV Transformations, XML Transformations or Drupal Transformations. Those modules provide additional operations for importing from, modifying and exporting to the respective data formats.
Splitting out these data-format-specific operations into separate modules is intended to avoid the maintenance issues that Import/Export API encountered, and provides examples and a clear path for other developers who want to contribute their own extensions.