First steps

Next Steps

You may be wondering... "OK, so now what?"

Let's walk through the complete setup and workflow for getting started with Gigantics.

Initial Setup

First, ensure you've completed these essential setup steps:

  1. Installed Gigantics - Downloaded and installed the Gigantics platform on your server

  2. Installed MongoDB - Set up and configured MongoDB as the backend database for Gigantics (a quick connectivity check is sketched after this list)

  3. Started the app (in setup mode) - Launched Gigantics for the first time to initialize the configuration

  4. Initial web setup - Visited https://yourserver:5000/ to complete the first-time Gigantics configuration, including:

    • Setting up administrator credentials
    • Configuring basic system settings
    • Validating your installation
  5. Logged in and created an organization - Logged back into https://yourserver:5000/ and created your first Organization and Project
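
If you want to sanity-check the environment around steps 2-4, a short script can confirm that MongoDB is reachable and that the Gigantics web interface is answering on port 5000. This is a minimal sketch, assuming MongoDB is listening on its default localhost:27017, that the pymongo and requests packages are installed, and that yourserver is the same placeholder hostname used above.

```python
# Minimal environment check around the first-time web setup.
# Assumptions: MongoDB on localhost:27017 (default) and Gigantics on https://yourserver:5000/.
# Requires: pip install pymongo requests

from pymongo import MongoClient
import requests


def check_mongodb(uri="mongodb://localhost:27017", timeout_ms=3000):
    """Return True if MongoDB answers a ping within the timeout."""
    client = MongoClient(uri, serverSelectionTimeoutMS=timeout_ms)
    try:
        client.admin.command("ping")
        return True
    except Exception as exc:
        print(f"MongoDB check failed: {exc}")
        return False


def check_gigantics(url="https://yourserver:5000/", timeout=5):
    """Return True if the Gigantics web interface responds at all."""
    try:
        # verify=False only because a fresh install often uses a self-signed certificate.
        response = requests.get(url, timeout=timeout, verify=False)
        return response.status_code < 500
    except requests.RequestException as exc:
        print(f"Gigantics check failed: {exc}")
        return False


if __name__ == "__main__":
    print("MongoDB reachable:", check_mongodb())
    print("Gigantics reachable:", check_gigantics())
```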

After your first configuration is finished, you'll see the Login screen. At first, Gigantics will be empty, so you'll need to create your first organization and a project before proceeding with data workflows.

Data Processing Workflow

Now it's time to connect to a data source (tap), run a scan and discovery, and set up your first data processing rule.

  1. Connect to your data sources from the Data Sources → Tap page. Here you can configure connections to your source databases like PostgreSQL, MySQL, MongoDB, SQL Server, Oracle, and other supported database systems.

  2. Add a sink from the Data Sources → Sinks page (recommended). Sinks are destination endpoints where your processed data will be loaded: another database, a data warehouse, or an analytics platform.

  3. Discover your tap using the Discovery overview. This step scans your connected data source to analyze table structures, data types, and relationships, and to identify potentially sensitive information like PII (Personally Identifiable Information); a conceptual sketch of this kind of scan follows this list.

  4. Create a new rule in the Model rules section and add operations. Rules define how your data should be transformed, masked, or synthesized (a simple masking sketch also follows this list). You can add operations for:

    • Data masking and anonymization
    • Data type conversion
    • Filtering and subsetting
    • Custom transformations
  5. Run the rule and load the resulting data to a sink or create a new dataset. This executes your data processing pipeline and moves the transformed data to its destination.
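
To give a feel for what the Discovery step does conceptually, the sketch below scans sampled column values with simple regular expressions to flag likely PII. It is only an illustration of the general technique; it is not how Gigantics implements discovery, and the column names and sample data are hypothetical.

```python
import re

# Illustrative only: naive regex-based PII detection over sampled column values.
# Gigantics' own discovery is more sophisticated; columns and data here are made up.
PII_PATTERNS = {
    "email": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
    "phone": re.compile(r"^\+?[\d\s\-()]{7,15}$"),
    "credit_card": re.compile(r"^\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}$"),
}


def classify_column(values, threshold=0.8):
    """Return the PII label whose pattern matches most sampled values, if above the threshold."""
    non_empty = [v for v in values if v]
    if not non_empty:
        return None
    for label, pattern in PII_PATTERNS.items():
        matches = sum(1 for v in non_empty if pattern.match(str(v)))
        if matches / len(non_empty) >= threshold:
            return label
    return None


# Hypothetical sample pulled from a tap during discovery.
sample = {
    "customer_email": ["ana@example.com", "joe@example.org", "mia@example.net"],
    "signup_notes": ["first order", "", "referred by a friend"],
}

for column, values in sample.items():
    label = classify_column(values)
    print(f"{column}: {label or 'no PII detected'}")
```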
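
Similarly, the next sketch shows what a basic masking operation can amount to: deterministically hashing an email column so values stay consistent across tables while no longer revealing the real address. Again, this is a generic illustration under assumed column names, not the actual Gigantics rule engine.

```python
import hashlib

# Illustrative masking operation: deterministic hashing keeps referential consistency
# (the same input always maps to the same output) while hiding the original value.
# Column names and rows are hypothetical.


def mask_email(value, salt="project-salt"):
    """Replace an email with a stable pseudonymous address."""
    digest = hashlib.sha256((salt + value).encode("utf-8")).hexdigest()[:12]
    return f"user_{digest}@masked.example"


def apply_rule(rows, operations):
    """Apply each (column, function) operation to every row."""
    for row in rows:
        for column, func in operations:
            if column in row and row[column] is not None:
                row[column] = func(row[column])
    return rows


rows = [
    {"id": 1, "customer_email": "ana@example.com", "country": "ES"},
    {"id": 2, "customer_email": "joe@example.org", "country": "US"},
]

masked = apply_rule(rows, operations=[("customer_email", mask_email)])
print(masked)
```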

After a project is created, this is the general workflow you will follow for each data source.

Getting Started with Your Project

Once you have your first rule running successfully, you can expand your data processing capabilities by:

  • Creating multiple rules for different data sources and transformation requirements
  • Setting up scheduled runs to automate your data pipelines
  • Using variables and functions to create reusable transformation logic
    • Implementing data validation and quality checks (a generic example follows this list)
  • Monitoring your data processing jobs and reviewing execution logs
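
For the validation and quality checks mentioned above, even a few simple assertions over the processed data catch common problems early. The sketch below is a generic example, assuming you read the rows back from your sink yourself; it is not a built-in Gigantics feature, and the column names are hypothetical.

```python
# Generic post-load quality checks; the rows are assumed to have been
# read back from the sink, and the column names are made up for the example.


def check_row_count(rows, minimum=1):
    assert len(rows) >= minimum, f"expected at least {minimum} rows, got {len(rows)}"


def check_no_nulls(rows, column):
    missing = [r for r in rows if r.get(column) in (None, "")]
    assert not missing, f"{len(missing)} rows have an empty '{column}'"


def check_unique(rows, column):
    values = [r[column] for r in rows if column in r]
    assert len(values) == len(set(values)), f"duplicate values found in '{column}'"


rows = [
    {"id": 1, "customer_email": "user_ab12cd34ef56@masked.example"},
    {"id": 2, "customer_email": "user_9f8e7d6c5b4a@masked.example"},
]

check_row_count(rows)
check_no_nulls(rows, "customer_email")
check_unique(rows, "id")
print("all checks passed")
```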

This iterative workflow allows you to build sophisticated data anonymization and processing pipelines that meet your organization's specific requirements while maintaining data security and compliance.
