Scale Logic, Inc.

1-855-440-4678

Scale-Out NAS

HyperFS SAN Management

The HyperFS Scale-Out NAS application module integrates seamlessly with the HyperFS SAN application and features simple, GUI-based configuration and management tools. Like HyperFS, Scale-Out NAS is built on an open architecture leveraging common protocols such as NFS, SMB/CIFS, WebDAV, HTTP(S), and FTP(S), which allows it to present block-level SAN storage as file-based storage to LAN clients (Mac, Windows, and Linux).

Within a single global namespace, Scale-Out NAS virtualizes the underlying storage layer and creates resources that can be adjusted dynamically to business needs. Bandwidth and capacity can be adjusted independently and expanded instantly, with dynamic failover (HA) supported across up to 64 Scale-Out NAS nodes.

Example of workflow using Scale-Out NAS

Scale-Out NAS Storage Architecture featuring HyperFS

Cost-effective TCO: Unlike other SAN file systems, our LAN clients are based on open protocols, so workflow tuning is GUI-based rather than requiring expensive command-line professional-services engagements and the associated downtime. Finally, our licensing model for LAN clients is based on the total bandwidth required, not the total number of LAN clients, so far more LAN clients can be supported within the same budget.

Scale-Out NAS Gateway FAQs

What redundancy mechanisms do Scale-Out NAS nodes have?

Server blades have mirrored boot drives and redundant power. Server NIC bonding, easily configured in the NAS GUI, maintains uptime in case of an Ethernet cable or NIC failure. To guard against software failure, a daemon on each NAS server monitors the cluster status. If the daemon goes down or behaves abnormally, the cluster service migrates to the surviving nodes. The cluster service also migrates on bonded-Ethernet failure, power reset, power-off, or when a single node is stopped in software for maintenance.
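The failover behavior described above can be sketched in a few lines of Python. This is an illustrative model only, not Scale Logic's implementation; the node names, service names, and the least-loaded migration policy are assumptions for the example.

```python
# Illustrative sketch of cluster-service failover (NOT the actual HyperFS
# daemon): services hosted on a failed node are migrated to the surviving
# node that currently hosts the fewest services.

class Node:
    def __init__(self, name):
        self.name = name
        self.alive = True        # heartbeat daemon status
        self.services = []       # cluster services currently hosted here

def check_and_migrate(nodes):
    """Move services off any node whose monitoring daemon is down."""
    survivors = [n for n in nodes if n.alive]
    if not survivors:
        raise RuntimeError("no surviving nodes in the cluster")
    for node in nodes:
        if not node.alive and node.services:
            # pick the least-loaded survivor (assumed policy)
            target = min(survivors, key=lambda n: len(n.services))
            target.services.extend(node.services)
            node.services.clear()
    return nodes

# Example: nas1 stops responding; its SMB and NFS services move to nas2.
nas1, nas2 = Node("nas1"), Node("nas2")
nas1.services = ["smb", "nfs"]
nas1.alive = False
check_and_migrate([nas1, nas2])
```

The same logic covers planned maintenance: marking a node not-alive in software triggers the identical migration path as a hard failure.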

How are the IP virtual addresses implemented?

At least one IP address is provided per NAS node, though a larger range of IPs can be assigned; on node failure, the affected IP address(es) move to a surviving node. DNS is used for mounting, and round-robin DNS distributes client connections across nodes for ease of use and load balancing.
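The virtual-IP reassignment can be sketched as follows. This is a minimal model, not the product's code; the node names and 192.0.2.x addresses (a documentation-reserved range) are placeholders, and the round-robin redistribution policy is an assumption.

```python
# Illustrative sketch of virtual-IP failover: each node owns one or more
# virtual IPs; when a node fails, its IPs are spread round-robin over the
# surviving nodes so existing client mounts keep resolving.

def reassign_vips(vip_map, failed_node):
    """Return a new {node: [vip, ...]} map with the failed node removed
    and its virtual IPs redistributed across the survivors."""
    orphaned = vip_map.get(failed_node, [])
    survivors = [n for n in vip_map if n != failed_node]
    if not survivors:
        raise RuntimeError("no surviving nodes to take over VIPs")
    new_map = {n: list(vips) for n, vips in vip_map.items()
               if n != failed_node}
    for i, vip in enumerate(orphaned):
        new_map[survivors[i % len(survivors)]].append(vip)
    return new_map

# Example: three nodes, one VIP each; nas2 fails and its VIP moves.
vips = {"nas1": ["192.0.2.11"],
        "nas2": ["192.0.2.12"],
        "nas3": ["192.0.2.13"]}
after = reassign_vips(vips, "nas2")
```

Because clients mount by DNS name rather than by a specific node IP, this reshuffling is transparent to them.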

How does Scale-Out NAS access files?

Each Scale-Out NAS node mounts a HyperFS file system locally to a local /fs_share directory over a multipath Fibre Channel network. Bonded Ethernet is used for the metadata network, which receives file system metadata from the two HA metadata servers. Metadata is read over Ethernet; file data is read over FC.

How is Scale-Out NAS integrated with the customer's DNS, and what role does the customer's DNS play in Scale-Out NAS functionality?

Scale-Out NAS offers some flexibility in how it is configured, but typically, when the end customer already has DNS in place, we configure the NAS cluster with DNS enabled, add a 'Conditional Forwarder' on the customer's DNS server, and then mount the NAS via DNS. With this method, no changes or additions are needed in the client computers' DNS settings.
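On a BIND-based DNS server, a conditional forwarder can be expressed as a forward zone like the sketch below; the zone name and forwarder addresses are placeholders, and Windows DNS exposes the equivalent through its "Conditional Forwarders" GUI.

```
// Hypothetical named.conf fragment: forward all lookups for the NAS
// cluster's zone to the cluster's own DNS service (placeholder IPs).
zone "nas.example.com" {
    type forward;
    forward only;
    forwarders { 192.0.2.11; 192.0.2.12; };
};
```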

Downloads & Related Materials

HyperFS

HyperFS Licensing Procedure

HyperFS Supported Platform Guide

Rorke 2.1 to SLI 3.xx Upgrade of HyperFS

Contact Us to learn more! Or simply call 1-855-440-4678.